Test Report: KVM_Linux_crio 19736

c03ccee26a80b9ecde7f622e8f7f7412408a7b8a:2024-09-30:36442

Failed tests (32/311). A sketch for re-running an individual failing test locally follows the table.

Order  Failed test  Duration (s)
33 TestAddons/parallel/Registry 74.04
34 TestAddons/parallel/Ingress 151.84
36 TestAddons/parallel/MetricsServer 331.37
163 TestMultiControlPlane/serial/StopSecondaryNode 141.55
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.6
165 TestMultiControlPlane/serial/RestartSecondaryNode 6.31
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.42
167 TestMultiControlPlane/serial/RestartClusterKeepsNodes 399.73
170 TestMultiControlPlane/serial/StopCluster 141.69
230 TestMultiNode/serial/RestartKeepsNodes 324.88
232 TestMultiNode/serial/StopMultiNode 144.6
239 TestPreload 172.95
247 TestKubernetesUpgrade 438.01
283 TestPause/serial/SecondStartNoReconfiguration 76.77
313 TestStartStop/group/old-k8s-version/serial/FirstStart 289.6
339 TestStartStop/group/embed-certs/serial/Stop 139.02
341 TestStartStop/group/no-preload/serial/Stop 139.13
344 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.08
345 TestStartStop/group/old-k8s-version/serial/DeployApp 0.47
346 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 79.5
347 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
348 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
355 TestStartStop/group/old-k8s-version/serial/SecondStart 756.51
356 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.21
357 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.23
358 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.26
359 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.56
360 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 420.3
361 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 438.08
362 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 359.94
363 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 127.35
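For local triage, an individual failure from this table can usually be re-run in isolation with Go's subtest filter. The sketch below assumes a checkout of the minikube source tree and a KVM/crio environment like the one used for this run; the make target and the --minikube-start-args flag are assumptions about the integration harness rather than values recorded in this report, and additional harness flags or build tags may be required.

  # Build the binary the integration tests drive (assumed make target).
  make out/minikube-linux-amd64

  # -run takes a regex; '/' selects the subtest, e.g. the registry addon failure above.
  # --minikube-start-args is assumed to forward driver/runtime flags to 'minikube start'.
  go test -v -timeout 30m -run 'TestAddons/parallel/Registry' ./test/integration \
    -args --minikube-start-args='--driver=kvm2 --container-runtime=crio'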
TestAddons/parallel/Registry (74.04s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.882637ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-frqrv" [e66e6fb9-7274-4a0b-b787-c64abc8ffe04] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003276184s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-m2j7k" [cf0e9fcc-d5e3-4dd8-8337-406b07ab9495] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004110305s
addons_test.go:338: (dbg) Run:  kubectl --context addons-857381 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-857381 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-857381 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.0839881s)
-- stdout --
	pod "registry-test" deleted
-- /stdout --
** stderr ** 
	error: timed out waiting for the condition
** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-857381 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-857381 ip
2024/09/30 19:50:09 [DEBUG] GET http://192.168.39.16:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-857381 addons disable registry --alsologtostderr -v=1
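The wget probe recorded at addons_test.go:343 above timed out instead of returning HTTP/1.1 200. Below is a minimal manual triage sketch, assuming the addons-857381 profile is still running and the registry addon has not yet been disabled (the cleanup step above disables it); the service and endpoint checks are generic kubectl calls added here for illustration, not commands the test itself runs.

  # Re-run the same in-cluster probe the test performs (image and URL taken from the log above).
  kubectl --context addons-857381 run --rm -it registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox \
    -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

  # If it still times out, confirm the Service resolves to ready endpoints; the test
  # waited on pods labelled actual-registry=true and registry-proxy=true in kube-system.
  kubectl --context addons-857381 -n kube-system get svc registry
  kubectl --context addons-857381 -n kube-system get endpoints registry
  kubectl --context addons-857381 -n kube-system get pods -l actual-registry=true -o wide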
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-857381 -n addons-857381
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-857381 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-857381 logs -n 25: (1.346886884s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-816611 | jenkins | v1.34.0 | 30 Sep 24 19:37 UTC |                     |
	|         | -p download-only-816611                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC | 30 Sep 24 19:38 UTC |
	| delete  | -p download-only-816611                                                                     | download-only-816611 | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC | 30 Sep 24 19:38 UTC |
	| start   | -o=json --download-only                                                                     | download-only-153563 | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC |                     |
	|         | -p download-only-153563                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC | 30 Sep 24 19:38 UTC |
	| delete  | -p download-only-153563                                                                     | download-only-153563 | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC | 30 Sep 24 19:38 UTC |
	| delete  | -p download-only-816611                                                                     | download-only-816611 | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC | 30 Sep 24 19:38 UTC |
	| delete  | -p download-only-153563                                                                     | download-only-153563 | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC | 30 Sep 24 19:38 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-728092 | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC |                     |
	|         | binary-mirror-728092                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33837                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-728092                                                                     | binary-mirror-728092 | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC | 30 Sep 24 19:38 UTC |
	| addons  | disable dashboard -p                                                                        | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC |                     |
	|         | addons-857381                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC |                     |
	|         | addons-857381                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-857381 --wait=true                                                                | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC | 30 Sep 24 19:40 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:48 UTC | 30 Sep 24 19:48 UTC |
	|         | -p addons-857381                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-857381 addons disable                                                                | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:49 UTC | 30 Sep 24 19:49 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-857381 addons disable                                                                | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:49 UTC | 30 Sep 24 19:49 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:49 UTC | 30 Sep 24 19:49 UTC |
	|         | -p addons-857381                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-857381 ssh cat                                                                       | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:49 UTC | 30 Sep 24 19:49 UTC |
	|         | /opt/local-path-provisioner/pvc-2b406b11-e501-447a-83ed-ef44d83e41ee_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-857381 addons                                                                        | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:49 UTC | 30 Sep 24 19:49 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-857381 addons disable                                                                | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:49 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-857381 addons                                                                        | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:49 UTC | 30 Sep 24 19:49 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:50 UTC | 30 Sep 24 19:50 UTC |
	|         | addons-857381                                                                               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:50 UTC |                     |
	|         | addons-857381                                                                               |                      |         |         |                     |                     |
	| ip      | addons-857381 ip                                                                            | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:50 UTC | 30 Sep 24 19:50 UTC |
	| addons  | addons-857381 addons disable                                                                | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:50 UTC | 30 Sep 24 19:50 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 19:38:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 19:38:39.043134   15584 out.go:345] Setting OutFile to fd 1 ...
	I0930 19:38:39.043248   15584 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 19:38:39.043257   15584 out.go:358] Setting ErrFile to fd 2...
	I0930 19:38:39.043261   15584 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 19:38:39.043448   15584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 19:38:39.044075   15584 out.go:352] Setting JSON to false
	I0930 19:38:39.044883   15584 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1262,"bootTime":1727723857,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 19:38:39.044972   15584 start.go:139] virtualization: kvm guest
	I0930 19:38:39.046933   15584 out.go:177] * [addons-857381] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 19:38:39.048464   15584 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 19:38:39.048463   15584 notify.go:220] Checking for updates...
	I0930 19:38:39.051048   15584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 19:38:39.052632   15584 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 19:38:39.054188   15584 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:38:39.055634   15584 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 19:38:39.056997   15584 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 19:38:39.058475   15584 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 19:38:39.092364   15584 out.go:177] * Using the kvm2 driver based on user configuration
	I0930 19:38:39.093649   15584 start.go:297] selected driver: kvm2
	I0930 19:38:39.093667   15584 start.go:901] validating driver "kvm2" against <nil>
	I0930 19:38:39.093686   15584 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 19:38:39.094418   15584 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 19:38:39.094502   15584 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 19:38:39.109335   15584 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 19:38:39.109387   15584 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 19:38:39.109649   15584 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 19:38:39.109675   15584 cni.go:84] Creating CNI manager for ""
	I0930 19:38:39.109717   15584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 19:38:39.109725   15584 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 19:38:39.109774   15584 start.go:340] cluster config:
	{Name:addons-857381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-857381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 19:38:39.109868   15584 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 19:38:39.111680   15584 out.go:177] * Starting "addons-857381" primary control-plane node in "addons-857381" cluster
	I0930 19:38:39.113118   15584 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 19:38:39.113163   15584 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 19:38:39.113173   15584 cache.go:56] Caching tarball of preloaded images
	I0930 19:38:39.113256   15584 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 19:38:39.113267   15584 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 19:38:39.113567   15584 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/config.json ...
	I0930 19:38:39.113591   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/config.json: {Name:mk4745e18a242e742e59d464f9dbb1a3421bf546 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:38:39.113723   15584 start.go:360] acquireMachinesLock for addons-857381: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 19:38:39.113764   15584 start.go:364] duration metric: took 29.496µs to acquireMachinesLock for "addons-857381"
	I0930 19:38:39.113781   15584 start.go:93] Provisioning new machine with config: &{Name:addons-857381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:addons-857381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 19:38:39.113835   15584 start.go:125] createHost starting for "" (driver="kvm2")
	I0930 19:38:39.115274   15584 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0930 19:38:39.115408   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:38:39.115446   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:38:39.129988   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44615
	I0930 19:38:39.130433   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:38:39.130969   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:38:39.130987   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:38:39.131382   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:38:39.131591   15584 main.go:141] libmachine: (addons-857381) Calling .GetMachineName
	I0930 19:38:39.131741   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:38:39.131909   15584 start.go:159] libmachine.API.Create for "addons-857381" (driver="kvm2")
	I0930 19:38:39.131936   15584 client.go:168] LocalClient.Create starting
	I0930 19:38:39.131981   15584 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem
	I0930 19:38:39.238349   15584 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem
	I0930 19:38:39.522805   15584 main.go:141] libmachine: Running pre-create checks...
	I0930 19:38:39.522832   15584 main.go:141] libmachine: (addons-857381) Calling .PreCreateCheck
	I0930 19:38:39.523321   15584 main.go:141] libmachine: (addons-857381) Calling .GetConfigRaw
	I0930 19:38:39.523777   15584 main.go:141] libmachine: Creating machine...
	I0930 19:38:39.523791   15584 main.go:141] libmachine: (addons-857381) Calling .Create
	I0930 19:38:39.523944   15584 main.go:141] libmachine: (addons-857381) Creating KVM machine...
	I0930 19:38:39.525343   15584 main.go:141] libmachine: (addons-857381) DBG | found existing default KVM network
	I0930 19:38:39.526113   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:39.525972   15606 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I0930 19:38:39.526140   15584 main.go:141] libmachine: (addons-857381) DBG | created network xml: 
	I0930 19:38:39.526149   15584 main.go:141] libmachine: (addons-857381) DBG | <network>
	I0930 19:38:39.526158   15584 main.go:141] libmachine: (addons-857381) DBG |   <name>mk-addons-857381</name>
	I0930 19:38:39.526174   15584 main.go:141] libmachine: (addons-857381) DBG |   <dns enable='no'/>
	I0930 19:38:39.526186   15584 main.go:141] libmachine: (addons-857381) DBG |   
	I0930 19:38:39.526201   15584 main.go:141] libmachine: (addons-857381) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0930 19:38:39.526214   15584 main.go:141] libmachine: (addons-857381) DBG |     <dhcp>
	I0930 19:38:39.526224   15584 main.go:141] libmachine: (addons-857381) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0930 19:38:39.526232   15584 main.go:141] libmachine: (addons-857381) DBG |     </dhcp>
	I0930 19:38:39.526241   15584 main.go:141] libmachine: (addons-857381) DBG |   </ip>
	I0930 19:38:39.526248   15584 main.go:141] libmachine: (addons-857381) DBG |   
	I0930 19:38:39.526254   15584 main.go:141] libmachine: (addons-857381) DBG | </network>
	I0930 19:38:39.526262   15584 main.go:141] libmachine: (addons-857381) DBG | 
	I0930 19:38:39.531685   15584 main.go:141] libmachine: (addons-857381) DBG | trying to create private KVM network mk-addons-857381 192.168.39.0/24...
	I0930 19:38:39.600904   15584 main.go:141] libmachine: (addons-857381) DBG | private KVM network mk-addons-857381 192.168.39.0/24 created
	I0930 19:38:39.600935   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:39.600853   15606 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:38:39.601042   15584 main.go:141] libmachine: (addons-857381) Setting up store path in /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381 ...
	I0930 19:38:39.601166   15584 main.go:141] libmachine: (addons-857381) Building disk image from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 19:38:39.601204   15584 main.go:141] libmachine: (addons-857381) Downloading /home/jenkins/minikube-integration/19736-7672/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 19:38:39.863167   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:39.863034   15606 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa...
	I0930 19:38:40.117906   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:40.117761   15606 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/addons-857381.rawdisk...
	I0930 19:38:40.117931   15584 main.go:141] libmachine: (addons-857381) DBG | Writing magic tar header
	I0930 19:38:40.117940   15584 main.go:141] libmachine: (addons-857381) DBG | Writing SSH key tar header
	I0930 19:38:40.117948   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:40.117879   15606 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381 ...
	I0930 19:38:40.117964   15584 main.go:141] libmachine: (addons-857381) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381
	I0930 19:38:40.118020   15584 main.go:141] libmachine: (addons-857381) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines
	I0930 19:38:40.118027   15584 main.go:141] libmachine: (addons-857381) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:38:40.118038   15584 main.go:141] libmachine: (addons-857381) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381 (perms=drwx------)
	I0930 19:38:40.118045   15584 main.go:141] libmachine: (addons-857381) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines (perms=drwxr-xr-x)
	I0930 19:38:40.118053   15584 main.go:141] libmachine: (addons-857381) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube (perms=drwxr-xr-x)
	I0930 19:38:40.118058   15584 main.go:141] libmachine: (addons-857381) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672 (perms=drwxrwxr-x)
	I0930 19:38:40.118064   15584 main.go:141] libmachine: (addons-857381) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672
	I0930 19:38:40.118074   15584 main.go:141] libmachine: (addons-857381) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 19:38:40.118079   15584 main.go:141] libmachine: (addons-857381) DBG | Checking permissions on dir: /home/jenkins
	I0930 19:38:40.118085   15584 main.go:141] libmachine: (addons-857381) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 19:38:40.118093   15584 main.go:141] libmachine: (addons-857381) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 19:38:40.118098   15584 main.go:141] libmachine: (addons-857381) Creating domain...
	I0930 19:38:40.118103   15584 main.go:141] libmachine: (addons-857381) DBG | Checking permissions on dir: /home
	I0930 19:38:40.118110   15584 main.go:141] libmachine: (addons-857381) DBG | Skipping /home - not owner
	I0930 19:38:40.119243   15584 main.go:141] libmachine: (addons-857381) define libvirt domain using xml: 
	I0930 19:38:40.119278   15584 main.go:141] libmachine: (addons-857381) <domain type='kvm'>
	I0930 19:38:40.119287   15584 main.go:141] libmachine: (addons-857381)   <name>addons-857381</name>
	I0930 19:38:40.119298   15584 main.go:141] libmachine: (addons-857381)   <memory unit='MiB'>4000</memory>
	I0930 19:38:40.119306   15584 main.go:141] libmachine: (addons-857381)   <vcpu>2</vcpu>
	I0930 19:38:40.119317   15584 main.go:141] libmachine: (addons-857381)   <features>
	I0930 19:38:40.119329   15584 main.go:141] libmachine: (addons-857381)     <acpi/>
	I0930 19:38:40.119339   15584 main.go:141] libmachine: (addons-857381)     <apic/>
	I0930 19:38:40.119347   15584 main.go:141] libmachine: (addons-857381)     <pae/>
	I0930 19:38:40.119350   15584 main.go:141] libmachine: (addons-857381)     
	I0930 19:38:40.119355   15584 main.go:141] libmachine: (addons-857381)   </features>
	I0930 19:38:40.119360   15584 main.go:141] libmachine: (addons-857381)   <cpu mode='host-passthrough'>
	I0930 19:38:40.119365   15584 main.go:141] libmachine: (addons-857381)   
	I0930 19:38:40.119373   15584 main.go:141] libmachine: (addons-857381)   </cpu>
	I0930 19:38:40.119378   15584 main.go:141] libmachine: (addons-857381)   <os>
	I0930 19:38:40.119383   15584 main.go:141] libmachine: (addons-857381)     <type>hvm</type>
	I0930 19:38:40.119387   15584 main.go:141] libmachine: (addons-857381)     <boot dev='cdrom'/>
	I0930 19:38:40.119394   15584 main.go:141] libmachine: (addons-857381)     <boot dev='hd'/>
	I0930 19:38:40.119399   15584 main.go:141] libmachine: (addons-857381)     <bootmenu enable='no'/>
	I0930 19:38:40.119402   15584 main.go:141] libmachine: (addons-857381)   </os>
	I0930 19:38:40.119407   15584 main.go:141] libmachine: (addons-857381)   <devices>
	I0930 19:38:40.119412   15584 main.go:141] libmachine: (addons-857381)     <disk type='file' device='cdrom'>
	I0930 19:38:40.119420   15584 main.go:141] libmachine: (addons-857381)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/boot2docker.iso'/>
	I0930 19:38:40.119431   15584 main.go:141] libmachine: (addons-857381)       <target dev='hdc' bus='scsi'/>
	I0930 19:38:40.119436   15584 main.go:141] libmachine: (addons-857381)       <readonly/>
	I0930 19:38:40.119440   15584 main.go:141] libmachine: (addons-857381)     </disk>
	I0930 19:38:40.119447   15584 main.go:141] libmachine: (addons-857381)     <disk type='file' device='disk'>
	I0930 19:38:40.119453   15584 main.go:141] libmachine: (addons-857381)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 19:38:40.119460   15584 main.go:141] libmachine: (addons-857381)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/addons-857381.rawdisk'/>
	I0930 19:38:40.119467   15584 main.go:141] libmachine: (addons-857381)       <target dev='hda' bus='virtio'/>
	I0930 19:38:40.119472   15584 main.go:141] libmachine: (addons-857381)     </disk>
	I0930 19:38:40.119476   15584 main.go:141] libmachine: (addons-857381)     <interface type='network'>
	I0930 19:38:40.119482   15584 main.go:141] libmachine: (addons-857381)       <source network='mk-addons-857381'/>
	I0930 19:38:40.119497   15584 main.go:141] libmachine: (addons-857381)       <model type='virtio'/>
	I0930 19:38:40.119547   15584 main.go:141] libmachine: (addons-857381)     </interface>
	I0930 19:38:40.119585   15584 main.go:141] libmachine: (addons-857381)     <interface type='network'>
	I0930 19:38:40.119615   15584 main.go:141] libmachine: (addons-857381)       <source network='default'/>
	I0930 19:38:40.119632   15584 main.go:141] libmachine: (addons-857381)       <model type='virtio'/>
	I0930 19:38:40.119647   15584 main.go:141] libmachine: (addons-857381)     </interface>
	I0930 19:38:40.119657   15584 main.go:141] libmachine: (addons-857381)     <serial type='pty'>
	I0930 19:38:40.119668   15584 main.go:141] libmachine: (addons-857381)       <target port='0'/>
	I0930 19:38:40.119681   15584 main.go:141] libmachine: (addons-857381)     </serial>
	I0930 19:38:40.119692   15584 main.go:141] libmachine: (addons-857381)     <console type='pty'>
	I0930 19:38:40.119705   15584 main.go:141] libmachine: (addons-857381)       <target type='serial' port='0'/>
	I0930 19:38:40.119716   15584 main.go:141] libmachine: (addons-857381)     </console>
	I0930 19:38:40.119728   15584 main.go:141] libmachine: (addons-857381)     <rng model='virtio'>
	I0930 19:38:40.119742   15584 main.go:141] libmachine: (addons-857381)       <backend model='random'>/dev/random</backend>
	I0930 19:38:40.119751   15584 main.go:141] libmachine: (addons-857381)     </rng>
	I0930 19:38:40.119764   15584 main.go:141] libmachine: (addons-857381)     
	I0930 19:38:40.119775   15584 main.go:141] libmachine: (addons-857381)     
	I0930 19:38:40.119787   15584 main.go:141] libmachine: (addons-857381)   </devices>
	I0930 19:38:40.119796   15584 main.go:141] libmachine: (addons-857381) </domain>
	I0930 19:38:40.119808   15584 main.go:141] libmachine: (addons-857381) 
	I0930 19:38:40.152290   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:13:e6:2a in network default
	I0930 19:38:40.152794   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:40.152807   15584 main.go:141] libmachine: (addons-857381) Ensuring networks are active...
	I0930 19:38:40.153769   15584 main.go:141] libmachine: (addons-857381) Ensuring network default is active
	I0930 19:38:40.154084   15584 main.go:141] libmachine: (addons-857381) Ensuring network mk-addons-857381 is active
	I0930 19:38:40.154622   15584 main.go:141] libmachine: (addons-857381) Getting domain xml...
	I0930 19:38:40.155306   15584 main.go:141] libmachine: (addons-857381) Creating domain...
	I0930 19:38:41.750138   15584 main.go:141] libmachine: (addons-857381) Waiting to get IP...
	I0930 19:38:41.750840   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:41.751228   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:41.751257   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:41.751208   15606 retry.go:31] will retry after 219.233908ms: waiting for machine to come up
	I0930 19:38:41.971647   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:41.972164   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:41.972188   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:41.972106   15606 retry.go:31] will retry after 262.030132ms: waiting for machine to come up
	I0930 19:38:42.235394   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:42.235857   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:42.235884   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:42.235807   15606 retry.go:31] will retry after 476.729894ms: waiting for machine to come up
	I0930 19:38:42.714621   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:42.715111   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:42.715165   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:42.715111   15606 retry.go:31] will retry after 585.557ms: waiting for machine to come up
	I0930 19:38:43.301755   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:43.302138   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:43.302170   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:43.302081   15606 retry.go:31] will retry after 660.338313ms: waiting for machine to come up
	I0930 19:38:43.963791   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:43.964219   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:43.964239   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:43.964181   15606 retry.go:31] will retry after 770.621107ms: waiting for machine to come up
	I0930 19:38:44.736897   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:44.737416   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:44.737436   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:44.737400   15606 retry.go:31] will retry after 934.807687ms: waiting for machine to come up
	I0930 19:38:45.673695   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:45.674163   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:45.674192   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:45.674131   15606 retry.go:31] will retry after 1.028873402s: waiting for machine to come up
	I0930 19:38:46.704659   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:46.705228   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:46.705252   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:46.705171   15606 retry.go:31] will retry after 1.355644802s: waiting for machine to come up
	I0930 19:38:48.062629   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:48.063045   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:48.063066   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:48.063003   15606 retry.go:31] will retry after 1.834607389s: waiting for machine to come up
	I0930 19:38:49.899481   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:49.899966   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:49.899993   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:49.899917   15606 retry.go:31] will retry after 2.552900967s: waiting for machine to come up
	I0930 19:38:52.455785   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:52.456329   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:52.456351   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:52.456275   15606 retry.go:31] will retry after 2.738603537s: waiting for machine to come up
	I0930 19:38:55.196845   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:55.197213   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:55.197249   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:55.197206   15606 retry.go:31] will retry after 2.960743363s: waiting for machine to come up
	I0930 19:38:58.161388   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:58.161803   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:58.161831   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:58.161744   15606 retry.go:31] will retry after 3.899735013s: waiting for machine to come up
	I0930 19:39:02.064849   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:02.065350   15584 main.go:141] libmachine: (addons-857381) Found IP for machine: 192.168.39.16
	I0930 19:39:02.065374   15584 main.go:141] libmachine: (addons-857381) Reserving static IP address...
	I0930 19:39:02.065387   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has current primary IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:02.065709   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find host DHCP lease matching {name: "addons-857381", mac: "52:54:00:2f:88:a1", ip: "192.168.39.16"} in network mk-addons-857381
	I0930 19:39:02.140991   15584 main.go:141] libmachine: (addons-857381) DBG | Getting to WaitForSSH function...
	I0930 19:39:02.141024   15584 main.go:141] libmachine: (addons-857381) Reserved static IP address: 192.168.39.16
	I0930 19:39:02.141038   15584 main.go:141] libmachine: (addons-857381) Waiting for SSH to be available...
	I0930 19:39:02.143380   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:02.143712   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381
	I0930 19:39:02.143736   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find defined IP address of network mk-addons-857381 interface with MAC address 52:54:00:2f:88:a1
	I0930 19:39:02.143945   15584 main.go:141] libmachine: (addons-857381) DBG | Using SSH client type: external
	I0930 19:39:02.143968   15584 main.go:141] libmachine: (addons-857381) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa (-rw-------)
	I0930 19:39:02.144015   15584 main.go:141] libmachine: (addons-857381) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 19:39:02.144040   15584 main.go:141] libmachine: (addons-857381) DBG | About to run SSH command:
	I0930 19:39:02.144056   15584 main.go:141] libmachine: (addons-857381) DBG | exit 0
	I0930 19:39:02.155805   15584 main.go:141] libmachine: (addons-857381) DBG | SSH cmd err, output: exit status 255: 
	I0930 19:39:02.155842   15584 main.go:141] libmachine: (addons-857381) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0930 19:39:02.155850   15584 main.go:141] libmachine: (addons-857381) DBG | command : exit 0
	I0930 19:39:02.155855   15584 main.go:141] libmachine: (addons-857381) DBG | err     : exit status 255
	I0930 19:39:02.155862   15584 main.go:141] libmachine: (addons-857381) DBG | output  : 
	I0930 19:39:05.156591   15584 main.go:141] libmachine: (addons-857381) DBG | Getting to WaitForSSH function...
	I0930 19:39:05.159112   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.159471   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.159499   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.159674   15584 main.go:141] libmachine: (addons-857381) DBG | Using SSH client type: external
	I0930 19:39:05.159702   15584 main.go:141] libmachine: (addons-857381) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa (-rw-------)
	I0930 19:39:05.159734   15584 main.go:141] libmachine: (addons-857381) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 19:39:05.159746   15584 main.go:141] libmachine: (addons-857381) DBG | About to run SSH command:
	I0930 19:39:05.159755   15584 main.go:141] libmachine: (addons-857381) DBG | exit 0
	I0930 19:39:05.283731   15584 main.go:141] libmachine: (addons-857381) DBG | SSH cmd err, output: <nil>: 
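
For anyone reproducing this step by hand: the probe above is just an external ssh invocation that runs "exit 0" until the guest answers. Below is a minimal Go sketch of that retry loop (an illustration, not minikube's libmachine code; it uses a subset of the ssh options shown in the log, and the host, key path and retry budget are taken from the log or assumed).

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH mirrors the "exit 0" probe in the log: call the external ssh
// client with (a subset of) the options shown above until the guest accepts a session.
func waitForSSH(ip, keyPath string, attempts int) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	for i := 0; i < attempts; i++ {
		if err := exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
			return nil // the guest answered; provisioning can continue
		}
		time.Sleep(3 * time.Second) // roughly the 3s gap between tries in the log
	}
	return fmt.Errorf("ssh to %s not ready after %d attempts", ip, attempts)
}

func main() {
	key := "/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa"
	if err := waitForSSH("192.168.39.16", key, 10); err != nil {
		fmt.Println(err)
	}
}
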
	I0930 19:39:05.283945   15584 main.go:141] libmachine: (addons-857381) KVM machine creation complete!
	I0930 19:39:05.284267   15584 main.go:141] libmachine: (addons-857381) Calling .GetConfigRaw
	I0930 19:39:05.284805   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:05.285019   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:05.285141   15584 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 19:39:05.285158   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:05.286683   15584 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 19:39:05.286697   15584 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 19:39:05.286701   15584 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 19:39:05.286707   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:05.288834   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.289132   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.289157   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.289280   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:05.289449   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.289572   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.289690   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:05.289873   15584 main.go:141] libmachine: Using SSH client type: native
	I0930 19:39:05.290039   15584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0930 19:39:05.290050   15584 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 19:39:05.386984   15584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 19:39:05.387014   15584 main.go:141] libmachine: Detecting the provisioner...
	I0930 19:39:05.387029   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:05.389409   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.389748   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.389776   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.389917   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:05.390074   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.390198   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.390305   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:05.390448   15584 main.go:141] libmachine: Using SSH client type: native
	I0930 19:39:05.390666   15584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0930 19:39:05.390682   15584 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 19:39:05.492417   15584 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 19:39:05.492481   15584 main.go:141] libmachine: found compatible host: buildroot
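
Provisioner detection above boils down to reading /etc/os-release over the same SSH channel and matching the ID field. A small Go sketch of that parse (assumed logic, not the libmachine implementation):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner returns the ID field of an /etc/os-release payload,
// which is what drives the "found compatible host: buildroot" decision above.
func detectProvisioner(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		if line := sc.Text(); strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), "\"")
		}
	}
	return "unknown"
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	fmt.Println(detectProvisioner(sample)) // prints: buildroot
}
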
	I0930 19:39:05.492489   15584 main.go:141] libmachine: Provisioning with buildroot...
	I0930 19:39:05.492500   15584 main.go:141] libmachine: (addons-857381) Calling .GetMachineName
	I0930 19:39:05.492732   15584 buildroot.go:166] provisioning hostname "addons-857381"
	I0930 19:39:05.492757   15584 main.go:141] libmachine: (addons-857381) Calling .GetMachineName
	I0930 19:39:05.492945   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:05.495929   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.496239   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.496305   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.496439   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:05.496644   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.496802   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.496952   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:05.497104   15584 main.go:141] libmachine: Using SSH client type: native
	I0930 19:39:05.497271   15584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0930 19:39:05.497285   15584 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-857381 && echo "addons-857381" | sudo tee /etc/hostname
	I0930 19:39:05.609891   15584 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-857381
	
	I0930 19:39:05.609922   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:05.612978   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.613698   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.613729   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.613907   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:05.614121   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.614279   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.614423   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:05.614594   15584 main.go:141] libmachine: Using SSH client type: native
	I0930 19:39:05.614753   15584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0930 19:39:05.614769   15584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-857381' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-857381/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-857381' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 19:39:05.725738   15584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 19:39:05.725765   15584 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 19:39:05.725804   15584 buildroot.go:174] setting up certificates
	I0930 19:39:05.725819   15584 provision.go:84] configureAuth start
	I0930 19:39:05.725827   15584 main.go:141] libmachine: (addons-857381) Calling .GetMachineName
	I0930 19:39:05.726168   15584 main.go:141] libmachine: (addons-857381) Calling .GetIP
	I0930 19:39:05.728742   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.729007   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.729035   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.729182   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:05.731678   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.732051   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.732081   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.732153   15584 provision.go:143] copyHostCerts
	I0930 19:39:05.732229   15584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 19:39:05.732358   15584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 19:39:05.732435   15584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 19:39:05.732484   15584 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.addons-857381 san=[127.0.0.1 192.168.39.16 addons-857381 localhost minikube]
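
The server-cert step above issues a certificate whose subject alternative names are exactly the san=[...] list in the log. The Go sketch below shows the equivalent with the standard crypto/x509 package; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem shown, so treat it as an illustration only.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed for brevity; the real flow signs with the profile CA instead.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-857381"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.16")},
		DNSNames:    []string{"addons-857381", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
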
	I0930 19:39:05.797657   15584 provision.go:177] copyRemoteCerts
	I0930 19:39:05.797735   15584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 19:39:05.797762   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:05.800885   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.801217   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.801247   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.801400   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:05.801568   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.801718   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:05.801822   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:05.882191   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 19:39:05.905511   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 19:39:05.929051   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 19:39:05.954162   15584 provision.go:87] duration metric: took 228.330604ms to configureAuth
	I0930 19:39:05.954201   15584 buildroot.go:189] setting minikube options for container-runtime
	I0930 19:39:05.954387   15584 config.go:182] Loaded profile config "addons-857381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 19:39:05.954466   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:05.957503   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.957900   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.957927   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.958152   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:05.958347   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.958489   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.958608   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:05.958729   15584 main.go:141] libmachine: Using SSH client type: native
	I0930 19:39:05.958887   15584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0930 19:39:05.958901   15584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 19:39:06.179208   15584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 19:39:06.179237   15584 main.go:141] libmachine: Checking connection to Docker...
	I0930 19:39:06.179248   15584 main.go:141] libmachine: (addons-857381) Calling .GetURL
	I0930 19:39:06.180601   15584 main.go:141] libmachine: (addons-857381) DBG | Using libvirt version 6000000
	I0930 19:39:06.182691   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.183033   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:06.183061   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.183191   15584 main.go:141] libmachine: Docker is up and running!
	I0930 19:39:06.183202   15584 main.go:141] libmachine: Reticulating splines...
	I0930 19:39:06.183209   15584 client.go:171] duration metric: took 27.051264777s to LocalClient.Create
	I0930 19:39:06.183231   15584 start.go:167] duration metric: took 27.051324774s to libmachine.API.Create "addons-857381"
	I0930 19:39:06.183242   15584 start.go:293] postStartSetup for "addons-857381" (driver="kvm2")
	I0930 19:39:06.183251   15584 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 19:39:06.183266   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:06.183524   15584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 19:39:06.183571   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:06.185444   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.185797   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:06.185827   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.185919   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:06.186090   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:06.186188   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:06.186312   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:06.266715   15584 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 19:39:06.271185   15584 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 19:39:06.271215   15584 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 19:39:06.271287   15584 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 19:39:06.271309   15584 start.go:296] duration metric: took 88.062379ms for postStartSetup
	I0930 19:39:06.271349   15584 main.go:141] libmachine: (addons-857381) Calling .GetConfigRaw
	I0930 19:39:06.271937   15584 main.go:141] libmachine: (addons-857381) Calling .GetIP
	I0930 19:39:06.274448   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.274725   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:06.274750   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.274965   15584 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/config.json ...
	I0930 19:39:06.275129   15584 start.go:128] duration metric: took 27.161285737s to createHost
	I0930 19:39:06.275152   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:06.277424   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.277710   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:06.277737   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.277888   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:06.278053   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:06.278193   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:06.278321   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:06.278484   15584 main.go:141] libmachine: Using SSH client type: native
	I0930 19:39:06.278724   15584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0930 19:39:06.278743   15584 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 19:39:06.380303   15584 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727725146.359081243
	
	I0930 19:39:06.380326   15584 fix.go:216] guest clock: 1727725146.359081243
	I0930 19:39:06.380335   15584 fix.go:229] Guest: 2024-09-30 19:39:06.359081243 +0000 UTC Remote: 2024-09-30 19:39:06.275140075 +0000 UTC m=+27.266281521 (delta=83.941168ms)
	I0930 19:39:06.380381   15584 fix.go:200] guest clock delta is within tolerance: 83.941168ms
	I0930 19:39:06.380389   15584 start.go:83] releasing machines lock for "addons-857381", held for 27.266614473s
	I0930 19:39:06.380419   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:06.380674   15584 main.go:141] libmachine: (addons-857381) Calling .GetIP
	I0930 19:39:06.383237   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.383611   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:06.383640   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.383823   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:06.384318   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:06.384453   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:06.384548   15584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 19:39:06.384593   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:06.384651   15584 ssh_runner.go:195] Run: cat /version.json
	I0930 19:39:06.384672   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:06.387480   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.387761   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.387940   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:06.387970   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.388102   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:06.388230   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:06.388258   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.388321   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:06.388433   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:06.388508   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:06.388576   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:06.388649   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:06.388688   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:06.388794   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:06.460622   15584 ssh_runner.go:195] Run: systemctl --version
	I0930 19:39:06.504333   15584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 19:39:06.659157   15584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 19:39:06.665831   15584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 19:39:06.665921   15584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 19:39:06.682297   15584 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 19:39:06.682332   15584 start.go:495] detecting cgroup driver to use...
	I0930 19:39:06.682422   15584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 19:39:06.698736   15584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 19:39:06.713403   15584 docker.go:217] disabling cri-docker service (if available) ...
	I0930 19:39:06.713463   15584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 19:39:06.727772   15584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 19:39:06.741754   15584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 19:39:06.854558   15584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 19:39:07.016805   15584 docker.go:233] disabling docker service ...
	I0930 19:39:07.016868   15584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 19:39:07.031392   15584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 19:39:07.044268   15584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 19:39:07.174815   15584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 19:39:07.288136   15584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 19:39:07.302494   15584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 19:39:07.320346   15584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 19:39:07.320397   15584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:39:07.330567   15584 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 19:39:07.330642   15584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:39:07.340540   15584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:39:07.351066   15584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:39:07.361313   15584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 19:39:07.372112   15584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:39:07.382428   15584 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:39:07.398996   15584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:39:07.409216   15584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 19:39:07.418760   15584 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 19:39:07.418816   15584 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 19:39:07.433137   15584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
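
The netfilter recovery above follows a simple pattern: if the bridge sysctl node is missing, load br_netfilter (which provides it), then enable IPv4 forwarding. A rough Go equivalent of that intent (an assumption, not minikube's code; requires root, like the sudo calls in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const node = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(node); os.IsNotExist(err) {
		// Same recovery as the log: the br_netfilter module provides this sysctl tree.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe br_netfilter:", err)
			return
		}
	}
	// Equivalent of the echo-1 redirect above.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("enable ip_forward:", err)
	}
}
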
	I0930 19:39:07.442882   15584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 19:39:07.558112   15584 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 19:39:07.649794   15584 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 19:39:07.649899   15584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 19:39:07.654623   15584 start.go:563] Will wait 60s for crictl version
	I0930 19:39:07.654704   15584 ssh_runner.go:195] Run: which crictl
	I0930 19:39:07.658191   15584 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 19:39:07.700342   15584 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 19:39:07.700458   15584 ssh_runner.go:195] Run: crio --version
	I0930 19:39:07.727470   15584 ssh_runner.go:195] Run: crio --version
	I0930 19:39:07.754761   15584 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 19:39:07.756216   15584 main.go:141] libmachine: (addons-857381) Calling .GetIP
	I0930 19:39:07.758595   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:07.758998   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:07.759028   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:07.759215   15584 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 19:39:07.763302   15584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 19:39:07.775047   15584 kubeadm.go:883] updating cluster {Name:addons-857381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-857381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 19:39:07.775168   15584 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 19:39:07.775210   15584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 19:39:07.807313   15584 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 19:39:07.807388   15584 ssh_runner.go:195] Run: which lz4
	I0930 19:39:07.811181   15584 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 19:39:07.815355   15584 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 19:39:07.815401   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 19:39:09.011857   15584 crio.go:462] duration metric: took 1.20070674s to copy over tarball
	I0930 19:39:09.011922   15584 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 19:39:11.156167   15584 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.144208659s)
	I0930 19:39:11.156197   15584 crio.go:469] duration metric: took 2.144313315s to extract the tarball
	I0930 19:39:11.156204   15584 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 19:39:11.192433   15584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 19:39:11.233108   15584 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 19:39:11.233132   15584 cache_images.go:84] Images are preloaded, skipping loading
	I0930 19:39:11.233139   15584 kubeadm.go:934] updating node { 192.168.39.16 8443 v1.31.1 crio true true} ...
	I0930 19:39:11.233269   15584 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-857381 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-857381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 19:39:11.233352   15584 ssh_runner.go:195] Run: crio config
	I0930 19:39:11.277191   15584 cni.go:84] Creating CNI manager for ""
	I0930 19:39:11.277215   15584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 19:39:11.277225   15584 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 19:39:11.277248   15584 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.16 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-857381 NodeName:addons-857381 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 19:39:11.277363   15584 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-857381"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
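One way to sanity-check the generated bundle above (staged as /var/tmp/minikube/kubeadm.yaml.new a few lines further down) is to decode each YAML document and confirm the kubelet cgroupDriver matches the cgroupfs value written into the CRI-O drop-in earlier. A short Go sketch of that check (not part of the test; gopkg.in/yaml.v3 is an assumed dependency):

package main

import (
	"bytes"
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path of the staged kubeadm bundle, as shown in the scp step below.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF once every document in the multi-doc file is read
		}
		if doc["kind"] == "KubeletConfiguration" {
			// Expect "cgroupfs", matching the CRI-O cgroup_manager set earlier.
			fmt.Println("kubelet cgroupDriver:", doc["cgroupDriver"])
		}
	}
}
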
	I0930 19:39:11.277418   15584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 19:39:11.286642   15584 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 19:39:11.286704   15584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 19:39:11.295548   15584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0930 19:39:11.311549   15584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 19:39:11.331985   15584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0930 19:39:11.348728   15584 ssh_runner.go:195] Run: grep 192.168.39.16	control-plane.minikube.internal$ /etc/hosts
	I0930 19:39:11.352327   15584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
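
The /etc/hosts rewrite here (and the host.minikube.internal one earlier) is idempotent: strip any existing tab-separated entry for the name, then append the new mapping. A Go sketch of that pattern (assumed, not minikube's ssh_runner code; shown against a scratch copy rather than the real, root-owned /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any existing tab-separated entry for name and appends the
// new ip<TAB>name mapping, like the grep -v / echo / cp pipeline in the log.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Scratch file for illustration; adjust the path for real use.
	if err := upsertHost("/tmp/hosts.copy", "192.168.39.16", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
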
	I0930 19:39:11.364401   15584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 19:39:11.481660   15584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 19:39:11.497079   15584 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381 for IP: 192.168.39.16
	I0930 19:39:11.497100   15584 certs.go:194] generating shared ca certs ...
	I0930 19:39:11.497116   15584 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:11.497260   15584 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 19:39:11.648998   15584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt ...
	I0930 19:39:11.649025   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt: {Name:mk6e5f82ec05fd1020277cb50e5cfcc0dabcacae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:11.649213   15584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key ...
	I0930 19:39:11.649229   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key: {Name:mk0ef923818a162097b78148b543208a914b5bb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:11.649322   15584 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 19:39:11.753260   15584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt ...
	I0930 19:39:11.753290   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt: {Name:mke9d528b1a86f83c00d6802b8724e9dc7fcbf2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:11.753464   15584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key ...
	I0930 19:39:11.753479   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key: {Name:mk8d6f919cfde9b2ba252ed4e645dd7abe933692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:11.753574   15584 certs.go:256] generating profile certs ...
	I0930 19:39:11.753638   15584 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.key
	I0930 19:39:11.753663   15584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt with IP's: []
	I0930 19:39:11.993825   15584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt ...
	I0930 19:39:11.993862   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: {Name:mkfdecb09e1eaad0bf5d023250541bd133526bf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:11.994031   15584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.key ...
	I0930 19:39:11.994043   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.key: {Name:mk5b3d09b580d0cb32db7795505ff42b338bebcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:11.994106   15584 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.key.2630616d
	I0930 19:39:11.994123   15584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.crt.2630616d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.16]
	I0930 19:39:12.123421   15584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.crt.2630616d ...
	I0930 19:39:12.123454   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.crt.2630616d: {Name:mk0c51fdbf5c30101d513ddc20b36e402092303f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:12.123638   15584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.key.2630616d ...
	I0930 19:39:12.123655   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.key.2630616d: {Name:mk22e6929637babbf135e841e671bfe79d76bb0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:12.123725   15584 certs.go:381] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.crt.2630616d -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.crt
	I0930 19:39:12.123793   15584 certs.go:385] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.key.2630616d -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.key
	I0930 19:39:12.123839   15584 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/proxy-client.key
	I0930 19:39:12.123854   15584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/proxy-client.crt with IP's: []
	I0930 19:39:12.195319   15584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/proxy-client.crt ...
	I0930 19:39:12.195350   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/proxy-client.crt: {Name:mk713b9e40199aa6c8687b380ad01559be53ec34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:12.195497   15584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/proxy-client.key ...
	I0930 19:39:12.195507   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/proxy-client.key: {Name:mkea90975034f67fe95bb6a85ec32c0ef43e68e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:12.195696   15584 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 19:39:12.195729   15584 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 19:39:12.195751   15584 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 19:39:12.195774   15584 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 19:39:12.196294   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 19:39:12.223952   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 19:39:12.246370   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 19:39:12.279886   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 19:39:12.303029   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0930 19:39:12.325838   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 19:39:12.349163   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 19:39:12.372806   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 19:39:12.396187   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 19:39:12.420192   15584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 19:39:12.436976   15584 ssh_runner.go:195] Run: openssl version
	I0930 19:39:12.442204   15584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 19:39:12.452601   15584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:39:12.456833   15584 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:39:12.456888   15584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:39:12.462315   15584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 19:39:12.472654   15584 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 19:39:12.476710   15584 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 19:39:12.476772   15584 kubeadm.go:392] StartCluster: {Name:addons-857381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-857381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 19:39:12.476843   15584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 19:39:12.476890   15584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 19:39:12.509454   15584 cri.go:89] found id: ""
	I0930 19:39:12.509518   15584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 19:39:12.519690   15584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 19:39:12.528634   15584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 19:39:12.537558   15584 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 19:39:12.537580   15584 kubeadm.go:157] found existing configuration files:
	
	I0930 19:39:12.537627   15584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 19:39:12.546562   15584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 19:39:12.546615   15584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 19:39:12.555210   15584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 19:39:12.563709   15584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 19:39:12.563764   15584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 19:39:12.572594   15584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 19:39:12.580936   15584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 19:39:12.580987   15584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 19:39:12.589574   15584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 19:39:12.597837   15584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 19:39:12.597888   15584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 19:39:12.606734   15584 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 19:39:12.656495   15584 kubeadm.go:310] W0930 19:39:12.641183     810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 19:39:12.657151   15584 kubeadm.go:310] W0930 19:39:12.642020     810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 19:39:12.764273   15584 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
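The two W0930 lines are kubeadm warning that the generated config still uses the deprecated kubeadm.k8s.io/v1beta3 API, and kubeadm itself names the fix. A minimal sketch, assuming the config minikube wrote to /var/tmp/minikube/kubeadm.yaml is migrated in place (kubeadm-new.yaml is just a scratch filename here, not one the log uses):

    # Rewrite the v1beta3 ClusterConfiguration/InitConfiguration with the newer API version.
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm-new.yaml
    sudo mv /var/tmp/minikube/kubeadm-new.yaml /var/tmp/minikube/kubeadm.yaml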
	I0930 19:39:22.111607   15584 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 19:39:22.111685   15584 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 19:39:22.111776   15584 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 19:39:22.111893   15584 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 19:39:22.112027   15584 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 19:39:22.112104   15584 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 19:39:22.113710   15584 out.go:235]   - Generating certificates and keys ...
	I0930 19:39:22.113790   15584 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 19:39:22.113862   15584 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 19:39:22.113958   15584 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0930 19:39:22.114050   15584 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0930 19:39:22.114143   15584 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0930 19:39:22.114222   15584 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0930 19:39:22.114302   15584 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0930 19:39:22.114414   15584 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-857381 localhost] and IPs [192.168.39.16 127.0.0.1 ::1]
	I0930 19:39:22.114460   15584 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0930 19:39:22.114592   15584 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-857381 localhost] and IPs [192.168.39.16 127.0.0.1 ::1]
	I0930 19:39:22.114664   15584 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0930 19:39:22.114748   15584 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0930 19:39:22.114814   15584 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0930 19:39:22.114901   15584 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 19:39:22.114973   15584 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 19:39:22.115058   15584 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 19:39:22.115139   15584 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 19:39:22.115211   15584 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 19:39:22.115281   15584 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 19:39:22.115360   15584 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 19:39:22.115417   15584 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 19:39:22.116907   15584 out.go:235]   - Booting up control plane ...
	I0930 19:39:22.116999   15584 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 19:39:22.117066   15584 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 19:39:22.117129   15584 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 19:39:22.117234   15584 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 19:39:22.117369   15584 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 19:39:22.117427   15584 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 19:39:22.117597   15584 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 19:39:22.117746   15584 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 19:39:22.117827   15584 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.864878ms
	I0930 19:39:22.117935   15584 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 19:39:22.118041   15584 kubeadm.go:310] [api-check] The API server is healthy after 5.00170551s
	I0930 19:39:22.118221   15584 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 19:39:22.118406   15584 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 19:39:22.118481   15584 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 19:39:22.118679   15584 kubeadm.go:310] [mark-control-plane] Marking the node addons-857381 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 19:39:22.118753   15584 kubeadm.go:310] [bootstrap-token] Using token: 2zqthc.qj6bpwsk1i25jfw6
	I0930 19:39:22.120480   15584 out.go:235]   - Configuring RBAC rules ...
	I0930 19:39:22.120608   15584 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 19:39:22.120680   15584 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 19:39:22.120802   15584 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 19:39:22.120917   15584 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 19:39:22.121021   15584 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 19:39:22.121095   15584 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 19:39:22.121200   15584 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 19:39:22.121239   15584 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 19:39:22.121286   15584 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 19:39:22.121292   15584 kubeadm.go:310] 
	I0930 19:39:22.121363   15584 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 19:39:22.121375   15584 kubeadm.go:310] 
	I0930 19:39:22.121489   15584 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 19:39:22.121521   15584 kubeadm.go:310] 
	I0930 19:39:22.121561   15584 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 19:39:22.121648   15584 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 19:39:22.121728   15584 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 19:39:22.121740   15584 kubeadm.go:310] 
	I0930 19:39:22.121818   15584 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 19:39:22.121825   15584 kubeadm.go:310] 
	I0930 19:39:22.121895   15584 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 19:39:22.121904   15584 kubeadm.go:310] 
	I0930 19:39:22.121982   15584 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 19:39:22.122058   15584 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 19:39:22.122127   15584 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 19:39:22.122134   15584 kubeadm.go:310] 
	I0930 19:39:22.122209   15584 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 19:39:22.122279   15584 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 19:39:22.122285   15584 kubeadm.go:310] 
	I0930 19:39:22.122360   15584 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2zqthc.qj6bpwsk1i25jfw6 \
	I0930 19:39:22.122450   15584 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a \
	I0930 19:39:22.122473   15584 kubeadm.go:310] 	--control-plane 
	I0930 19:39:22.122482   15584 kubeadm.go:310] 
	I0930 19:39:22.122556   15584 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 19:39:22.122562   15584 kubeadm.go:310] 
	I0930 19:39:22.122633   15584 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2zqthc.qj6bpwsk1i25jfw6 \
	I0930 19:39:22.122742   15584 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a 
	I0930 19:39:22.122753   15584 cni.go:84] Creating CNI manager for ""
	I0930 19:39:22.122760   15584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 19:39:22.124276   15584 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 19:39:22.125392   15584 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 19:39:22.137298   15584 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
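The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced on the previous line. The log does not capture the file itself; a representative bridge conflist, written the same way purely for illustration (the plugin list and the 10.244.0.0/16 subnet are assumptions, not the bytes minikube actually generated):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF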
	I0930 19:39:22.159047   15584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 19:39:22.159160   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:22.159174   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-857381 minikube.k8s.io/updated_at=2024_09_30T19_39_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022 minikube.k8s.io/name=addons-857381 minikube.k8s.io/primary=true
	I0930 19:39:22.178203   15584 ops.go:34] apiserver oom_adj: -16
	I0930 19:39:22.298845   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:22.799840   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:23.299680   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:23.799875   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:24.298916   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:24.799796   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:25.299026   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:25.799660   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:25.868472   15584 kubeadm.go:1113] duration metric: took 3.709383377s to wait for elevateKubeSystemPrivileges
	I0930 19:39:25.868505   15584 kubeadm.go:394] duration metric: took 13.391737223s to StartCluster
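The repeated `kubectl get sa default` runs above are minikube polling, through the in-VM kubectl and kubeconfig, until the default service account exists after the minikube-rbac cluster-admin binding is created (the elevateKubeSystemPrivileges step). The same checks can be made by hand once the cluster is up:

    # Confirm the default service account and the RBAC binding created above.
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac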
	I0930 19:39:25.868523   15584 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:25.868662   15584 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 19:39:25.869112   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:25.869296   15584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0930 19:39:25.869324   15584 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 19:39:25.869370   15584 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0930 19:39:25.869469   15584 addons.go:69] Setting gcp-auth=true in profile "addons-857381"
	I0930 19:39:25.869486   15584 addons.go:69] Setting ingress-dns=true in profile "addons-857381"
	I0930 19:39:25.869501   15584 addons.go:234] Setting addon ingress-dns=true in "addons-857381"
	I0930 19:39:25.869494   15584 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-857381"
	I0930 19:39:25.869513   15584 addons.go:69] Setting registry=true in profile "addons-857381"
	I0930 19:39:25.869513   15584 addons.go:69] Setting cloud-spanner=true in profile "addons-857381"
	I0930 19:39:25.869525   15584 addons.go:69] Setting metrics-server=true in profile "addons-857381"
	I0930 19:39:25.869535   15584 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-857381"
	I0930 19:39:25.869536   15584 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-857381"
	I0930 19:39:25.869543   15584 addons.go:234] Setting addon cloud-spanner=true in "addons-857381"
	I0930 19:39:25.869551   15584 addons.go:69] Setting inspektor-gadget=true in profile "addons-857381"
	I0930 19:39:25.869553   15584 addons.go:69] Setting volumesnapshots=true in profile "addons-857381"
	I0930 19:39:25.869554   15584 addons.go:69] Setting storage-provisioner=true in profile "addons-857381"
	I0930 19:39:25.869565   15584 addons.go:234] Setting addon inspektor-gadget=true in "addons-857381"
	I0930 19:39:25.869565   15584 addons.go:234] Setting addon volumesnapshots=true in "addons-857381"
	I0930 19:39:25.869582   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869588   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869601   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869505   15584 mustload.go:65] Loading cluster: addons-857381
	I0930 19:39:25.869549   15584 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-857381"
	I0930 19:39:25.869775   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869847   15584 config.go:182] Loaded profile config "addons-857381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 19:39:25.870033   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.870035   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.870078   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.870100   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.869567   15584 addons.go:234] Setting addon storage-provisioner=true in "addons-857381"
	I0930 19:39:25.870132   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.870145   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869529   15584 addons.go:234] Setting addon registry=true in "addons-857381"
	I0930 19:39:25.870175   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.870197   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.870083   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.870195   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869511   15584 config.go:182] Loaded profile config "addons-857381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 19:39:25.870526   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.869544   15584 addons.go:69] Setting volcano=true in profile "addons-857381"
	I0930 19:39:25.870546   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.870557   15584 addons.go:234] Setting addon volcano=true in "addons-857381"
	I0930 19:39:25.870583   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869482   15584 addons.go:69] Setting ingress=true in profile "addons-857381"
	I0930 19:39:25.870706   15584 addons.go:234] Setting addon ingress=true in "addons-857381"
	I0930 19:39:25.870739   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.870748   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.870773   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.870897   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.870911   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.871085   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.871115   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.869473   15584 addons.go:69] Setting yakd=true in profile "addons-857381"
	I0930 19:39:25.871269   15584 addons.go:234] Setting addon yakd=true in "addons-857381"
	I0930 19:39:25.871297   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869520   15584 addons.go:69] Setting default-storageclass=true in profile "addons-857381"
	I0930 19:39:25.871410   15584 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-857381"
	I0930 19:39:25.871679   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.871704   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.869539   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869545   15584 addons.go:234] Setting addon metrics-server=true in "addons-857381"
	I0930 19:39:25.871938   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.872087   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.872111   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.872268   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.872297   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.869546   15584 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-857381"
	I0930 19:39:25.869552   15584 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-857381"
	I0930 19:39:25.870118   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.873240   15584 out.go:177] * Verifying Kubernetes components...
	I0930 19:39:25.874824   15584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 19:39:25.875031   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.875068   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.870165   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.875837   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.891609   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36605
	I0930 19:39:25.891622   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36305
	I0930 19:39:25.892198   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.892648   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.892839   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.892856   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.892958   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34113
	I0930 19:39:25.893205   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.893224   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.893339   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.893526   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.893609   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.893925   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.893942   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.893985   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.894012   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.894209   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.894231   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.894604   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.896401   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32887
	I0930 19:39:25.901911   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34897
	I0930 19:39:25.908027   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.908062   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.908658   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.908681   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.910137   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36075
	I0930 19:39:25.910232   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38099
	I0930 19:39:25.910381   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.910420   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.910689   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.910814   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.910889   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.911356   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.911384   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.911518   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.911547   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.911704   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.911720   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.911760   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35065
	I0930 19:39:25.912108   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.912153   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.912245   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.912754   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.912787   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.913013   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.913047   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.913204   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.913221   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.913281   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.913621   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.914224   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.914247   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.919833   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.920758   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.920793   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.928106   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.928373   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.930483   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.930920   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.930971   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.943442   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34069
	I0930 19:39:25.946158   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0930 19:39:25.946301   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42649
	I0930 19:39:25.946399   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.947919   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.947941   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.948022   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.948109   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37203
	I0930 19:39:25.948121   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.948168   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.948220   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37497
	I0930 19:39:25.948395   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45111
	I0930 19:39:25.949364   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.949469   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.949482   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.949486   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.949535   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.950004   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.950017   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.950055   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.950147   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.950154   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.950161   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.950173   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.950552   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.950566   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.950629   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.951116   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.951576   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.951610   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.951746   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.951981   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.952074   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.952099   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.952588   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.953272   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.953294   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.953679   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.953882   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.954158   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.954184   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.954412   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:25.955485   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:25.955737   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:25.955751   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:25.955806   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:25.956180   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:25.956201   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:25.956207   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:25.956216   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:25.957588   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:25.957390   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0930 19:39:25.957452   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41277
	I0930 19:39:25.957946   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:25.957983   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:25.957992   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	W0930 19:39:25.958081   15584 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0930 19:39:25.958401   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.958881   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.958900   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.958987   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42313
	I0930 19:39:25.959289   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.959314   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.959474   15584 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0930 19:39:25.959492   15584 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0930 19:39:25.959513   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:25.959875   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.959897   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.960126   15584 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0930 19:39:25.960524   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.960672   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.961838   15584 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0930 19:39:25.961855   15584 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0930 19:39:25.961885   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:25.962881   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.962921   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.965353   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.967465   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.967720   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:25.967752   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.967998   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:25.968211   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:25.968229   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:25.968253   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.968412   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:25.968456   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:25.968558   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:25.968871   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:25.969023   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:25.969358   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:25.969828   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36923
	I0930 19:39:25.971542   15584 addons.go:234] Setting addon default-storageclass=true in "addons-857381"
	I0930 19:39:25.971578   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.971945   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.971965   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.973722   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41355
	I0930 19:39:25.974115   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45175
	I0930 19:39:25.974519   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.974915   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.975095   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.975108   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.975433   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.975634   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.975824   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.976012   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.976033   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.976430   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.976444   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.976501   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.976683   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.977028   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.977624   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.977661   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.977877   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:25.979689   15584 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-857381"
	I0930 19:39:25.979733   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.980117   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.980151   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.981658   15584 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0930 19:39:25.982583   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43149
	I0930 19:39:25.983098   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.983567   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43789
	I0930 19:39:25.983865   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.983878   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.984274   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.984379   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.984563   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.984759   15584 out.go:177]   - Using image docker.io/registry:2.8.3
	I0930 19:39:25.984836   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.984863   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.985186   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.985334   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.986318   15584 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0930 19:39:25.986335   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0930 19:39:25.986353   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:25.987060   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:25.987776   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39681
	I0930 19:39:25.988280   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.988862   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.988877   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.988935   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:25.989074   15584 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0930 19:39:25.989812   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.990023   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.990033   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.990473   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:25.990510   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.990574   15584 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 19:39:25.990597   15584 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 19:39:25.990617   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:25.991173   15584 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 19:39:25.991455   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:25.991620   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:25.991751   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:25.991860   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:25.993542   15584 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 19:39:25.993741   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 19:39:25.993761   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:25.993705   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:25.994528   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.995054   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:25.995071   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.995363   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:25.995558   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:25.995716   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:25.995862   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:25.996207   15584 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0930 19:39:25.997530   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.997597   15584 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0930 19:39:25.997617   15584 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0930 19:39:25.997635   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:25.997905   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:25.997931   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.998174   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:25.998350   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:25.998496   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:25.998614   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:26.001113   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.001606   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:26.001633   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.001819   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:26.001978   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:26.002102   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:26.002213   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:26.002507   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35615
	I0930 19:39:26.003016   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.003573   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.003590   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.004001   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.004290   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:26.007901   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46353
	I0930 19:39:26.007985   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45975
	I0930 19:39:26.008624   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.009653   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0930 19:39:26.010668   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.010726   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.011079   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.011091   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.011295   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:26.011657   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.011732   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.011763   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.012575   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.012669   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35101
	I0930 19:39:26.012829   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:26.013000   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.013407   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:26.013606   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.013621   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.013968   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.014049   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.014065   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.014119   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:26.014353   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.014494   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:26.014944   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:26.015656   15584 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0930 19:39:26.016134   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:26.016798   15584 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0930 19:39:26.017425   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:26.017622   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34427
	I0930 19:39:26.017897   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.018270   15584 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0930 19:39:26.018286   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0930 19:39:26.018301   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:26.018271   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.018352   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.018646   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.018937   15584 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0930 19:39:26.018974   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0930 19:39:26.019146   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:26.019175   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:26.019458   15584 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0930 19:39:26.019469   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0930 19:39:26.019480   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:26.022308   15584 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 19:39:26.022318   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0930 19:39:26.022462   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.023468   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.023512   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:26.023547   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.023574   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40001
	I0930 19:39:26.023698   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:26.023999   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:26.024081   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.024161   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:26.024178   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.024276   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:26.024400   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:26.024502   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:26.024632   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:26.025111   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0930 19:39:26.025197   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.025201   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:26.025212   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.025377   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:26.025647   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.025709   15584 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 19:39:26.025818   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:26.026733   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38173
	I0930 19:39:26.027178   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.028031   15584 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0930 19:39:26.028049   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0930 19:39:26.028119   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.028131   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.028181   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:26.028202   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:26.028442   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0930 19:39:26.029148   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.029701   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:26.029741   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:26.030064   15584 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0930 19:39:26.031125   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.031427   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0930 19:39:26.031525   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:26.031567   15584 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 19:39:26.031571   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.031579   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0930 19:39:26.031598   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:26.031737   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:26.031852   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:26.032014   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:26.032136   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:26.034693   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0930 19:39:26.035043   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.035464   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:26.035521   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.035730   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:26.035883   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:26.035993   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:26.036170   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:26.037151   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0930 19:39:26.038304   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0930 19:39:26.039572   15584 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0930 19:39:26.039593   15584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0930 19:39:26.039616   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:26.042725   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.043135   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:26.043161   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.043322   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:26.043504   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:26.043649   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:26.043779   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:26.046214   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42533
	I0930 19:39:26.046708   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.047211   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.047230   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.047643   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34181
	I0930 19:39:26.047658   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.047829   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:26.048012   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.048450   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.048463   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.048874   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.049079   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:26.049587   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:26.049871   15584 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 19:39:26.049894   15584 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 19:39:26.049910   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:26.050844   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:26.053693   15584 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0930 19:39:26.053892   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.054150   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:26.054175   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.054350   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:26.054606   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:26.054743   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:26.054898   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:26.057159   15584 out.go:177]   - Using image docker.io/busybox:stable
	I0930 19:39:26.058444   15584 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 19:39:26.058456   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0930 19:39:26.058471   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	W0930 19:39:26.058658   15584 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34418->192.168.39.16:22: read: connection reset by peer
	I0930 19:39:26.058676   15584 retry.go:31] will retry after 237.78819ms: ssh: handshake failed: read tcp 192.168.39.1:34418->192.168.39.16:22: read: connection reset by peer
	I0930 19:39:26.061619   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.061962   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:26.062006   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.062106   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:26.062224   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:26.062300   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:26.062361   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	W0930 19:39:26.065959   15584 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34426->192.168.39.16:22: read: connection reset by peer
	I0930 19:39:26.065979   15584 retry.go:31] will retry after 167.277624ms: ssh: handshake failed: read tcp 192.168.39.1:34426->192.168.39.16:22: read: connection reset by peer
	I0930 19:39:26.339466   15584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 19:39:26.339517   15584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0930 19:39:26.403846   15584 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0930 19:39:26.403877   15584 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0930 19:39:26.418875   15584 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0930 19:39:26.418902   15584 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0930 19:39:26.444724   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 19:39:26.469397   15584 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0930 19:39:26.469428   15584 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0930 19:39:26.470418   15584 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 19:39:26.470454   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0930 19:39:26.484974   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0930 19:39:26.490665   15584 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0930 19:39:26.490690   15584 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0930 19:39:26.517120   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 19:39:26.544379   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0930 19:39:26.563968   15584 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0930 19:39:26.563993   15584 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0930 19:39:26.604180   15584 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0930 19:39:26.604208   15584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0930 19:39:26.620313   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0930 19:39:26.672698   15584 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0930 19:39:26.672723   15584 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0930 19:39:26.688307   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 19:39:26.714792   15584 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0930 19:39:26.714816   15584 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0930 19:39:26.728893   15584 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 19:39:26.728920   15584 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 19:39:26.744719   15584 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0930 19:39:26.744745   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0930 19:39:26.842193   15584 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0930 19:39:26.842218   15584 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0930 19:39:26.859317   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 19:39:26.899446   15584 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0930 19:39:26.899471   15584 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0930 19:39:26.904707   15584 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0930 19:39:26.904731   15584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0930 19:39:26.961885   15584 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0930 19:39:26.961904   15584 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0930 19:39:26.962165   15584 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0930 19:39:26.962184   15584 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0930 19:39:26.977061   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0930 19:39:27.039064   15584 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 19:39:27.039095   15584 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 19:39:27.067135   15584 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 19:39:27.067165   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0930 19:39:27.144070   15584 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0930 19:39:27.144093   15584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0930 19:39:27.181844   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 19:39:27.204338   15584 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0930 19:39:27.204364   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0930 19:39:27.262301   15584 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0930 19:39:27.262328   15584 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0930 19:39:27.319423   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 19:39:27.366509   15584 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0930 19:39:27.366531   15584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0930 19:39:27.474305   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0930 19:39:27.577560   15584 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0930 19:39:27.577589   15584 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0930 19:39:27.717753   15584 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0930 19:39:27.717785   15584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0930 19:39:27.874602   15584 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I0930 19:39:27.874633   15584 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I0930 19:39:27.969590   15584 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0930 19:39:27.969615   15584 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0930 19:39:28.141702   15584 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0930 19:39:28.141732   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0930 19:39:28.341745   15584 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 19:39:28.341776   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I0930 19:39:28.455162   15584 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0930 19:39:28.455188   15584 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0930 19:39:28.678401   15584 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.338898628s)
	I0930 19:39:28.678417   15584 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.338851725s)
	I0930 19:39:28.678450   15584 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
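[Editor's note] The pair of commands completed just above injects a host record into the CoreDNS ConfigMap so that host.minikube.internal resolves to the host-side bridge address (192.168.39.1) from inside the cluster. As a minimal sketch (not part of the log), the sed pipeline adds a stanza like the one shown in the comments below to the Corefile, and the record can be confirmed afterwards with a plain kubectl query:

    # Block inserted ahead of the existing "forward . /etc/resolv.conf" line:
    #   hosts {
    #      192.168.39.1 host.minikube.internal
    #      fallthrough
    #   }
    # Confirm the injected record from the host machine:
    kubectl --context addons-857381 -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'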
	I0930 19:39:28.679459   15584 node_ready.go:35] waiting up to 6m0s for node "addons-857381" to be "Ready" ...
	I0930 19:39:28.692964   15584 node_ready.go:49] node "addons-857381" has status "Ready":"True"
	I0930 19:39:28.693006   15584 node_ready.go:38] duration metric: took 13.512917ms for node "addons-857381" to be "Ready" ...
	I0930 19:39:28.693018   15584 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 19:39:28.694835   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 19:39:28.724666   15584 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace to be "Ready" ...
	I0930 19:39:28.817994   15584 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0930 19:39:28.818022   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0930 19:39:29.132262   15584 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0930 19:39:29.132290   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0930 19:39:29.194565   15584 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-857381" context rescaled to 1 replicas
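[Editor's note] The rescale logged above is minikube trimming CoreDNS to a single replica for the single-node cluster; as a sketch, the equivalent manual operation would be:

    # Sketch only: scale the coredns deployment down to one replica
    kubectl --context addons-857381 -n kube-system scale deployment coredns --replicas=1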
	I0930 19:39:29.322176   15584 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 19:39:29.322196   15584 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0930 19:39:29.581322   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 19:39:30.236110   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.751106656s)
	I0930 19:39:30.236157   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:30.236166   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:30.236216   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.719062545s)
	I0930 19:39:30.236266   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:30.236287   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:30.236293   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.691892299s)
	I0930 19:39:30.236308   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:30.236318   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:30.236701   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:30.236710   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:30.236724   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:30.236732   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:30.236735   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:30.236742   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:30.236746   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:30.236750   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:30.236752   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:30.236754   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:30.236761   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:30.236770   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:30.236772   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:30.236762   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:30.236906   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.792152494s)
	I0930 19:39:30.236927   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:30.236955   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:30.237054   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:30.237074   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:30.237097   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:30.237099   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:30.237107   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:30.237108   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:30.236777   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:30.238459   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:30.238460   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:30.238486   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:30.238495   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:30.238502   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:30.238496   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:30.238513   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:30.238523   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:30.238750   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:30.238766   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:30.238817   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:30.745068   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:32.778531   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:33.027172   15584 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0930 19:39:33.027218   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:33.031039   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:33.031563   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:33.031606   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:33.031748   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:33.031947   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:33.032091   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:33.032216   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:33.310796   15584 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0930 19:39:33.432989   15584 addons.go:234] Setting addon gcp-auth=true in "addons-857381"
	I0930 19:39:33.433075   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:33.433505   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:33.433542   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:33.450114   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33213
	I0930 19:39:33.450542   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:33.451073   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:33.451091   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:33.451989   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:33.452643   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:33.452678   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:33.467603   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I0930 19:39:33.468080   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:33.468533   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:33.468552   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:33.468882   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:33.469131   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:33.470845   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:33.471095   15584 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0930 19:39:33.471131   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:33.473943   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:33.474399   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:33.474457   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:33.474555   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:33.474733   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:33.474879   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:33.475055   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:34.292964   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.672612289s)
	I0930 19:39:34.293018   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293031   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293110   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.604771882s)
	I0930 19:39:34.293148   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293160   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.433811665s)
	I0930 19:39:34.293184   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293196   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293161   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293304   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.111420616s)
	W0930 19:39:34.293345   15584 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0930 19:39:34.293376   15584 retry.go:31] will retry after 271.524616ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
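[Editor's note] The failure above is a CRD registration race rather than a broken manifest: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass custom resource, and the same apply batch creates the volumesnapshotclasses CRD, so the custom resource can be rejected with "no matches for kind" until the API server has established the new CRD. minikube simply retries (the retry logged at 19:39:34 below re-applies with --force). A hedged manual workaround, under the same assumptions about the addon file paths, would be to apply the CRD first and wait for it to be established:

    # Sketch only, not minikube's own retry logic:
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml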
	I0930 19:39:34.293201   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.316113203s)
	I0930 19:39:34.293411   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293416   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.293425   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293425   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.293435   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.293443   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293449   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293531   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.293542   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.293553   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293561   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293579   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.293558   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.974102674s)
	I0930 19:39:34.293609   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.819279733s)
	I0930 19:39:34.293623   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293629   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293637   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293640   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293652   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.293625   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.293675   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.293680   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.293684   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293688   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.293692   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293697   15584 addons.go:475] Verifying addon ingress=true in "addons-857381"
	I0930 19:39:34.293758   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.598892526s)
	I0930 19:39:34.293777   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.294035   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.294048   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.294075   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.294081   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.294089   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.294095   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.294103   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.294111   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.294121   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.294128   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.294135   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.294152   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.294158   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.294343   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.294367   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.294374   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.294390   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.294397   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.294437   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.294456   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.294462   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.294469   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.294482   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.295624   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.295658   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.295665   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.296494   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.296522   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.296528   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.296878   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.296887   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.296895   15584 addons.go:475] Verifying addon registry=true in "addons-857381"
	I0930 19:39:34.296919   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.296931   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.297440   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.297455   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.296941   15584 addons.go:475] Verifying addon metrics-server=true in "addons-857381"
	I0930 19:39:34.299354   15584 out.go:177] * Verifying ingress addon...
	I0930 19:39:34.299415   15584 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-857381 service yakd-dashboard -n yakd-dashboard
	
	I0930 19:39:34.299358   15584 out.go:177] * Verifying registry addon...
	I0930 19:39:34.301748   15584 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0930 19:39:34.303967   15584 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0930 19:39:34.347114   15584 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0930 19:39:34.347135   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:34.347645   15584 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0930 19:39:34.347667   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:34.379293   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.379322   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.379589   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.379665   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.379683   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	W0930 19:39:34.379773   15584 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
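[Editor's note] The warning above is an ordinary optimistic-concurrency conflict: while minikube was clearing the default-class annotation on the local-path StorageClass, another writer updated the object, so the API server rejected the stale update ("the object has been modified"). Re-issuing the change against the current object normally succeeds; a sketch of the equivalent manual patch:

    # Sketch only: clear the default-class annotation on the local-path StorageClass
    kubectl --context addons-857381 patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'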
	I0930 19:39:34.391480   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.391514   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.391850   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.391871   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.565511   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 19:39:34.806600   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:34.810513   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:35.232349   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:35.308666   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:35.309108   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:35.828683   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.247295259s)
	I0930 19:39:35.828738   15584 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.357617005s)
	I0930 19:39:35.828744   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:35.828881   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:35.829247   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:35.829301   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:35.829316   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:35.829324   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:35.829631   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:35.829656   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:35.829663   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:35.829671   15584 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-857381"
	I0930 19:39:35.830414   15584 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0930 19:39:35.831442   15584 out.go:177] * Verifying csi-hostpath-driver addon...
	I0930 19:39:35.833074   15584 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 19:39:35.834046   15584 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0930 19:39:35.834254   15584 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0930 19:39:35.834271   15584 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0930 19:39:35.839940   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:35.840343   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:35.847244   15584 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0930 19:39:35.847276   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:35.938617   15584 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0930 19:39:35.938652   15584 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0930 19:39:36.063928   15584 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 19:39:36.063961   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0930 19:39:36.120314   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 19:39:36.309391   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:36.314236   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:36.340348   15584 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0930 19:39:36.340371   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:36.804872   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.239314953s)
	I0930 19:39:36.804918   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:36.804933   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:36.805171   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:36.805189   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:36.805199   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:36.805208   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:36.805433   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:36.805454   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:36.967227   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:36.967460   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:36.967876   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:37.247223   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:37.307184   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:37.314533   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:37.345378   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:37.526802   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.406437983s)
	I0930 19:39:37.526855   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:37.526879   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:37.527198   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:37.527257   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:37.527271   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:37.527280   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:37.527210   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:37.527501   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:37.527522   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:37.529551   15584 addons.go:475] Verifying addon gcp-auth=true in "addons-857381"
	I0930 19:39:37.531033   15584 out.go:177] * Verifying gcp-auth addon...
	I0930 19:39:37.533661   15584 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0930 19:39:37.562401   15584 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0930 19:39:37.562432   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:37.806737   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:37.809253   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:37.839020   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:38.038065   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:38.305905   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:38.309675   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:38.339300   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:38.537175   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:38.807194   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:38.808182   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:38.839444   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:39.038213   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:39.305965   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:39.307430   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:39.339933   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:39.538121   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:39.731775   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:39.806783   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:39.808801   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:39.839365   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:40.037438   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:40.306846   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:40.308993   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:40.338409   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:40.538055   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:40.806222   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:40.808300   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:40.843451   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:41.038963   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:41.227711   15584 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-jn2h5" not found
	I0930 19:39:41.227748   15584 pod_ready.go:82] duration metric: took 12.503044527s for pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace to be "Ready" ...
	E0930 19:39:41.227761   15584 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-jn2h5" not found
	I0930 19:39:41.227771   15584 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace to be "Ready" ...
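Here the coredns pod being tracked was replaced mid-wait (the jn2h5 replica was deleted and the v2sl5 replica took over), so pod_ready simply moves on to the new name. Listing the live replicas by label shows the same picture; the k8s-app=kube-dns selector is the standard CoreDNS label and is an assumption here, not something printed in the log.

  # Sketch: list the current CoreDNS replicas instead of tracking a single pod name.
  kubectl --context addons-857381 -n kube-system get pods -l k8s-app=kube-dns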
	I0930 19:39:41.308109   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:41.309908   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:41.338978   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:41.537501   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:41.808520   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:41.809542   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:41.840311   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:42.148099   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:42.306741   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:42.308939   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:42.338534   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:42.537098   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:42.805061   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:42.807375   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:42.838837   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:43.037381   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:43.234216   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:43.305308   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:43.308022   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:43.339943   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:43.537233   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:43.805707   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:43.811783   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:43.839510   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:44.037858   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:44.306420   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:44.308934   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:44.338485   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:44.537622   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:44.806844   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:44.808702   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:44.838957   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:45.036848   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:45.234876   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:45.306328   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:45.308712   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:45.343763   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:45.536859   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:45.806211   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:45.808798   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:45.839561   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:46.037708   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:46.308046   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:46.308610   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:46.339634   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:46.537600   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:46.805549   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:46.807820   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:46.838167   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:47.037473   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:47.306050   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:47.308153   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:47.339967   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:47.537051   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:47.734887   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:47.813723   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:47.814301   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:47.840811   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:48.038333   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:48.311855   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:48.312416   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:48.341988   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:48.537651   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:48.806200   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:48.809450   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:48.838999   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:49.037711   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:49.305793   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:49.307907   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:49.339445   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:49.537409   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:49.806209   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:49.808533   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:49.839853   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:50.037854   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:50.234421   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:50.306910   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:50.308611   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:50.339584   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:50.546089   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:50.806461   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:50.808559   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:50.839824   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:51.037595   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:51.305471   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:51.308222   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:51.338416   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:51.537082   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:51.806079   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:51.809149   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:51.838774   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:52.037195   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:52.236908   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:52.307438   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:52.309988   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:52.339786   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:52.539520   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:52.807714   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:52.811031   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:52.839082   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:53.037682   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:53.305629   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:53.307981   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:53.338463   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:53.537098   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:53.806021   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:53.810331   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:53.838769   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:54.091895   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:54.306715   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:54.308449   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:54.338829   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:54.540280   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:54.734396   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:54.805806   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:54.808652   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:54.838947   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:55.037868   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:55.305594   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:55.308020   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:55.338849   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:55.537911   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:55.805987   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:55.808899   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:55.839439   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:56.038492   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:56.316176   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:56.316378   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:56.340370   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:56.538344   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:56.734461   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:56.806516   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:56.809839   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:56.839171   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:57.038430   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:57.305462   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:57.307742   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:57.340252   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:57.537058   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:57.806338   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:57.808421   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:57.839125   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:58.037542   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:58.306156   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:58.307603   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:58.339349   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:58.538543   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:58.734586   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:58.807381   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:58.809120   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:58.908109   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:59.037847   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:59.306124   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:59.307264   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:59.338804   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:59.537010   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:59.806260   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:59.808807   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:59.839439   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:00.036904   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:00.306219   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:00.308277   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:00.339116   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:00.538595   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:00.735277   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:40:00.808141   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:00.808374   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:00.838895   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:01.037765   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:01.306325   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:01.309240   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:01.338334   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:01.540483   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:01.805905   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:01.808599   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:01.856980   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:02.038458   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:02.306037   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:02.308480   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:02.338925   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:02.537489   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:02.806720   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:02.809311   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:02.839215   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:03.038706   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:03.235095   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:40:03.305605   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:03.308118   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:03.339088   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:03.537176   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:03.806049   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:03.808024   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:03.840285   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:04.047284   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:04.234184   15584 pod_ready.go:93] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"True"
	I0930 19:40:04.234214   15584 pod_ready.go:82] duration metric: took 23.006434066s for pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.234227   15584 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-857381" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.238876   15584 pod_ready.go:93] pod "etcd-addons-857381" in "kube-system" namespace has status "Ready":"True"
	I0930 19:40:04.238896   15584 pod_ready.go:82] duration metric: took 4.661667ms for pod "etcd-addons-857381" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.238905   15584 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-857381" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.243161   15584 pod_ready.go:93] pod "kube-apiserver-addons-857381" in "kube-system" namespace has status "Ready":"True"
	I0930 19:40:04.243185   15584 pod_ready.go:82] duration metric: took 4.272909ms for pod "kube-apiserver-addons-857381" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.243204   15584 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-857381" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.247507   15584 pod_ready.go:93] pod "kube-controller-manager-addons-857381" in "kube-system" namespace has status "Ready":"True"
	I0930 19:40:04.247544   15584 pod_ready.go:82] duration metric: took 4.329628ms for pod "kube-controller-manager-addons-857381" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.247558   15584 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wgjdg" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.253066   15584 pod_ready.go:93] pod "kube-proxy-wgjdg" in "kube-system" namespace has status "Ready":"True"
	I0930 19:40:04.253097   15584 pod_ready.go:82] duration metric: took 5.523ms for pod "kube-proxy-wgjdg" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.253108   15584 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-857381" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.305855   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:04.308368   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:04.338826   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:04.537032   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:04.632342   15584 pod_ready.go:93] pod "kube-scheduler-addons-857381" in "kube-system" namespace has status "Ready":"True"
	I0930 19:40:04.632365   15584 pod_ready.go:82] duration metric: took 379.250879ms for pod "kube-scheduler-addons-857381" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.632374   15584 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-9vf5l" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.805742   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:04.808493   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:04.838704   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:05.032445   15584 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-9vf5l" in "kube-system" namespace has status "Ready":"True"
	I0930 19:40:05.032469   15584 pod_ready.go:82] duration metric: took 400.088015ms for pod "nvidia-device-plugin-daemonset-9vf5l" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:05.032476   15584 pod_ready.go:39] duration metric: took 36.339446224s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 19:40:05.032494   15584 api_server.go:52] waiting for apiserver process to appear ...
	I0930 19:40:05.032544   15584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 19:40:05.037739   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:05.077269   15584 api_server.go:72] duration metric: took 39.20789395s to wait for apiserver process to appear ...
	I0930 19:40:05.077297   15584 api_server.go:88] waiting for apiserver healthz status ...
	I0930 19:40:05.077318   15584 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0930 19:40:05.081429   15584 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I0930 19:40:05.082415   15584 api_server.go:141] control plane version: v1.31.1
	I0930 19:40:05.082441   15584 api_server.go:131] duration metric: took 5.135906ms to wait for apiserver health ...
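The healthz gate above is an HTTPS GET against the apiserver endpoint at 192.168.39.16:8443, considered healthy once it returns 200 with the body ok. The same probe can be issued either through kubectl or directly; the curl flags below are illustrative (-k skips verification of the apiserver's self-signed certificate).

  # Sketch: probe apiserver health the same way the log does.
  kubectl --context addons-857381 get --raw /healthz
  curl -sk https://192.168.39.16:8443/healthz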
	I0930 19:40:05.082450   15584 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 19:40:05.248118   15584 system_pods.go:59] 17 kube-system pods found
	I0930 19:40:05.248151   15584 system_pods.go:61] "coredns-7c65d6cfc9-v2sl5" [7ef3332d-3ee7-4d76-bbef-2dfc99673515] Running
	I0930 19:40:05.248159   15584 system_pods.go:61] "csi-hostpath-attacher-0" [e77d98c4-0779-493d-b89f-2fbd4a41b6ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0930 19:40:05.248165   15584 system_pods.go:61] "csi-hostpath-resizer-0" [e32a8d15-973d-404b-9619-491fa27decc4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0930 19:40:05.248173   15584 system_pods.go:61] "csi-hostpathplugin-mlgws" [2f7276d7-5e87-4d2e-bd1a-6e104f3fd164] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0930 19:40:05.248178   15584 system_pods.go:61] "etcd-addons-857381" [74fe1626-8e74-435e-a2dd-f088265d04ac] Running
	I0930 19:40:05.248182   15584 system_pods.go:61] "kube-apiserver-addons-857381" [74358463-31fa-4b2f-ba36-4d0c4f5b03db] Running
	I0930 19:40:05.248185   15584 system_pods.go:61] "kube-controller-manager-addons-857381" [155182cf-78af-450c-923a-dfeb7b2a5358] Running
	I0930 19:40:05.248191   15584 system_pods.go:61] "kube-ingress-dns-minikube" [e1217c30-4e9c-43fa-a3f6-0a640781c5f8] Running
	I0930 19:40:05.248194   15584 system_pods.go:61] "kube-proxy-wgjdg" [b2646cb6-ecf8-4e44-9d48-b49eead7d727] Running
	I0930 19:40:05.248197   15584 system_pods.go:61] "kube-scheduler-addons-857381" [952cc18b-d292-4baa-8a03-dce05fdabe5c] Running
	I0930 19:40:05.248204   15584 system_pods.go:61] "metrics-server-84c5f94fbc-cdn25" [b344652c-decb-4b68-9eb4-dd034008cf98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 19:40:05.248207   15584 system_pods.go:61] "nvidia-device-plugin-daemonset-9vf5l" [f2848172-eec4-47cc-9e9d-36026e22b55c] Running
	I0930 19:40:05.248211   15584 system_pods.go:61] "registry-66c9cd494c-frqrv" [e66e6fb9-7274-4a0b-b787-c64abc8ffe04] Running
	I0930 19:40:05.248216   15584 system_pods.go:61] "registry-proxy-m2j7k" [cf0e9fcc-d5e3-4dd8-8337-406b07ab9495] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0930 19:40:05.248223   15584 system_pods.go:61] "snapshot-controller-56fcc65765-g26cx" [0a7563fa-d127-473c-b9a1-ece459d51ec0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 19:40:05.248256   15584 system_pods.go:61] "snapshot-controller-56fcc65765-vqjbn" [68d33976-a421-4696-83a7-303c2bf65ba3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 19:40:05.248264   15584 system_pods.go:61] "storage-provisioner" [cf253e6d-52dd-4bbf-a505-61269b1bb4d1] Running
	I0930 19:40:05.248271   15584 system_pods.go:74] duration metric: took 165.811366ms to wait for pod list to return data ...
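The system_pods listing above mixes Running pods with addon pods still reporting ContainersNotReady. A quick way to reproduce the same snapshot is a kubectl listing of kube-system; the jsonpath expression below is only an illustrative formatting choice.

  # Sketch: show phase and readiness for every kube-system pod.
  kubectl --context addons-857381 -n kube-system get pods -o wide
  kubectl --context addons-857381 -n kube-system get pods \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'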
	I0930 19:40:05.248282   15584 default_sa.go:34] waiting for default service account to be created ...
	I0930 19:40:05.319334   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:05.321630   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:05.349289   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:05.432684   15584 default_sa.go:45] found service account: "default"
	I0930 19:40:05.432711   15584 default_sa.go:55] duration metric: took 184.42325ms for default service account to be created ...
	I0930 19:40:05.432720   15584 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 19:40:05.537876   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:05.637325   15584 system_pods.go:86] 17 kube-system pods found
	I0930 19:40:05.637354   15584 system_pods.go:89] "coredns-7c65d6cfc9-v2sl5" [7ef3332d-3ee7-4d76-bbef-2dfc99673515] Running
	I0930 19:40:05.637363   15584 system_pods.go:89] "csi-hostpath-attacher-0" [e77d98c4-0779-493d-b89f-2fbd4a41b6ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0930 19:40:05.637368   15584 system_pods.go:89] "csi-hostpath-resizer-0" [e32a8d15-973d-404b-9619-491fa27decc4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0930 19:40:05.637376   15584 system_pods.go:89] "csi-hostpathplugin-mlgws" [2f7276d7-5e87-4d2e-bd1a-6e104f3fd164] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0930 19:40:05.637380   15584 system_pods.go:89] "etcd-addons-857381" [74fe1626-8e74-435e-a2dd-f088265d04ac] Running
	I0930 19:40:05.637384   15584 system_pods.go:89] "kube-apiserver-addons-857381" [74358463-31fa-4b2f-ba36-4d0c4f5b03db] Running
	I0930 19:40:05.637387   15584 system_pods.go:89] "kube-controller-manager-addons-857381" [155182cf-78af-450c-923a-dfeb7b2a5358] Running
	I0930 19:40:05.637392   15584 system_pods.go:89] "kube-ingress-dns-minikube" [e1217c30-4e9c-43fa-a3f6-0a640781c5f8] Running
	I0930 19:40:05.637395   15584 system_pods.go:89] "kube-proxy-wgjdg" [b2646cb6-ecf8-4e44-9d48-b49eead7d727] Running
	I0930 19:40:05.637399   15584 system_pods.go:89] "kube-scheduler-addons-857381" [952cc18b-d292-4baa-8a03-dce05fdabe5c] Running
	I0930 19:40:05.637405   15584 system_pods.go:89] "metrics-server-84c5f94fbc-cdn25" [b344652c-decb-4b68-9eb4-dd034008cf98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 19:40:05.637410   15584 system_pods.go:89] "nvidia-device-plugin-daemonset-9vf5l" [f2848172-eec4-47cc-9e9d-36026e22b55c] Running
	I0930 19:40:05.637416   15584 system_pods.go:89] "registry-66c9cd494c-frqrv" [e66e6fb9-7274-4a0b-b787-c64abc8ffe04] Running
	I0930 19:40:05.637423   15584 system_pods.go:89] "registry-proxy-m2j7k" [cf0e9fcc-d5e3-4dd8-8337-406b07ab9495] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0930 19:40:05.637433   15584 system_pods.go:89] "snapshot-controller-56fcc65765-g26cx" [0a7563fa-d127-473c-b9a1-ece459d51ec0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 19:40:05.637446   15584 system_pods.go:89] "snapshot-controller-56fcc65765-vqjbn" [68d33976-a421-4696-83a7-303c2bf65ba3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 19:40:05.637453   15584 system_pods.go:89] "storage-provisioner" [cf253e6d-52dd-4bbf-a505-61269b1bb4d1] Running
	I0930 19:40:05.637460   15584 system_pods.go:126] duration metric: took 204.735253ms to wait for k8s-apps to be running ...
	I0930 19:40:05.637471   15584 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 19:40:05.637512   15584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 19:40:05.664635   15584 system_svc.go:56] duration metric: took 27.157381ms WaitForService to wait for kubelet
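The kubelet check above runs systemctl inside the guest over SSH. The equivalent manual check goes through minikube ssh; the command below is a simplified sketch of the ssh_runner invocation in the log (it drops the --quiet flag so the unit state is printed).

  # Sketch: verify the kubelet unit is active on the node.
  minikube -p addons-857381 ssh -- sudo systemctl is-active kubelet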
	I0930 19:40:05.664667   15584 kubeadm.go:582] duration metric: took 39.795308561s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 19:40:05.664684   15584 node_conditions.go:102] verifying NodePressure condition ...
	I0930 19:40:05.806621   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:05.809736   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:05.833501   15584 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 19:40:05.833531   15584 node_conditions.go:123] node cpu capacity is 2
	I0930 19:40:05.833544   15584 node_conditions.go:105] duration metric: took 168.855642ms to run NodePressure ...
	I0930 19:40:05.833558   15584 start.go:241] waiting for startup goroutines ...
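The NodePressure step reads the node's conditions and capacity (ephemeral storage 17734596Ki and 2 CPUs in this run). The same fields can be pulled straight from the node object; the node name addons-857381 is inferred from the control-plane pod names above, and the jsonpath expressions are illustrative.

  # Sketch: dump the node conditions and capacity that the NodePressure check reads.
  kubectl --context addons-857381 get node addons-857381 \
    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
  kubectl --context addons-857381 get node addons-857381 -o jsonpath='{.status.capacity}'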
	I0930 19:40:05.838853   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:06.201378   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:06.305678   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:06.309215   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:06.338426   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:06.537088   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:06.805556   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:06.807670   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:06.837888   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:07.037594   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:07.306997   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:07.308373   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:07.339605   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:07.537323   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:07.806225   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:07.808962   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:07.840424   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:08.038714   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:08.315435   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:08.316984   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:08.338567   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:08.539077   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:08.806404   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:08.807794   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:08.838111   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:09.039411   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:09.306781   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:09.308706   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:09.338817   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:09.541907   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:09.806151   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:09.808679   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:09.839864   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:10.037757   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:10.306476   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:10.309294   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:10.338729   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:10.537365   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:10.806186   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:10.808553   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:10.838954   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:11.038197   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:11.305362   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:11.307868   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:11.338450   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:11.537023   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:11.805980   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:11.807997   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:11.838687   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:12.038101   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:12.305891   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:12.308058   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:12.338527   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:12.537006   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:12.805026   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:12.807440   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:12.838745   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:13.036973   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:13.316029   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:13.316819   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:13.339318   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:13.537656   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:13.806393   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:13.809221   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:13.838943   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:14.036710   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:14.305575   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:14.307510   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:14.339024   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:14.746118   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:14.805546   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:14.808182   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:14.839255   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:15.038456   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:15.306259   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:15.308763   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:15.338218   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:15.537663   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:15.806502   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:15.809322   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:15.838920   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:16.038201   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:16.305842   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:16.308119   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:16.338442   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:16.536865   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:16.806565   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:16.809083   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:16.839057   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:17.037476   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:17.306218   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:17.308220   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:17.338656   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:17.538612   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:17.806377   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:17.808904   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:17.838105   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:18.037920   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:18.306007   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:18.308381   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:18.338711   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:18.537393   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:18.806335   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:18.809582   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:18.840209   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:19.036945   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:19.306469   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:19.308307   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:19.338954   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:19.537674   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:19.806934   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:19.808546   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:19.839444   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:20.037215   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:20.305907   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:20.308689   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:20.339344   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:20.538374   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:20.808450   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:20.808767   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:20.839145   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:21.037658   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:21.306332   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:21.310114   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:21.341224   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:21.537216   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:21.806169   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:21.808637   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:21.842275   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:22.038267   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:22.305922   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:22.308301   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:22.342967   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:22.537729   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:22.810668   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:22.811005   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:22.839120   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:23.037454   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:23.306993   15584 kapi.go:107] duration metric: took 49.005242803s to wait for kubernetes.io/minikube-addons=registry ...
	I0930 19:40:23.308292   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:23.340880   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:23.537538   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:23.808649   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:23.838719   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:24.037027   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:24.311020   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:24.339930   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:24.537448   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:24.808165   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:24.840330   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:25.038012   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:25.310485   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:25.338594   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:25.537562   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:25.808768   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:25.840491   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:26.337884   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:26.339802   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:26.342878   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:26.538146   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:26.810441   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:26.911692   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:27.037138   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:27.307981   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:27.338514   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:27.537541   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:27.808034   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:27.838767   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:28.037949   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:28.315914   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:28.346567   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:28.539119   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:28.808853   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:28.838437   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:29.036989   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:29.308729   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:29.339702   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:29.537814   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:29.808942   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:29.841777   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:30.038084   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:30.307636   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:30.339110   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:30.538667   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:30.808685   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:30.838911   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:31.037786   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:31.309187   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:31.338193   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:31.538062   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:31.810154   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:31.844570   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:32.036891   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:32.309059   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:32.338920   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:32.538629   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:32.811819   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:32.840003   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:33.298376   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:33.314136   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:33.405537   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:33.536782   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:33.810211   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:33.838557   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:34.038758   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:34.308572   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:34.338993   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:34.538664   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:34.809265   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:34.838824   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:35.038820   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:35.309811   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:35.338667   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:35.538473   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:35.809185   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:35.840427   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:36.037848   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:36.309172   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:36.344741   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:36.537522   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:36.815421   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:36.846933   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:37.038118   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:37.307913   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:37.339870   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:37.545907   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:37.809630   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:37.838804   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:38.036948   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:38.319878   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:38.342775   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:38.537998   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:38.809824   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:38.915083   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:39.041765   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:39.309331   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:39.342044   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:39.537640   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:39.808078   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:39.838346   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:40.036732   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:40.309104   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:40.338364   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:40.544312   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:40.808442   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:40.909737   15584 kapi.go:107] duration metric: took 1m5.075684221s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0930 19:40:41.037117   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:41.307717   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:41.538444   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:41.808544   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:42.037764   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:42.308953   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:42.538432   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:42.808497   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:43.038173   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:43.309165   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:43.537280   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:43.808012   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:44.037523   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:44.308211   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:45.043029   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:45.043273   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:45.047140   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:45.308014   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:45.537537   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:45.808735   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:46.037888   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:46.309235   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:46.537513   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:46.808314   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:47.038548   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:47.308644   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:47.538083   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:47.807931   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:48.038183   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:48.308144   15584 kapi.go:107] duration metric: took 1m14.004175846s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0930 19:40:48.538107   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:49.038498   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:49.537789   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:50.038155   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:50.613944   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:51.038032   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:51.537506   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:52.040616   15584 kapi.go:107] duration metric: took 1m14.506956805s to wait for kubernetes.io/minikube-addons=gcp-auth ...
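
The block of kapi.go lines above is a readiness poll: minikube re-checks, roughly twice a second, whether the pods selected by each addon label (kubernetes.io/minikube-addons=registry, app.kubernetes.io/name=ingress-nginx, csi-hostpath-driver, gcp-auth) have left Pending, and logs a duration metric once each group is Running. The sketch below shows that general pattern with client-go; it is not minikube's actual kapi implementation, and the namespace, selector, poll interval, and timeout are illustrative assumptions taken from values visible in this report.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls until every pod matching selector in ns reports phase Running,
// mirroring the "waiting for pod ... current state: Pending" loop in the log above.
func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			if len(pods.Items) == 0 {
				return false, nil // nothing scheduled yet: still effectively "Pending"
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Illustrative values: the registry addon label and a 6m budget, as seen in this report.
	if err := waitForPods(context.Background(), cs, "kube-system",
		"kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("all matching pods are Running")
}
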
	I0930 19:40:52.041976   15584 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-857381 cluster.
	I0930 19:40:52.043243   15584 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0930 19:40:52.044410   15584 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0930 19:40:52.045758   15584 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, cloud-spanner, storage-provisioner, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0930 19:40:52.046831   15584 addons.go:510] duration metric: took 1m26.177460547s for enable addons: enabled=[ingress-dns nvidia-device-plugin cloud-spanner storage-provisioner inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0930 19:40:52.046869   15584 start.go:246] waiting for cluster config update ...
	I0930 19:40:52.046883   15584 start.go:255] writing updated cluster config ...
	I0930 19:40:52.047117   15584 ssh_runner.go:195] Run: rm -f paused
	I0930 19:40:52.098683   15584 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 19:40:52.100271   15584 out.go:177] * Done! kubectl is now configured to use "addons-857381" cluster and "default" namespace by default
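
The gcp-auth message in the log above notes that a pod can opt out of credential mounting by carrying a label with the gcp-auth-skip-secret key. Below is a minimal sketch of such a pod, built with client-go types and printed as a manifest; the label key comes from the message itself, while the value "true", the pod name, and the container are assumptions added for illustration. Applying the printed manifest should, per that message, create a pod that the gcp-auth webhook leaves without mounted credentials.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Label key taken from the gcp-auth message above; the value "true" is an assumption.
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // pipe this into `kubectl apply -f -` to create the pod
}
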
	
	
	==> CRI-O <==
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.545609594Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727725810545399389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:524209,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=449325f4-38e5-431d-be81-d9a874bf63f6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.547042552Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d40586f-db13-49cd-949d-608641552f36 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.547298884Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d40586f-db13-49cd-949d-608641552f36 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.548031797Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed0ef0c37aa10445b5b20c6e4e08f971d0959639883af771def3f3ee899e6770,PodSandboxId:cab0655c8c501e5fbd2f4fb43bc5478bf361374e0fdbaa9b0649058f8c84e917,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1727725788970878126,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-2b406b11-e501-447a-83ed-ef44d83e41ee,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ffe27fbe-f98e-422c-8543-b0df39ee4c28,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27fd88b9faaa7714b48a4b7fc924557af57f3f0f70c0d55b12cb6d594da63f54,PodSandboxId:b76e74399d169c5950192411eb82bd7ccf1f807abcd088292c23d936592c8bc6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1727725785573775036,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cea76bb6-9c73-43ae-8a4b-9e2ae12f5ae0,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abe98dd70d0355aaeadd486e0b8190f9daf049f376ab06db1d663a2aa2a512c1,PodSandboxId:30e6f619b85c6a68275a9a25eb1670098d690434a8f7e2aeb4758203623ce28f,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1727725778685724938,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-2b406b11-e501-447a-83ed-ef44d83e41ee,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: a6072416-e714-495a-9019-5c4cd9f37cbb,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a550a25e9f7b3586687046f535b548383c78708b97eaeed7576b35b5dcee1ef,PodSandboxId:2927b71f84ff3f76f3a52a1aecbd72a68cfa19e0cdca879f3210c117c839294f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727725251528262837,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-scvnm,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 5e438281-5451-4290-8c50-14fb79a66185,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a2d0f08874d9e73873481108ad4b7c2ace12dbf72ff01f34def4fc1e5cfff5d,PodSandboxId:688bd1bfa229439100a9354e8a964323ff03263a06e3c5df7e83b4a73875b57c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1727725247049962052,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-9bfzl,io.kubernetes.pod.namespace: ingress-ngin
x,io.kubernetes.pod.uid: f16cd6ff-05a8-47e5-963e-ef20ce165eeb,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:46f33863b6d216c85078337b5eefc34ba3141590e24ec8b9dfbb21d10595b84e,PodSandboxId:3e88376f8e4f3c5da30623befddc798d3597e97f13199051087ad81a73199883,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fada
ef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727725226573713791,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cgdc6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 81717421-6023-4cfb-acff-733a7ea02838,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:831ffd5c60190ad65b735f6a1c699bb486f24c54379a56cc2a077aac0eb4c325,PodSandboxId:f002aa1c3285a2c33f423dfce6f5f97d16dbd6ad2adcb4888ad0d38d814ac293,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0
e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727725226432061841,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qv7n8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8162826e-db14-46b9-93f2-456169ccfb0d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606aacc25dc7b552db41badd2b01633126455a44368d823c8566b200abc0836b,PodSandboxId:98be2d636c186daf56eee2476248b6f4ffa19bdc3cbc77538eaad093a82c00e0,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/ranc
her/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1727725218077557047,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-cb4rt,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 7df0eb14-f1ff-4d87-a485-efb580a3304b,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6b2eb356f364b36c053fa5a0a1c21d994a9edc83b54fdd58a38023aea0e8013,PodSandboxId:5d866c50845926549f01df87a9908307213fc5caa20603d75bdd4c898c23d1c3,Metadata:&ContainerMetadata{Name:met
rics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727725209633050557,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-cdn25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b344652c-decb-4b68-9eb4-dd034008cf98,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbbc7c85eaec24fb4d15cf79a7766331aec956
ce9799202bccf45c4baadd4428,PodSandboxId:a13890b89820ab8ea08ffa95c7aac76bf27d1c8594dd5f3b2d6bc4ea6ae958f9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1727725187794389049,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1217c30-4e9c-43fa-a3f6-0a640781c5f8,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34fdddbc2729cc844420cf24fc3341fed3211c151111cf0f43b8a87ed1b078ab,PodSandboxId:44e738ed93b01a10a8ff2fe7b585def59079d101143e4555486329cd7fcc73b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727725171524308003,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf253e6d-52dd-4bbf-a505-61269b1bb4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2f669f59ff8429d81fb4f5162e27ce06e17473d4605e0d1412e6b895b9ffec,PodSandboxId:7264dffbc56c756580b1699b46a98d026060043f7ded85528176c4468f3e54d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727725169673865152,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2sl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ef3332d-3ee7-4d76-bbef-2dfc99673515,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4a5712da231889676b696f91670decbc5f5f8c36b118a9dc265d962f5d249a,PodSandboxId:cbd8bbc0b830527874fdbef734642c050e7e6a62986ee8cdf383f82424b3b1c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727725167873622399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wgjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2646cb6-ecf8-4e44-9d48-b49eead7d727,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611b55895a7c3a5335fbb46b041625f86ca6d6031352bcde4b032dab9de47e67,PodSandboxId:472730560a69cb865a7de097b81e5d7c46896bf3dfef03d491afa5c9add05b76,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727725156408359954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 509234ffc60223733ef52b2009dbce73,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f613c2d90480ee1ae214e03080c452973dd772a7c6f008a8764350f7e1943eb,PodSandboxId:45990caa9ec749761565324cc3ffda13e0181f617a83701013fa0c2c91467ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727725156391153567,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 462c1efc125130690ce0abe7c0d6a433,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f054c208a5bd0eb1494d0e174024a758694fd0eca27fb153e9b6b1ba005ff377,PodSandboxId:f599de907322667aeed83b2705fea682b338d49da5ee13de1790e02e7e4e8a99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727725156395714900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c22ddcce59702bad76d277171c4f1a8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartC
ount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6ba6b23751a363846407405c025305c70dc80dbf68869142a0ee6929093b01e,PodSandboxId:329303fea433cc4c43cb1ec6a4a7d52fafbb483b77613fefca8466b49fcac7b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727725156374738044,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aaf74d96d0249f06846b94c74ecc9cd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d40586f-db13-49cd-949d-608641552f36 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.581279996Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7298546c-d8c2-41f5-974a-cfd1d642ce28 name=/runtime.v1.RuntimeService/Version
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.581367423Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7298546c-d8c2-41f5-974a-cfd1d642ce28 name=/runtime.v1.RuntimeService/Version
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.582317561Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b3f0c3f6-9811-497a-909e-a7258580298b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.583380030Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727725810583353248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:524209,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3f0c3f6-9811-497a-909e-a7258580298b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.584075299Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67144460-85ac-464c-8c04-6b4e25cca794 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.584143601Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67144460-85ac-464c-8c04-6b4e25cca794 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.584651182Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed0ef0c37aa10445b5b20c6e4e08f971d0959639883af771def3f3ee899e6770,PodSandboxId:cab0655c8c501e5fbd2f4fb43bc5478bf361374e0fdbaa9b0649058f8c84e917,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1727725788970878126,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-2b406b11-e501-447a-83ed-ef44d83e41ee,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ffe27fbe-f98e-422c-8543-b0df39ee4c28,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27fd88b9faaa7714b48a4b7fc924557af57f3f0f70c0d55b12cb6d594da63f54,PodSandboxId:b76e74399d169c5950192411eb82bd7ccf1f807abcd088292c23d936592c8bc6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1727725785573775036,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cea76bb6-9c73-43ae-8a4b-9e2ae12f5ae0,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abe98dd70d0355aaeadd486e0b8190f9daf049f376ab06db1d663a2aa2a512c1,PodSandboxId:30e6f619b85c6a68275a9a25eb1670098d690434a8f7e2aeb4758203623ce28f,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1727725778685724938,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-2b406b11-e501-447a-83ed-ef44d83e41ee,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: a6072416-e714-495a-9019-5c4cd9f37cbb,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a550a25e9f7b3586687046f535b548383c78708b97eaeed7576b35b5dcee1ef,PodSandboxId:2927b71f84ff3f76f3a52a1aecbd72a68cfa19e0cdca879f3210c117c839294f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727725251528262837,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-scvnm,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 5e438281-5451-4290-8c50-14fb79a66185,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a2d0f08874d9e73873481108ad4b7c2ace12dbf72ff01f34def4fc1e5cfff5d,PodSandboxId:688bd1bfa229439100a9354e8a964323ff03263a06e3c5df7e83b4a73875b57c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1727725247049962052,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-9bfzl,io.kubernetes.pod.namespace: ingress-ngin
x,io.kubernetes.pod.uid: f16cd6ff-05a8-47e5-963e-ef20ce165eeb,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:46f33863b6d216c85078337b5eefc34ba3141590e24ec8b9dfbb21d10595b84e,PodSandboxId:3e88376f8e4f3c5da30623befddc798d3597e97f13199051087ad81a73199883,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fada
ef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727725226573713791,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cgdc6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 81717421-6023-4cfb-acff-733a7ea02838,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:831ffd5c60190ad65b735f6a1c699bb486f24c54379a56cc2a077aac0eb4c325,PodSandboxId:f002aa1c3285a2c33f423dfce6f5f97d16dbd6ad2adcb4888ad0d38d814ac293,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0
e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727725226432061841,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qv7n8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8162826e-db14-46b9-93f2-456169ccfb0d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606aacc25dc7b552db41badd2b01633126455a44368d823c8566b200abc0836b,PodSandboxId:98be2d636c186daf56eee2476248b6f4ffa19bdc3cbc77538eaad093a82c00e0,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/ranc
her/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1727725218077557047,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-cb4rt,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 7df0eb14-f1ff-4d87-a485-efb580a3304b,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6b2eb356f364b36c053fa5a0a1c21d994a9edc83b54fdd58a38023aea0e8013,PodSandboxId:5d866c50845926549f01df87a9908307213fc5caa20603d75bdd4c898c23d1c3,Metadata:&ContainerMetadata{Name:met
rics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727725209633050557,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-cdn25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b344652c-decb-4b68-9eb4-dd034008cf98,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbbc7c85eaec24fb4d15cf79a7766331aec956
ce9799202bccf45c4baadd4428,PodSandboxId:a13890b89820ab8ea08ffa95c7aac76bf27d1c8594dd5f3b2d6bc4ea6ae958f9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1727725187794389049,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1217c30-4e9c-43fa-a3f6-0a640781c5f8,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34fdddbc2729cc844420cf24fc3341fed3211c151111cf0f43b8a87ed1b078ab,PodSandboxId:44e738ed93b01a10a8ff2fe7b585def59079d101143e4555486329cd7fcc73b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727725171524308003,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf253e6d-52dd-4bbf-a505-61269b1bb4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2f669f59ff8429d81fb4f5162e27ce06e17473d4605e0d1412e6b895b9ffec,PodSandboxId:7264dffbc56c756580b1699b46a98d026060043f7ded85528176c4468f3e54d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727725169673865152,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2sl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ef3332d-3ee7-4d76-bbef-2dfc99673515,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4a5712da231889676b696f91670decbc5f5f8c36b118a9dc265d962f5d249a,PodSandboxId:cbd8bbc0b830527874fdbef734642c050e7e6a62986ee8cdf383f82424b3b1c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727725167873622399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wgjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2646cb6-ecf8-4e44-9d48-b49eead7d727,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611b55895a7c3a5335fbb46b041625f86ca6d6031352bcde4b032dab9de47e67,PodSandboxId:472730560a69cb865a7de097b81e5d7c46896bf3dfef03d491afa5c9add05b76,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727725156408359954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 509234ffc60223733ef52b2009dbce73,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f613c2d90480ee1ae214e03080c452973dd772a7c6f008a8764350f7e1943eb,PodSandboxId:45990caa9ec749761565324cc3ffda13e0181f617a83701013fa0c2c91467ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727725156391153567,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 462c1efc125130690ce0abe7c0d6a433,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f054c208a5bd0eb1494d0e174024a758694fd0eca27fb153e9b6b1ba005ff377,PodSandboxId:f599de907322667aeed83b2705fea682b338d49da5ee13de1790e02e7e4e8a99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727725156395714900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c22ddcce59702bad76d277171c4f1a8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartC
ount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6ba6b23751a363846407405c025305c70dc80dbf68869142a0ee6929093b01e,PodSandboxId:329303fea433cc4c43cb1ec6a4a7d52fafbb483b77613fefca8466b49fcac7b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727725156374738044,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aaf74d96d0249f06846b94c74ecc9cd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67144460-85ac-464c-8c04-6b4e25cca794 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.618369325Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=43508e84-7bcb-4c50-bad8-feb2c9f7d887 name=/runtime.v1.RuntimeService/Version
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.618473433Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=43508e84-7bcb-4c50-bad8-feb2c9f7d887 name=/runtime.v1.RuntimeService/Version
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.619882299Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6168e92-655f-41ca-94ef-dd6bcb7ad056 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.620952835Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727725810620926371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:524209,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6168e92-655f-41ca-94ef-dd6bcb7ad056 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.621630256Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c1ad324-651d-4025-b4b8-bf3eedbf154e name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.621710471Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c1ad324-651d-4025-b4b8-bf3eedbf154e name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.622730197Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed0ef0c37aa10445b5b20c6e4e08f971d0959639883af771def3f3ee899e6770,PodSandboxId:cab0655c8c501e5fbd2f4fb43bc5478bf361374e0fdbaa9b0649058f8c84e917,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1727725788970878126,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-2b406b11-e501-447a-83ed-ef44d83e41ee,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ffe27fbe-f98e-422c-8543-b0df39ee4c28,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27fd88b9faaa7714b48a4b7fc924557af57f3f0f70c0d55b12cb6d594da63f54,PodSandboxId:b76e74399d169c5950192411eb82bd7ccf1f807abcd088292c23d936592c8bc6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1727725785573775036,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cea76bb6-9c73-43ae-8a4b-9e2ae12f5ae0,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abe98dd70d0355aaeadd486e0b8190f9daf049f376ab06db1d663a2aa2a512c1,PodSandboxId:30e6f619b85c6a68275a9a25eb1670098d690434a8f7e2aeb4758203623ce28f,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1727725778685724938,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-2b406b11-e501-447a-83ed-ef44d83e41ee,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: a6072416-e714-495a-9019-5c4cd9f37cbb,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a550a25e9f7b3586687046f535b548383c78708b97eaeed7576b35b5dcee1ef,PodSandboxId:2927b71f84ff3f76f3a52a1aecbd72a68cfa19e0cdca879f3210c117c839294f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727725251528262837,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-scvnm,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 5e438281-5451-4290-8c50-14fb79a66185,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a2d0f08874d9e73873481108ad4b7c2ace12dbf72ff01f34def4fc1e5cfff5d,PodSandboxId:688bd1bfa229439100a9354e8a964323ff03263a06e3c5df7e83b4a73875b57c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1727725247049962052,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-9bfzl,io.kubernetes.pod.namespace: ingress-ngin
x,io.kubernetes.pod.uid: f16cd6ff-05a8-47e5-963e-ef20ce165eeb,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:46f33863b6d216c85078337b5eefc34ba3141590e24ec8b9dfbb21d10595b84e,PodSandboxId:3e88376f8e4f3c5da30623befddc798d3597e97f13199051087ad81a73199883,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fada
ef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727725226573713791,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cgdc6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 81717421-6023-4cfb-acff-733a7ea02838,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:831ffd5c60190ad65b735f6a1c699bb486f24c54379a56cc2a077aac0eb4c325,PodSandboxId:f002aa1c3285a2c33f423dfce6f5f97d16dbd6ad2adcb4888ad0d38d814ac293,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0
e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727725226432061841,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qv7n8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8162826e-db14-46b9-93f2-456169ccfb0d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606aacc25dc7b552db41badd2b01633126455a44368d823c8566b200abc0836b,PodSandboxId:98be2d636c186daf56eee2476248b6f4ffa19bdc3cbc77538eaad093a82c00e0,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/ranc
her/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1727725218077557047,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-cb4rt,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 7df0eb14-f1ff-4d87-a485-efb580a3304b,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6b2eb356f364b36c053fa5a0a1c21d994a9edc83b54fdd58a38023aea0e8013,PodSandboxId:5d866c50845926549f01df87a9908307213fc5caa20603d75bdd4c898c23d1c3,Metadata:&ContainerMetadata{Name:met
rics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727725209633050557,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-cdn25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b344652c-decb-4b68-9eb4-dd034008cf98,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbbc7c85eaec24fb4d15cf79a7766331aec956
ce9799202bccf45c4baadd4428,PodSandboxId:a13890b89820ab8ea08ffa95c7aac76bf27d1c8594dd5f3b2d6bc4ea6ae958f9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1727725187794389049,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1217c30-4e9c-43fa-a3f6-0a640781c5f8,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34fdddbc2729cc844420cf24fc3341fed3211c151111cf0f43b8a87ed1b078ab,PodSandboxId:44e738ed93b01a10a8ff2fe7b585def59079d101143e4555486329cd7fcc73b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727725171524308003,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf253e6d-52dd-4bbf-a505-61269b1bb4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2f669f59ff8429d81fb4f5162e27ce06e17473d4605e0d1412e6b895b9ffec,PodSandboxId:7264dffbc56c756580b1699b46a98d026060043f7ded85528176c4468f3e54d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727725169673865152,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2sl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ef3332d-3ee7-4d76-bbef-2dfc99673515,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4a5712da231889676b696f91670decbc5f5f8c36b118a9dc265d962f5d249a,PodSandboxId:cbd8bbc0b830527874fdbef734642c050e7e6a62986ee8cdf383f82424b3b1c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727725167873622399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wgjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2646cb6-ecf8-4e44-9d48-b49eead7d727,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611b55895a7c3a5335fbb46b041625f86ca6d6031352bcde4b032dab9de47e67,PodSandboxId:472730560a69cb865a7de097b81e5d7c46896bf3dfef03d491afa5c9add05b76,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727725156408359954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 509234ffc60223733ef52b2009dbce73,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f613c2d90480ee1ae214e03080c452973dd772a7c6f008a8764350f7e1943eb,PodSandboxId:45990caa9ec749761565324cc3ffda13e0181f617a83701013fa0c2c91467ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727725156391153567,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 462c1efc125130690ce0abe7c0d6a433,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f054c208a5bd0eb1494d0e174024a758694fd0eca27fb153e9b6b1ba005ff377,PodSandboxId:f599de907322667aeed83b2705fea682b338d49da5ee13de1790e02e7e4e8a99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727725156395714900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c22ddcce59702bad76d277171c4f1a8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartC
ount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6ba6b23751a363846407405c025305c70dc80dbf68869142a0ee6929093b01e,PodSandboxId:329303fea433cc4c43cb1ec6a4a7d52fafbb483b77613fefca8466b49fcac7b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727725156374738044,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aaf74d96d0249f06846b94c74ecc9cd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c1ad324-651d-4025-b4b8-bf3eedbf154e name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.664957075Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f9b828df-90b7-442c-a35d-fce3ec2ef916 name=/runtime.v1.RuntimeService/Version
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.665052517Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f9b828df-90b7-442c-a35d-fce3ec2ef916 name=/runtime.v1.RuntimeService/Version
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.666177306Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37b097cb-34ad-4411-a596-ec85cf6f5aae name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.667327774Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727725810667302378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:524209,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37b097cb-34ad-4411-a596-ec85cf6f5aae name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.667827265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=039b003e-893a-4b9c-bf4f-8096291853b1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.667931738Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=039b003e-893a-4b9c-bf4f-8096291853b1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:50:10 addons-857381 crio[658]: time="2024-09-30 19:50:10.668293118Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed0ef0c37aa10445b5b20c6e4e08f971d0959639883af771def3f3ee899e6770,PodSandboxId:cab0655c8c501e5fbd2f4fb43bc5478bf361374e0fdbaa9b0649058f8c84e917,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1727725788970878126,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-2b406b11-e501-447a-83ed-ef44d83e41ee,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ffe27fbe-f98e-422c-8543-b0df39ee4c28,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27fd88b9faaa7714b48a4b7fc924557af57f3f0f70c0d55b12cb6d594da63f54,PodSandboxId:b76e74399d169c5950192411eb82bd7ccf1f807abcd088292c23d936592c8bc6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1727725785573775036,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cea76bb6-9c73-43ae-8a4b-9e2ae12f5ae0,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abe98dd70d0355aaeadd486e0b8190f9daf049f376ab06db1d663a2aa2a512c1,PodSandboxId:30e6f619b85c6a68275a9a25eb1670098d690434a8f7e2aeb4758203623ce28f,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1727725778685724938,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-2b406b11-e501-447a-83ed-ef44d83e41ee,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: a6072416-e714-495a-9019-5c4cd9f37cbb,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a550a25e9f7b3586687046f535b548383c78708b97eaeed7576b35b5dcee1ef,PodSandboxId:2927b71f84ff3f76f3a52a1aecbd72a68cfa19e0cdca879f3210c117c839294f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727725251528262837,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-scvnm,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 5e438281-5451-4290-8c50-14fb79a66185,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a2d0f08874d9e73873481108ad4b7c2ace12dbf72ff01f34def4fc1e5cfff5d,PodSandboxId:688bd1bfa229439100a9354e8a964323ff03263a06e3c5df7e83b4a73875b57c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1727725247049962052,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-9bfzl,io.kubernetes.pod.namespace: ingress-ngin
x,io.kubernetes.pod.uid: f16cd6ff-05a8-47e5-963e-ef20ce165eeb,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:46f33863b6d216c85078337b5eefc34ba3141590e24ec8b9dfbb21d10595b84e,PodSandboxId:3e88376f8e4f3c5da30623befddc798d3597e97f13199051087ad81a73199883,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fada
ef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727725226573713791,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cgdc6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 81717421-6023-4cfb-acff-733a7ea02838,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:831ffd5c60190ad65b735f6a1c699bb486f24c54379a56cc2a077aac0eb4c325,PodSandboxId:f002aa1c3285a2c33f423dfce6f5f97d16dbd6ad2adcb4888ad0d38d814ac293,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0
e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727725226432061841,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qv7n8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8162826e-db14-46b9-93f2-456169ccfb0d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606aacc25dc7b552db41badd2b01633126455a44368d823c8566b200abc0836b,PodSandboxId:98be2d636c186daf56eee2476248b6f4ffa19bdc3cbc77538eaad093a82c00e0,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/ranc
her/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1727725218077557047,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-cb4rt,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 7df0eb14-f1ff-4d87-a485-efb580a3304b,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6b2eb356f364b36c053fa5a0a1c21d994a9edc83b54fdd58a38023aea0e8013,PodSandboxId:5d866c50845926549f01df87a9908307213fc5caa20603d75bdd4c898c23d1c3,Metadata:&ContainerMetadata{Name:met
rics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727725209633050557,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-cdn25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b344652c-decb-4b68-9eb4-dd034008cf98,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbbc7c85eaec24fb4d15cf79a7766331aec956
ce9799202bccf45c4baadd4428,PodSandboxId:a13890b89820ab8ea08ffa95c7aac76bf27d1c8594dd5f3b2d6bc4ea6ae958f9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1727725187794389049,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1217c30-4e9c-43fa-a3f6-0a640781c5f8,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34fdddbc2729cc844420cf24fc3341fed3211c151111cf0f43b8a87ed1b078ab,PodSandboxId:44e738ed93b01a10a8ff2fe7b585def59079d101143e4555486329cd7fcc73b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727725171524308003,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf253e6d-52dd-4bbf-a505-61269b1bb4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2f669f59ff8429d81fb4f5162e27ce06e17473d4605e0d1412e6b895b9ffec,PodSandboxId:7264dffbc56c756580b1699b46a98d026060043f7ded85528176c4468f3e54d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727725169673865152,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2sl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ef3332d-3ee7-4d76-bbef-2dfc99673515,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4a5712da231889676b696f91670decbc5f5f8c36b118a9dc265d962f5d249a,PodSandboxId:cbd8bbc0b830527874fdbef734642c050e7e6a62986ee8cdf383f82424b3b1c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727725167873622399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wgjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2646cb6-ecf8-4e44-9d48-b49eead7d727,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611b55895a7c3a5335fbb46b041625f86ca6d6031352bcde4b032dab9de47e67,PodSandboxId:472730560a69cb865a7de097b81e5d7c46896bf3dfef03d491afa5c9add05b76,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727725156408359954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 509234ffc60223733ef52b2009dbce73,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f613c2d90480ee1ae214e03080c452973dd772a7c6f008a8764350f7e1943eb,PodSandboxId:45990caa9ec749761565324cc3ffda13e0181f617a83701013fa0c2c91467ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727725156391153567,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 462c1efc125130690ce0abe7c0d6a433,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f054c208a5bd0eb1494d0e174024a758694fd0eca27fb153e9b6b1ba005ff377,PodSandboxId:f599de907322667aeed83b2705fea682b338d49da5ee13de1790e02e7e4e8a99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727725156395714900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c22ddcce59702bad76d277171c4f1a8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartC
ount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6ba6b23751a363846407405c025305c70dc80dbf68869142a0ee6929093b01e,PodSandboxId:329303fea433cc4c43cb1ec6a4a7d52fafbb483b77613fefca8466b49fcac7b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727725156374738044,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aaf74d96d0249f06846b94c74ecc9cd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=039b003e-893a-4b9c-bf4f-8096291853b1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ed0ef0c37aa10       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                             21 seconds ago      Exited              helper-pod                0                   cab0655c8c501       helper-pod-delete-pvc-2b406b11-e501-447a-83ed-ef44d83e41ee
	27fd88b9faaa7       docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f                            25 seconds ago      Exited              busybox                   0                   b76e74399d169       test-local-path
	abe98dd70d035       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                            32 seconds ago      Exited              helper-pod                0                   30e6f619b85c6       helper-pod-create-pvc-2b406b11-e501-447a-83ed-ef44d83e41ee
	0a550a25e9f7b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 9 minutes ago       Running             gcp-auth                  0                   2927b71f84ff3       gcp-auth-89d5ffd79-scvnm
	6a2d0f08874d9       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             9 minutes ago       Running             controller                0                   688bd1bfa2294       ingress-nginx-controller-bc57996ff-9bfzl
	46f33863b6d21       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago       Exited              patch                     0                   3e88376f8e4f3       ingress-nginx-admission-patch-cgdc6
	831ffd5c60190       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago       Exited              create                    0                   f002aa1c3285a       ingress-nginx-admission-create-qv7n8
	606aacc25dc7b       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             9 minutes ago       Running             local-path-provisioner    0                   98be2d636c186       local-path-provisioner-86d989889c-cb4rt
	c6b2eb356f364       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        10 minutes ago      Running             metrics-server            0                   5d866c5084592       metrics-server-84c5f94fbc-cdn25
	fbbc7c85eaec2       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             10 minutes ago      Running             minikube-ingress-dns      0                   a13890b89820a       kube-ingress-dns-minikube
	34fdddbc2729c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             10 minutes ago      Running             storage-provisioner       0                   44e738ed93b01       storage-provisioner
	8a2f669f59ff8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             10 minutes ago      Running             coredns                   0                   7264dffbc56c7       coredns-7c65d6cfc9-v2sl5
	cd4a5712da231       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             10 minutes ago      Running             kube-proxy                0                   cbd8bbc0b8305       kube-proxy-wgjdg
	611b55895a7c3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             10 minutes ago      Running             etcd                      0                   472730560a69c       etcd-addons-857381
	f054c208a5bd0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             10 minutes ago      Running             kube-controller-manager   0                   f599de9073226       kube-controller-manager-addons-857381
	0f613c2d90480       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             10 minutes ago      Running             kube-scheduler            0                   45990caa9ec74       kube-scheduler-addons-857381
	e6ba6b23751a3       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             10 minutes ago      Running             kube-apiserver            0                   329303fea433c       kube-apiserver-addons-857381
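The table above is the CRI container listing that minikube bundles into the failure log. A rough way to reproduce it against a still-running profile (a sketch only: it assumes the addons-857381 VM is still up and that crictl is available on the node, as is normal for the cri-o runtime) is:

    minikube -p addons-857381 ssh "sudo crictl ps -a"

The -a flag includes exited containers, which is why the helper-pod, test-local-path busybox, and ingress admission create/patch containers appear with STATE Exited alongside the running system pods.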
	
	
	==> coredns [8a2f669f59ff8429d81fb4f5162e27ce06e17473d4605e0d1412e6b895b9ffec] <==
	[INFO] 127.0.0.1:57266 - 46113 "HINFO IN 4563711597832070733.7464152516972830378. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012863189s
	[INFO] 10.244.0.7:41266 - 20553 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.000327187s
	[INFO] 10.244.0.7:41266 - 47123 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.0007627s
	[INFO] 10.244.0.7:41266 - 44256 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000120493s
	[INFO] 10.244.0.7:41266 - 8839 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000082266s
	[INFO] 10.244.0.7:41266 - 45651 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000085479s
	[INFO] 10.244.0.7:41266 - 55882 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000231828s
	[INFO] 10.244.0.7:41266 - 16528 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000127235s
	[INFO] 10.244.0.7:41266 - 22884 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000079062s
	[INFO] 10.244.0.7:58608 - 46632 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000093178s
	[INFO] 10.244.0.7:58608 - 46894 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000048081s
	[INFO] 10.244.0.7:53470 - 3911 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066274s
	[INFO] 10.244.0.7:53470 - 3656 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000054504s
	[INFO] 10.244.0.7:34130 - 26559 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059796s
	[INFO] 10.244.0.7:34130 - 26354 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043427s
	[INFO] 10.244.0.7:40637 - 48484 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000044485s
	[INFO] 10.244.0.7:40637 - 48313 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000050997s
	[INFO] 10.244.0.21:43040 - 43581 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00046625s
	[INFO] 10.244.0.21:55023 - 19308 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000074371s
	[INFO] 10.244.0.21:45685 - 26448 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000122686s
	[INFO] 10.244.0.21:43520 - 19830 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000076449s
	[INFO] 10.244.0.21:37619 - 36517 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000132562s
	[INFO] 10.244.0.21:43029 - 472 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000279272s
	[INFO] 10.244.0.21:58516 - 17196 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002205188s
	[INFO] 10.244.0.21:42990 - 49732 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002642341s
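The NXDOMAIN/NOERROR pairs above are the pod DNS search-path expansion at work: each lookup of registry.kube-system.svc.cluster.local is first tried with the search domains appended (hence the NXDOMAIN answers for the names ending in kube-system.svc.cluster.local, svc.cluster.local, and cluster.local twice over) before the fully qualified name resolves with NOERROR. To repeat the lookup from inside the cluster, a throwaway busybox pod works; dns-probe below is just a placeholder name, not something the test suite creates:

    kubectl --context addons-857381 run --rm -it dns-probe --image=busybox --restart=Never -- nslookup registry.kube-system.svc.cluster.local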
	
	
	==> describe nodes <==
	Name:               addons-857381
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-857381
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=addons-857381
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T19_39_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-857381
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 19:39:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-857381
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 19:50:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 19:49:54 +0000   Mon, 30 Sep 2024 19:39:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 19:49:54 +0000   Mon, 30 Sep 2024 19:39:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 19:49:54 +0000   Mon, 30 Sep 2024 19:39:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 19:49:54 +0000   Mon, 30 Sep 2024 19:39:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.16
	  Hostname:    addons-857381
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 25d9982bd002458384094f49961bbdf8
	  System UUID:                25d9982b-d002-4583-8409-4f49961bbdf8
	  Boot ID:                    b5f01af6-3227-4822-ba41-5ad95d8a7eaf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  gcp-auth                    gcp-auth-89d5ffd79-scvnm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-9bfzl    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-v2sl5                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 etcd-addons-857381                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-857381                250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-857381       200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-wgjdg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-857381                100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-84c5f94fbc-cdn25             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  local-path-storage          local-path-provisioner-86d989889c-cb4rt     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10m   kube-proxy       
	  Normal  Starting                 10m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m   kubelet          Node addons-857381 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m   kubelet          Node addons-857381 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m   kubelet          Node addons-857381 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m   kubelet          Node addons-857381 status is now: NodeReady
	  Normal  RegisteredNode           10m   node-controller  Node addons-857381 event: Registered Node addons-857381 in Controller
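The node summary above corresponds to kubectl's describe output for the single control-plane node. Assuming the kubeconfig context carries the profile name, which is minikube's default, it could be regenerated with something like:

    kubectl --context addons-857381 describe node addons-857381

The Allocated resources block is the part worth watching in these runs: 950m of the node's 2 CPUs are already requested by the addon pods before any test workload is scheduled.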
	
	
	==> dmesg <==
	[  +5.982704] systemd-fstab-generator[1188]: Ignoring "noauto" option for root device
	[  +0.080127] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.784523] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.986942] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.027017] kauditd_printk_skb: 123 callbacks suppressed
	[  +5.125991] kauditd_printk_skb: 110 callbacks suppressed
	[ +10.689942] kauditd_printk_skb: 62 callbacks suppressed
	[Sep30 19:40] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.674801] kauditd_printk_skb: 24 callbacks suppressed
	[ +12.773296] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.640929] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.302122] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.224814] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.475445] kauditd_printk_skb: 25 callbacks suppressed
	[  +8.472390] kauditd_printk_skb: 6 callbacks suppressed
	[Sep30 19:41] kauditd_printk_skb: 6 callbacks suppressed
	[Sep30 19:49] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.016133] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.573920] kauditd_printk_skb: 13 callbacks suppressed
	[ +17.576553] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.137626] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.049333] kauditd_printk_skb: 15 callbacks suppressed
	[  +9.481275] kauditd_printk_skb: 64 callbacks suppressed
	[Sep30 19:50] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.792200] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [611b55895a7c3a5335fbb46b041625f86ca6d6031352bcde4b032dab9de47e67] <==
	{"level":"info","ts":"2024-09-30T19:40:33.282237Z","caller":"traceutil/trace.go:171","msg":"trace[1440135749] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1021; }","duration":"203.229379ms","start":"2024-09-30T19:40:33.079003Z","end":"2024-09-30T19:40:33.282232Z","steps":["trace[1440135749] 'agreement among raft nodes before linearized reading'  (duration: 203.162702ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T19:40:41.965188Z","caller":"traceutil/trace.go:171","msg":"trace[63557113] transaction","detail":"{read_only:false; response_revision:1075; number_of_response:1; }","duration":"103.358472ms","start":"2024-09-30T19:40:41.861805Z","end":"2024-09-30T19:40:41.965164Z","steps":["trace[63557113] 'process raft request'  (duration: 103.177417ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:40:45.024163Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"267.653381ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T19:40:45.024367Z","caller":"traceutil/trace.go:171","msg":"trace[665645630] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1080; }","duration":"268.162297ms","start":"2024-09-30T19:40:44.756192Z","end":"2024-09-30T19:40:45.024355Z","steps":["trace[665645630] 'range keys from in-memory index tree'  (duration: 267.637437ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:40:45.024639Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"228.464576ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T19:40:45.024814Z","caller":"traceutil/trace.go:171","msg":"trace[1197247651] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1080; }","duration":"228.698131ms","start":"2024-09-30T19:40:44.795971Z","end":"2024-09-30T19:40:45.024669Z","steps":["trace[1197247651] 'range keys from in-memory index tree'  (duration: 228.42242ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:40:45.024764Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"509.83424ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T19:40:45.024932Z","caller":"traceutil/trace.go:171","msg":"trace[1982350029] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:1080; }","duration":"510.003594ms","start":"2024-09-30T19:40:44.514921Z","end":"2024-09-30T19:40:45.024925Z","steps":["trace[1982350029] 'count revisions from in-memory index tree'  (duration: 509.784533ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:40:45.024960Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T19:40:44.514884Z","time spent":"510.067329ms","remote":"127.0.0.1:40802","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":28,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true "}
	{"level":"warn","ts":"2024-09-30T19:40:45.025722Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.967655ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T19:40:45.025980Z","caller":"traceutil/trace.go:171","msg":"trace[1921436978] range","detail":"{range_begin:/registry/validatingadmissionpolicybindings/; range_end:/registry/validatingadmissionpolicybindings0; response_count:0; response_revision:1080; }","duration":"103.205459ms","start":"2024-09-30T19:40:44.922740Z","end":"2024-09-30T19:40:45.025946Z","steps":["trace[1921436978] 'count revisions from in-memory index tree'  (duration: 102.824591ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:40:45.027664Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"503.881417ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T19:40:45.027743Z","caller":"traceutil/trace.go:171","msg":"trace[1637850638] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1080; }","duration":"503.963128ms","start":"2024-09-30T19:40:44.523772Z","end":"2024-09-30T19:40:45.027735Z","steps":["trace[1637850638] 'range keys from in-memory index tree'  (duration: 503.748159ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:40:45.027813Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T19:40:44.523734Z","time spent":"504.023771ms","remote":"127.0.0.1:40756","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-09-30T19:49:17.549046Z","caller":"traceutil/trace.go:171","msg":"trace[477247537] linearizableReadLoop","detail":"{readStateIndex:2110; appliedIndex:2109; }","duration":"332.343416ms","start":"2024-09-30T19:49:17.216678Z","end":"2024-09-30T19:49:17.549021Z","steps":["trace[477247537] 'read index received'  (duration: 332.162445ms)","trace[477247537] 'applied index is now lower than readState.Index'  (duration: 180.324µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-30T19:49:17.549230Z","caller":"traceutil/trace.go:171","msg":"trace[487588167] transaction","detail":"{read_only:false; response_revision:1964; number_of_response:1; }","duration":"416.883354ms","start":"2024-09-30T19:49:17.132337Z","end":"2024-09-30T19:49:17.549220Z","steps":["trace[487588167] 'process raft request'  (duration: 416.547999ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:49:17.549391Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T19:49:17.132321Z","time spent":"416.931927ms","remote":"127.0.0.1:40530","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":25,"response count":0,"response size":39,"request content":"compare:<key:\"compact_rev_key\" version:1 > success:<request_put:<key:\"compact_rev_key\" value_size:4 >> failure:<request_range:<key:\"compact_rev_key\" > >"}
	{"level":"warn","ts":"2024-09-30T19:49:17.549718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.534401ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T19:49:17.550030Z","caller":"traceutil/trace.go:171","msg":"trace[1640880640] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1964; }","duration":"189.856846ms","start":"2024-09-30T19:49:17.360156Z","end":"2024-09-30T19:49:17.550013Z","steps":["trace[1640880640] 'agreement among raft nodes before linearized reading'  (duration: 189.426494ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:49:17.549806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"333.130902ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/headlamp\" ","response":"range_response_count:1 size:596"}
	{"level":"info","ts":"2024-09-30T19:49:17.550375Z","caller":"traceutil/trace.go:171","msg":"trace[30748080] range","detail":"{range_begin:/registry/namespaces/headlamp; range_end:; response_count:1; response_revision:1964; }","duration":"333.699236ms","start":"2024-09-30T19:49:17.216666Z","end":"2024-09-30T19:49:17.550366Z","steps":["trace[30748080] 'agreement among raft nodes before linearized reading'  (duration: 333.066743ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:49:17.550527Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T19:49:17.216625Z","time spent":"333.888235ms","remote":"127.0.0.1:40674","response type":"/etcdserverpb.KV/Range","request count":0,"request size":31,"response count":1,"response size":619,"request content":"key:\"/registry/namespaces/headlamp\" "}
	{"level":"info","ts":"2024-09-30T19:49:17.560569Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1467}
	{"level":"info","ts":"2024-09-30T19:49:17.674282Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1467,"took":"113.151678ms","hash":2336021825,"current-db-size-bytes":6635520,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":3395584,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2024-09-30T19:49:17.674799Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2336021825,"revision":1467,"compact-revision":-1}
	
	
	==> gcp-auth [0a550a25e9f7b3586687046f535b548383c78708b97eaeed7576b35b5dcee1ef] <==
	2024/09/30 19:40:51 GCP Auth Webhook started!
	2024/09/30 19:40:52 Ready to marshal response ...
	2024/09/30 19:40:52 Ready to write response ...
	2024/09/30 19:40:55 Ready to marshal response ...
	2024/09/30 19:40:55 Ready to write response ...
	2024/09/30 19:40:55 Ready to marshal response ...
	2024/09/30 19:40:55 Ready to write response ...
	2024/09/30 19:48:58 Ready to marshal response ...
	2024/09/30 19:48:58 Ready to write response ...
	2024/09/30 19:48:58 Ready to marshal response ...
	2024/09/30 19:48:58 Ready to write response ...
	2024/09/30 19:48:58 Ready to marshal response ...
	2024/09/30 19:48:58 Ready to write response ...
	2024/09/30 19:49:08 Ready to marshal response ...
	2024/09/30 19:49:08 Ready to write response ...
	2024/09/30 19:49:10 Ready to marshal response ...
	2024/09/30 19:49:10 Ready to write response ...
	2024/09/30 19:49:35 Ready to marshal response ...
	2024/09/30 19:49:35 Ready to write response ...
	2024/09/30 19:49:35 Ready to marshal response ...
	2024/09/30 19:49:35 Ready to write response ...
	2024/09/30 19:49:38 Ready to marshal response ...
	2024/09/30 19:49:38 Ready to write response ...
	2024/09/30 19:49:48 Ready to marshal response ...
	2024/09/30 19:49:48 Ready to write response ...
	
	
	==> kernel <==
	 19:50:11 up 11 min,  0 users,  load average: 1.60, 0.86, 0.54
	Linux addons-857381 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e6ba6b23751a363846407405c025305c70dc80dbf68869142a0ee6929093b01e] <==
	I0930 19:49:55.990305       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 19:49:55.990341       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0930 19:49:56.013766       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 19:49:56.013866       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0930 19:49:56.020740       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 19:49:56.020794       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0930 19:49:56.065045       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 19:49:56.065528       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0930 19:49:57.015197       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0930 19:49:57.064994       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0930 19:49:57.167986       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0930 19:50:00.209878       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:01.218394       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:02.236227       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:03.254865       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:04.268899       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:04.476888       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:05.276046       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:06.287284       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:07.295293       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:08.316036       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0930 19:50:08.612418       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	E0930 19:50:09.323889       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	W0930 19:50:09.653652       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0930 19:50:10.334217       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
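The repeated authentication failures at the end of the apiserver log all cite a bearer token bound to the local-path-provisioner-service-account ServiceAccount, which the apiserver can no longer find; this is the pattern produced when a client keeps presenting a token for a ServiceAccount that has since been deleted. A quick way to check whether the account still exists (assuming the provisioner runs in the local-path-storage namespace, as the pod listing above suggests):

    kubectl --context addons-857381 -n local-path-storage get serviceaccounts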
	
	
	==> kube-controller-manager [f054c208a5bd0eb1494d0e174024a758694fd0eca27fb153e9b6b1ba005ff377] <==
	E0930 19:49:57.067043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0930 19:49:57.169628       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 19:49:57.893645       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:49:57.893706       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 19:49:58.335908       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:49:58.336012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 19:49:58.391471       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:49:58.391594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 19:49:59.977680       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:49:59.977734       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 19:50:00.040770       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:50:00.040885       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 19:50:00.570716       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:50:00.570873       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0930 19:50:01.955979       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-5b584cc74" duration="5.809µs"
	W0930 19:50:04.014098       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:50:04.014152       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 19:50:04.607162       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:50:04.607198       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 19:50:06.545319       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:50:06.545379       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0930 19:50:09.529763       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="8.079µs"
	E0930 19:50:09.655934       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 19:50:10.946530       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:50:10.946608       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [cd4a5712da231889676b696f91670decbc5f5f8c36b118a9dc265d962f5d249a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 19:39:29.990587       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 19:39:30.058676       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.16"]
	E0930 19:39:30.058750       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 19:39:30.362730       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 19:39:30.362795       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 19:39:30.362820       1 server_linux.go:169] "Using iptables Proxier"
	I0930 19:39:30.416095       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 19:39:30.416411       1 server.go:483] "Version info" version="v1.31.1"
	I0930 19:39:30.416479       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 19:39:30.470892       1 config.go:199] "Starting service config controller"
	I0930 19:39:30.470932       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 19:39:30.470961       1 config.go:105] "Starting endpoint slice config controller"
	I0930 19:39:30.470965       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 19:39:30.471620       1 config.go:328] "Starting node config controller"
	I0930 19:39:30.471641       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 19:39:30.571571       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 19:39:30.571587       1 shared_informer.go:320] Caches are synced for service config
	I0930 19:39:30.573064       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0f613c2d90480ee1ae214e03080c452973dd772a7c6f008a8764350f7e1943eb] <==
	E0930 19:39:18.783718       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0930 19:39:18.783738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:18.783806       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0930 19:39:18.783818       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:19.639835       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 19:39:19.639943       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0930 19:39:19.654740       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0930 19:39:19.654792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:19.667324       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0930 19:39:19.667422       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:19.774980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0930 19:39:19.775022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:19.818960       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0930 19:39:19.819059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:19.876197       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0930 19:39:19.876273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:19.888046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0930 19:39:19.888095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:19.898349       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0930 19:39:19.898413       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:19.915746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0930 19:39:19.915953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:20.008659       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0930 19:39:20.008707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0930 19:39:21.870985       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 19:50:09 addons-857381 kubelet[1195]: I0930 19:50:09.143686    1195 scope.go:117] "RemoveContainer" containerID="f364d46d4f5dca423bc70c518296b04444c89aa63027418846fb18dfdc5fe139"
	Sep 30 19:50:09 addons-857381 kubelet[1195]: I0930 19:50:09.188628    1195 scope.go:117] "RemoveContainer" containerID="f364d46d4f5dca423bc70c518296b04444c89aa63027418846fb18dfdc5fe139"
	Sep 30 19:50:09 addons-857381 kubelet[1195]: E0930 19:50:09.189704    1195 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f364d46d4f5dca423bc70c518296b04444c89aa63027418846fb18dfdc5fe139\": container with ID starting with f364d46d4f5dca423bc70c518296b04444c89aa63027418846fb18dfdc5fe139 not found: ID does not exist" containerID="f364d46d4f5dca423bc70c518296b04444c89aa63027418846fb18dfdc5fe139"
	Sep 30 19:50:09 addons-857381 kubelet[1195]: I0930 19:50:09.189746    1195 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f364d46d4f5dca423bc70c518296b04444c89aa63027418846fb18dfdc5fe139"} err="failed to get container status \"f364d46d4f5dca423bc70c518296b04444c89aa63027418846fb18dfdc5fe139\": rpc error: code = NotFound desc = could not find container \"f364d46d4f5dca423bc70c518296b04444c89aa63027418846fb18dfdc5fe139\": container with ID starting with f364d46d4f5dca423bc70c518296b04444c89aa63027418846fb18dfdc5fe139 not found: ID does not exist"
	Sep 30 19:50:09 addons-857381 kubelet[1195]: I0930 19:50:09.234718    1195 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhtk5\" (UniqueName: \"kubernetes.io/projected/437cdb41-c6fc-4a7b-9e4a-4fd3b193cd24-kube-api-access-rhtk5\") pod \"437cdb41-c6fc-4a7b-9e4a-4fd3b193cd24\" (UID: \"437cdb41-c6fc-4a7b-9e4a-4fd3b193cd24\") "
	Sep 30 19:50:09 addons-857381 kubelet[1195]: I0930 19:50:09.234840    1195 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/437cdb41-c6fc-4a7b-9e4a-4fd3b193cd24-gcp-creds\") pod \"437cdb41-c6fc-4a7b-9e4a-4fd3b193cd24\" (UID: \"437cdb41-c6fc-4a7b-9e4a-4fd3b193cd24\") "
	Sep 30 19:50:09 addons-857381 kubelet[1195]: I0930 19:50:09.234950    1195 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/437cdb41-c6fc-4a7b-9e4a-4fd3b193cd24-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "437cdb41-c6fc-4a7b-9e4a-4fd3b193cd24" (UID: "437cdb41-c6fc-4a7b-9e4a-4fd3b193cd24"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 30 19:50:09 addons-857381 kubelet[1195]: I0930 19:50:09.237515    1195 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/437cdb41-c6fc-4a7b-9e4a-4fd3b193cd24-kube-api-access-rhtk5" (OuterVolumeSpecName: "kube-api-access-rhtk5") pod "437cdb41-c6fc-4a7b-9e4a-4fd3b193cd24" (UID: "437cdb41-c6fc-4a7b-9e4a-4fd3b193cd24"). InnerVolumeSpecName "kube-api-access-rhtk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 19:50:09 addons-857381 kubelet[1195]: I0930 19:50:09.335904    1195 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rhtk5\" (UniqueName: \"kubernetes.io/projected/437cdb41-c6fc-4a7b-9e4a-4fd3b193cd24-kube-api-access-rhtk5\") on node \"addons-857381\" DevicePath \"\""
	Sep 30 19:50:09 addons-857381 kubelet[1195]: I0930 19:50:09.335953    1195 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/437cdb41-c6fc-4a7b-9e4a-4fd3b193cd24-gcp-creds\") on node \"addons-857381\" DevicePath \"\""
	Sep 30 19:50:09 addons-857381 kubelet[1195]: I0930 19:50:09.407051    1195 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a81ace8b-9df8-4d4c-971d-e7fdaf31b9fe" path="/var/lib/kubelet/pods/a81ace8b-9df8-4d4c-971d-e7fdaf31b9fe/volumes"
	Sep 30 19:50:10 addons-857381 kubelet[1195]: I0930 19:50:10.041647    1195 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kmm4\" (UniqueName: \"kubernetes.io/projected/e66e6fb9-7274-4a0b-b787-c64abc8ffe04-kube-api-access-7kmm4\") pod \"e66e6fb9-7274-4a0b-b787-c64abc8ffe04\" (UID: \"e66e6fb9-7274-4a0b-b787-c64abc8ffe04\") "
	Sep 30 19:50:10 addons-857381 kubelet[1195]: I0930 19:50:10.041694    1195 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtj9z\" (UniqueName: \"kubernetes.io/projected/cf0e9fcc-d5e3-4dd8-8337-406b07ab9495-kube-api-access-rtj9z\") pod \"cf0e9fcc-d5e3-4dd8-8337-406b07ab9495\" (UID: \"cf0e9fcc-d5e3-4dd8-8337-406b07ab9495\") "
	Sep 30 19:50:10 addons-857381 kubelet[1195]: I0930 19:50:10.043931    1195 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e66e6fb9-7274-4a0b-b787-c64abc8ffe04-kube-api-access-7kmm4" (OuterVolumeSpecName: "kube-api-access-7kmm4") pod "e66e6fb9-7274-4a0b-b787-c64abc8ffe04" (UID: "e66e6fb9-7274-4a0b-b787-c64abc8ffe04"). InnerVolumeSpecName "kube-api-access-7kmm4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 19:50:10 addons-857381 kubelet[1195]: I0930 19:50:10.044857    1195 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf0e9fcc-d5e3-4dd8-8337-406b07ab9495-kube-api-access-rtj9z" (OuterVolumeSpecName: "kube-api-access-rtj9z") pod "cf0e9fcc-d5e3-4dd8-8337-406b07ab9495" (UID: "cf0e9fcc-d5e3-4dd8-8337-406b07ab9495"). InnerVolumeSpecName "kube-api-access-rtj9z". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 19:50:10 addons-857381 kubelet[1195]: I0930 19:50:10.143177    1195 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7kmm4\" (UniqueName: \"kubernetes.io/projected/e66e6fb9-7274-4a0b-b787-c64abc8ffe04-kube-api-access-7kmm4\") on node \"addons-857381\" DevicePath \"\""
	Sep 30 19:50:10 addons-857381 kubelet[1195]: I0930 19:50:10.143220    1195 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rtj9z\" (UniqueName: \"kubernetes.io/projected/cf0e9fcc-d5e3-4dd8-8337-406b07ab9495-kube-api-access-rtj9z\") on node \"addons-857381\" DevicePath \"\""
	Sep 30 19:50:10 addons-857381 kubelet[1195]: I0930 19:50:10.150917    1195 scope.go:117] "RemoveContainer" containerID="4957ef933fa0fe3f2cccf6b3a23fb59ed19c84147047c0c29199802f396e364c"
	Sep 30 19:50:10 addons-857381 kubelet[1195]: I0930 19:50:10.201200    1195 scope.go:117] "RemoveContainer" containerID="4957ef933fa0fe3f2cccf6b3a23fb59ed19c84147047c0c29199802f396e364c"
	Sep 30 19:50:10 addons-857381 kubelet[1195]: E0930 19:50:10.205365    1195 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4957ef933fa0fe3f2cccf6b3a23fb59ed19c84147047c0c29199802f396e364c\": container with ID starting with 4957ef933fa0fe3f2cccf6b3a23fb59ed19c84147047c0c29199802f396e364c not found: ID does not exist" containerID="4957ef933fa0fe3f2cccf6b3a23fb59ed19c84147047c0c29199802f396e364c"
	Sep 30 19:50:10 addons-857381 kubelet[1195]: I0930 19:50:10.205410    1195 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4957ef933fa0fe3f2cccf6b3a23fb59ed19c84147047c0c29199802f396e364c"} err="failed to get container status \"4957ef933fa0fe3f2cccf6b3a23fb59ed19c84147047c0c29199802f396e364c\": rpc error: code = NotFound desc = could not find container \"4957ef933fa0fe3f2cccf6b3a23fb59ed19c84147047c0c29199802f396e364c\": container with ID starting with 4957ef933fa0fe3f2cccf6b3a23fb59ed19c84147047c0c29199802f396e364c not found: ID does not exist"
	Sep 30 19:50:10 addons-857381 kubelet[1195]: I0930 19:50:10.205434    1195 scope.go:117] "RemoveContainer" containerID="e43e06c6fe05183d492b43f45cb0b0e53bc9f9ed66ed2809220b905b63bdc1e3"
	Sep 30 19:50:10 addons-857381 kubelet[1195]: I0930 19:50:10.228605    1195 scope.go:117] "RemoveContainer" containerID="e43e06c6fe05183d492b43f45cb0b0e53bc9f9ed66ed2809220b905b63bdc1e3"
	Sep 30 19:50:10 addons-857381 kubelet[1195]: E0930 19:50:10.229751    1195 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e43e06c6fe05183d492b43f45cb0b0e53bc9f9ed66ed2809220b905b63bdc1e3\": container with ID starting with e43e06c6fe05183d492b43f45cb0b0e53bc9f9ed66ed2809220b905b63bdc1e3 not found: ID does not exist" containerID="e43e06c6fe05183d492b43f45cb0b0e53bc9f9ed66ed2809220b905b63bdc1e3"
	Sep 30 19:50:10 addons-857381 kubelet[1195]: I0930 19:50:10.229800    1195 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e43e06c6fe05183d492b43f45cb0b0e53bc9f9ed66ed2809220b905b63bdc1e3"} err="failed to get container status \"e43e06c6fe05183d492b43f45cb0b0e53bc9f9ed66ed2809220b905b63bdc1e3\": rpc error: code = NotFound desc = could not find container \"e43e06c6fe05183d492b43f45cb0b0e53bc9f9ed66ed2809220b905b63bdc1e3\": container with ID starting with e43e06c6fe05183d492b43f45cb0b0e53bc9f9ed66ed2809220b905b63bdc1e3 not found: ID does not exist"
	
	
	==> storage-provisioner [34fdddbc2729cc844420cf24fc3341fed3211c151111cf0f43b8a87ed1b078ab] <==
	I0930 19:39:33.155826       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0930 19:39:33.685414       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0930 19:39:33.685583       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0930 19:39:33.816356       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0930 19:39:33.824546       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-857381_08dcb125-dcae-41ac-b31f-3f836116afa4!
	I0930 19:39:33.844765       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c2244a99-76a6-4c70-8326-d7436fd22acb", APIVersion:"v1", ResourceVersion:"651", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-857381_08dcb125-dcae-41ac-b31f-3f836116afa4 became leader
	I0930 19:39:34.127903       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-857381_08dcb125-dcae-41ac-b31f-3f836116afa4!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-857381 -n addons-857381
helpers_test.go:261: (dbg) Run:  kubectl --context addons-857381 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-qv7n8 ingress-nginx-admission-patch-cgdc6
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-857381 describe pod busybox ingress-nginx-admission-create-qv7n8 ingress-nginx-admission-patch-cgdc6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-857381 describe pod busybox ingress-nginx-admission-create-qv7n8 ingress-nginx-admission-patch-cgdc6: exit status 1 (73.942335ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-857381/192.168.39.16
	Start Time:       Mon, 30 Sep 2024 19:40:55 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k5fk2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-k5fk2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m16s                  default-scheduler  Successfully assigned default/busybox to addons-857381
	  Normal   Pulling    7m47s (x4 over 9m16s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m47s (x4 over 9m16s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m47s (x4 over 9m16s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m32s (x6 over 9m15s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m6s (x21 over 9m15s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-qv7n8" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-cgdc6" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-857381 describe pod busybox ingress-nginx-admission-create-qv7n8 ingress-nginx-admission-patch-cgdc6: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.04s)
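The post-mortem above shows the leftover busybox pod stuck in ImagePullBackOff with an image-pull auth error, not a registry outage as such. A minimal sketch for checking the pull independently of kubelet's back-off, assuming the addons-857381 profile is still running and that crictl is available inside the VM (the profile name, image name, and binary path are taken from the log above; nothing here is part of the test code itself):

    # re-run the same describe the harness ran, to confirm the current pull error
    kubectl --context addons-857381 describe pod busybox -n default
    # try the same image pull directly against the node's CRI-O runtime, bypassing kubelet back-off
    out/minikube-linux-amd64 -p addons-857381 ssh "sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc"

If the direct pull succeeds, the failure is likely in how credentials are injected into the pod's pull rather than in the image or registry itself.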

TestAddons/parallel/Ingress (151.84s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-857381 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-857381 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-857381 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b659e53f-9c5e-499b-b386-a5be26a79083] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b659e53f-9c5e-499b-b386-a5be26a79083] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003933031s
I0930 19:50:21.314493   14875 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-857381 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:260: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-857381 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.05942969s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:276: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:284: (dbg) Run:  kubectl --context addons-857381 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-857381 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.39.16
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-857381 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-857381 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-857381 addons disable ingress --alsologtostderr -v=1: (7.783757198s)
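The step that actually fails above is the in-VM curl, which times out (ssh reports exit status 28, curl's timeout exit code) rather than returning a wrong response. A minimal reproduction sketch, assuming the addons-857381 profile is still up and the ingress addon has not yet been disabled; the names and paths come from the log above, and --max-time is added only so the retry fails fast instead of waiting on ssh's own timeout:

    # confirm the ingress-nginx controller pod and service are present
    kubectl --context addons-857381 -n ingress-nginx get pods,svc
    # list the ingress objects and the address they were assigned
    kubectl --context addons-857381 get ingress -A
    # retry the same request the test makes, with an explicit client-side timeout
    out/minikube-linux-amd64 -p addons-857381 ssh "curl -s --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"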
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-857381 -n addons-857381
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-857381 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-857381 logs -n 25: (1.232777658s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC | 30 Sep 24 19:38 UTC |
	| delete  | -p download-only-153563                                                                     | download-only-153563 | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC | 30 Sep 24 19:38 UTC |
	| delete  | -p download-only-816611                                                                     | download-only-816611 | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC | 30 Sep 24 19:38 UTC |
	| delete  | -p download-only-153563                                                                     | download-only-153563 | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC | 30 Sep 24 19:38 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-728092 | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC |                     |
	|         | binary-mirror-728092                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33837                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-728092                                                                     | binary-mirror-728092 | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC | 30 Sep 24 19:38 UTC |
	| addons  | disable dashboard -p                                                                        | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC |                     |
	|         | addons-857381                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC |                     |
	|         | addons-857381                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-857381 --wait=true                                                                | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC | 30 Sep 24 19:40 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:48 UTC | 30 Sep 24 19:48 UTC |
	|         | -p addons-857381                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-857381 addons disable                                                                | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:49 UTC | 30 Sep 24 19:49 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-857381 addons disable                                                                | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:49 UTC | 30 Sep 24 19:49 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:49 UTC | 30 Sep 24 19:49 UTC |
	|         | -p addons-857381                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-857381 ssh cat                                                                       | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:49 UTC | 30 Sep 24 19:49 UTC |
	|         | /opt/local-path-provisioner/pvc-2b406b11-e501-447a-83ed-ef44d83e41ee_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-857381 addons                                                                        | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:49 UTC | 30 Sep 24 19:49 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-857381 addons disable                                                                | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:49 UTC | 30 Sep 24 19:50 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-857381 addons                                                                        | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:49 UTC | 30 Sep 24 19:49 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:50 UTC | 30 Sep 24 19:50 UTC |
	|         | addons-857381                                                                               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:50 UTC | 30 Sep 24 19:50 UTC |
	|         | addons-857381                                                                               |                      |         |         |                     |                     |
	| ip      | addons-857381 ip                                                                            | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:50 UTC | 30 Sep 24 19:50 UTC |
	| addons  | addons-857381 addons disable                                                                | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:50 UTC | 30 Sep 24 19:50 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-857381 ssh curl -s                                                                   | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:50 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-857381 ip                                                                            | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:52 UTC | 30 Sep 24 19:52 UTC |
	| addons  | addons-857381 addons disable                                                                | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:52 UTC | 30 Sep 24 19:52 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-857381 addons disable                                                                | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:52 UTC | 30 Sep 24 19:52 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 19:38:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 19:38:39.043134   15584 out.go:345] Setting OutFile to fd 1 ...
	I0930 19:38:39.043248   15584 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 19:38:39.043257   15584 out.go:358] Setting ErrFile to fd 2...
	I0930 19:38:39.043261   15584 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 19:38:39.043448   15584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 19:38:39.044075   15584 out.go:352] Setting JSON to false
	I0930 19:38:39.044883   15584 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1262,"bootTime":1727723857,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 19:38:39.044972   15584 start.go:139] virtualization: kvm guest
	I0930 19:38:39.046933   15584 out.go:177] * [addons-857381] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 19:38:39.048464   15584 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 19:38:39.048463   15584 notify.go:220] Checking for updates...
	I0930 19:38:39.051048   15584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 19:38:39.052632   15584 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 19:38:39.054188   15584 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:38:39.055634   15584 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 19:38:39.056997   15584 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 19:38:39.058475   15584 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 19:38:39.092364   15584 out.go:177] * Using the kvm2 driver based on user configuration
	I0930 19:38:39.093649   15584 start.go:297] selected driver: kvm2
	I0930 19:38:39.093667   15584 start.go:901] validating driver "kvm2" against <nil>
	I0930 19:38:39.093686   15584 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 19:38:39.094418   15584 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 19:38:39.094502   15584 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 19:38:39.109335   15584 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 19:38:39.109387   15584 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 19:38:39.109649   15584 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 19:38:39.109675   15584 cni.go:84] Creating CNI manager for ""
	I0930 19:38:39.109717   15584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 19:38:39.109725   15584 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 19:38:39.109774   15584 start.go:340] cluster config:
	{Name:addons-857381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-857381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 19:38:39.109868   15584 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 19:38:39.111680   15584 out.go:177] * Starting "addons-857381" primary control-plane node in "addons-857381" cluster
	I0930 19:38:39.113118   15584 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 19:38:39.113163   15584 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 19:38:39.113173   15584 cache.go:56] Caching tarball of preloaded images
	I0930 19:38:39.113256   15584 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 19:38:39.113267   15584 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 19:38:39.113567   15584 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/config.json ...
	I0930 19:38:39.113591   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/config.json: {Name:mk4745e18a242e742e59d464f9dbb1a3421bf546 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:38:39.113723   15584 start.go:360] acquireMachinesLock for addons-857381: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 19:38:39.113764   15584 start.go:364] duration metric: took 29.496µs to acquireMachinesLock for "addons-857381"
	I0930 19:38:39.113781   15584 start.go:93] Provisioning new machine with config: &{Name:addons-857381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-857381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 19:38:39.113835   15584 start.go:125] createHost starting for "" (driver="kvm2")
	I0930 19:38:39.115274   15584 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0930 19:38:39.115408   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:38:39.115446   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:38:39.129988   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44615
	I0930 19:38:39.130433   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:38:39.130969   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:38:39.130987   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:38:39.131382   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:38:39.131591   15584 main.go:141] libmachine: (addons-857381) Calling .GetMachineName
	I0930 19:38:39.131741   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:38:39.131909   15584 start.go:159] libmachine.API.Create for "addons-857381" (driver="kvm2")
	I0930 19:38:39.131936   15584 client.go:168] LocalClient.Create starting
	I0930 19:38:39.131981   15584 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem
	I0930 19:38:39.238349   15584 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem
	I0930 19:38:39.522805   15584 main.go:141] libmachine: Running pre-create checks...
	I0930 19:38:39.522832   15584 main.go:141] libmachine: (addons-857381) Calling .PreCreateCheck
	I0930 19:38:39.523321   15584 main.go:141] libmachine: (addons-857381) Calling .GetConfigRaw
	I0930 19:38:39.523777   15584 main.go:141] libmachine: Creating machine...
	I0930 19:38:39.523791   15584 main.go:141] libmachine: (addons-857381) Calling .Create
	I0930 19:38:39.523944   15584 main.go:141] libmachine: (addons-857381) Creating KVM machine...
	I0930 19:38:39.525343   15584 main.go:141] libmachine: (addons-857381) DBG | found existing default KVM network
	I0930 19:38:39.526113   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:39.525972   15606 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I0930 19:38:39.526140   15584 main.go:141] libmachine: (addons-857381) DBG | created network xml: 
	I0930 19:38:39.526149   15584 main.go:141] libmachine: (addons-857381) DBG | <network>
	I0930 19:38:39.526158   15584 main.go:141] libmachine: (addons-857381) DBG |   <name>mk-addons-857381</name>
	I0930 19:38:39.526174   15584 main.go:141] libmachine: (addons-857381) DBG |   <dns enable='no'/>
	I0930 19:38:39.526186   15584 main.go:141] libmachine: (addons-857381) DBG |   
	I0930 19:38:39.526201   15584 main.go:141] libmachine: (addons-857381) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0930 19:38:39.526214   15584 main.go:141] libmachine: (addons-857381) DBG |     <dhcp>
	I0930 19:38:39.526224   15584 main.go:141] libmachine: (addons-857381) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0930 19:38:39.526232   15584 main.go:141] libmachine: (addons-857381) DBG |     </dhcp>
	I0930 19:38:39.526241   15584 main.go:141] libmachine: (addons-857381) DBG |   </ip>
	I0930 19:38:39.526248   15584 main.go:141] libmachine: (addons-857381) DBG |   
	I0930 19:38:39.526254   15584 main.go:141] libmachine: (addons-857381) DBG | </network>
	I0930 19:38:39.526262   15584 main.go:141] libmachine: (addons-857381) DBG | 
	I0930 19:38:39.531685   15584 main.go:141] libmachine: (addons-857381) DBG | trying to create private KVM network mk-addons-857381 192.168.39.0/24...
	I0930 19:38:39.600904   15584 main.go:141] libmachine: (addons-857381) DBG | private KVM network mk-addons-857381 192.168.39.0/24 created
	I0930 19:38:39.600935   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:39.600853   15606 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:38:39.601042   15584 main.go:141] libmachine: (addons-857381) Setting up store path in /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381 ...
	I0930 19:38:39.601166   15584 main.go:141] libmachine: (addons-857381) Building disk image from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 19:38:39.601204   15584 main.go:141] libmachine: (addons-857381) Downloading /home/jenkins/minikube-integration/19736-7672/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 19:38:39.863167   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:39.863034   15606 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa...
	I0930 19:38:40.117906   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:40.117761   15606 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/addons-857381.rawdisk...
	I0930 19:38:40.117931   15584 main.go:141] libmachine: (addons-857381) DBG | Writing magic tar header
	I0930 19:38:40.117940   15584 main.go:141] libmachine: (addons-857381) DBG | Writing SSH key tar header
	I0930 19:38:40.117948   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:40.117879   15606 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381 ...
	I0930 19:38:40.117964   15584 main.go:141] libmachine: (addons-857381) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381
	I0930 19:38:40.118020   15584 main.go:141] libmachine: (addons-857381) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines
	I0930 19:38:40.118027   15584 main.go:141] libmachine: (addons-857381) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:38:40.118038   15584 main.go:141] libmachine: (addons-857381) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381 (perms=drwx------)
	I0930 19:38:40.118045   15584 main.go:141] libmachine: (addons-857381) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines (perms=drwxr-xr-x)
	I0930 19:38:40.118053   15584 main.go:141] libmachine: (addons-857381) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube (perms=drwxr-xr-x)
	I0930 19:38:40.118058   15584 main.go:141] libmachine: (addons-857381) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672 (perms=drwxrwxr-x)
	I0930 19:38:40.118064   15584 main.go:141] libmachine: (addons-857381) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672
	I0930 19:38:40.118074   15584 main.go:141] libmachine: (addons-857381) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 19:38:40.118079   15584 main.go:141] libmachine: (addons-857381) DBG | Checking permissions on dir: /home/jenkins
	I0930 19:38:40.118085   15584 main.go:141] libmachine: (addons-857381) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 19:38:40.118093   15584 main.go:141] libmachine: (addons-857381) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 19:38:40.118098   15584 main.go:141] libmachine: (addons-857381) Creating domain...
	I0930 19:38:40.118103   15584 main.go:141] libmachine: (addons-857381) DBG | Checking permissions on dir: /home
	I0930 19:38:40.118110   15584 main.go:141] libmachine: (addons-857381) DBG | Skipping /home - not owner
	I0930 19:38:40.119243   15584 main.go:141] libmachine: (addons-857381) define libvirt domain using xml: 
	I0930 19:38:40.119278   15584 main.go:141] libmachine: (addons-857381) <domain type='kvm'>
	I0930 19:38:40.119287   15584 main.go:141] libmachine: (addons-857381)   <name>addons-857381</name>
	I0930 19:38:40.119298   15584 main.go:141] libmachine: (addons-857381)   <memory unit='MiB'>4000</memory>
	I0930 19:38:40.119306   15584 main.go:141] libmachine: (addons-857381)   <vcpu>2</vcpu>
	I0930 19:38:40.119317   15584 main.go:141] libmachine: (addons-857381)   <features>
	I0930 19:38:40.119329   15584 main.go:141] libmachine: (addons-857381)     <acpi/>
	I0930 19:38:40.119339   15584 main.go:141] libmachine: (addons-857381)     <apic/>
	I0930 19:38:40.119347   15584 main.go:141] libmachine: (addons-857381)     <pae/>
	I0930 19:38:40.119350   15584 main.go:141] libmachine: (addons-857381)     
	I0930 19:38:40.119355   15584 main.go:141] libmachine: (addons-857381)   </features>
	I0930 19:38:40.119360   15584 main.go:141] libmachine: (addons-857381)   <cpu mode='host-passthrough'>
	I0930 19:38:40.119365   15584 main.go:141] libmachine: (addons-857381)   
	I0930 19:38:40.119373   15584 main.go:141] libmachine: (addons-857381)   </cpu>
	I0930 19:38:40.119378   15584 main.go:141] libmachine: (addons-857381)   <os>
	I0930 19:38:40.119383   15584 main.go:141] libmachine: (addons-857381)     <type>hvm</type>
	I0930 19:38:40.119387   15584 main.go:141] libmachine: (addons-857381)     <boot dev='cdrom'/>
	I0930 19:38:40.119394   15584 main.go:141] libmachine: (addons-857381)     <boot dev='hd'/>
	I0930 19:38:40.119399   15584 main.go:141] libmachine: (addons-857381)     <bootmenu enable='no'/>
	I0930 19:38:40.119402   15584 main.go:141] libmachine: (addons-857381)   </os>
	I0930 19:38:40.119407   15584 main.go:141] libmachine: (addons-857381)   <devices>
	I0930 19:38:40.119412   15584 main.go:141] libmachine: (addons-857381)     <disk type='file' device='cdrom'>
	I0930 19:38:40.119420   15584 main.go:141] libmachine: (addons-857381)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/boot2docker.iso'/>
	I0930 19:38:40.119431   15584 main.go:141] libmachine: (addons-857381)       <target dev='hdc' bus='scsi'/>
	I0930 19:38:40.119436   15584 main.go:141] libmachine: (addons-857381)       <readonly/>
	I0930 19:38:40.119440   15584 main.go:141] libmachine: (addons-857381)     </disk>
	I0930 19:38:40.119447   15584 main.go:141] libmachine: (addons-857381)     <disk type='file' device='disk'>
	I0930 19:38:40.119453   15584 main.go:141] libmachine: (addons-857381)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 19:38:40.119460   15584 main.go:141] libmachine: (addons-857381)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/addons-857381.rawdisk'/>
	I0930 19:38:40.119467   15584 main.go:141] libmachine: (addons-857381)       <target dev='hda' bus='virtio'/>
	I0930 19:38:40.119472   15584 main.go:141] libmachine: (addons-857381)     </disk>
	I0930 19:38:40.119476   15584 main.go:141] libmachine: (addons-857381)     <interface type='network'>
	I0930 19:38:40.119482   15584 main.go:141] libmachine: (addons-857381)       <source network='mk-addons-857381'/>
	I0930 19:38:40.119497   15584 main.go:141] libmachine: (addons-857381)       <model type='virtio'/>
	I0930 19:38:40.119547   15584 main.go:141] libmachine: (addons-857381)     </interface>
	I0930 19:38:40.119585   15584 main.go:141] libmachine: (addons-857381)     <interface type='network'>
	I0930 19:38:40.119615   15584 main.go:141] libmachine: (addons-857381)       <source network='default'/>
	I0930 19:38:40.119632   15584 main.go:141] libmachine: (addons-857381)       <model type='virtio'/>
	I0930 19:38:40.119647   15584 main.go:141] libmachine: (addons-857381)     </interface>
	I0930 19:38:40.119657   15584 main.go:141] libmachine: (addons-857381)     <serial type='pty'>
	I0930 19:38:40.119668   15584 main.go:141] libmachine: (addons-857381)       <target port='0'/>
	I0930 19:38:40.119681   15584 main.go:141] libmachine: (addons-857381)     </serial>
	I0930 19:38:40.119692   15584 main.go:141] libmachine: (addons-857381)     <console type='pty'>
	I0930 19:38:40.119705   15584 main.go:141] libmachine: (addons-857381)       <target type='serial' port='0'/>
	I0930 19:38:40.119716   15584 main.go:141] libmachine: (addons-857381)     </console>
	I0930 19:38:40.119728   15584 main.go:141] libmachine: (addons-857381)     <rng model='virtio'>
	I0930 19:38:40.119742   15584 main.go:141] libmachine: (addons-857381)       <backend model='random'>/dev/random</backend>
	I0930 19:38:40.119751   15584 main.go:141] libmachine: (addons-857381)     </rng>
	I0930 19:38:40.119764   15584 main.go:141] libmachine: (addons-857381)     
	I0930 19:38:40.119775   15584 main.go:141] libmachine: (addons-857381)     
	I0930 19:38:40.119787   15584 main.go:141] libmachine: (addons-857381)   </devices>
	I0930 19:38:40.119796   15584 main.go:141] libmachine: (addons-857381) </domain>
	I0930 19:38:40.119808   15584 main.go:141] libmachine: (addons-857381) 
	I0930 19:38:40.152290   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:13:e6:2a in network default
	I0930 19:38:40.152794   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:40.152807   15584 main.go:141] libmachine: (addons-857381) Ensuring networks are active...
	I0930 19:38:40.153769   15584 main.go:141] libmachine: (addons-857381) Ensuring network default is active
	I0930 19:38:40.154084   15584 main.go:141] libmachine: (addons-857381) Ensuring network mk-addons-857381 is active
	I0930 19:38:40.154622   15584 main.go:141] libmachine: (addons-857381) Getting domain xml...
	I0930 19:38:40.155306   15584 main.go:141] libmachine: (addons-857381) Creating domain...
	I0930 19:38:41.750138   15584 main.go:141] libmachine: (addons-857381) Waiting to get IP...
	I0930 19:38:41.750840   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:41.751228   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:41.751257   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:41.751208   15606 retry.go:31] will retry after 219.233908ms: waiting for machine to come up
	I0930 19:38:41.971647   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:41.972164   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:41.972188   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:41.972106   15606 retry.go:31] will retry after 262.030132ms: waiting for machine to come up
	I0930 19:38:42.235394   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:42.235857   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:42.235884   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:42.235807   15606 retry.go:31] will retry after 476.729894ms: waiting for machine to come up
	I0930 19:38:42.714621   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:42.715111   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:42.715165   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:42.715111   15606 retry.go:31] will retry after 585.557ms: waiting for machine to come up
	I0930 19:38:43.301755   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:43.302138   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:43.302170   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:43.302081   15606 retry.go:31] will retry after 660.338313ms: waiting for machine to come up
	I0930 19:38:43.963791   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:43.964219   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:43.964239   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:43.964181   15606 retry.go:31] will retry after 770.621107ms: waiting for machine to come up
	I0930 19:38:44.736897   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:44.737416   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:44.737436   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:44.737400   15606 retry.go:31] will retry after 934.807687ms: waiting for machine to come up
	I0930 19:38:45.673695   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:45.674163   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:45.674192   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:45.674131   15606 retry.go:31] will retry after 1.028873402s: waiting for machine to come up
	I0930 19:38:46.704659   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:46.705228   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:46.705252   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:46.705171   15606 retry.go:31] will retry after 1.355644802s: waiting for machine to come up
	I0930 19:38:48.062629   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:48.063045   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:48.063066   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:48.063003   15606 retry.go:31] will retry after 1.834607389s: waiting for machine to come up
	I0930 19:38:49.899481   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:49.899966   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:49.899993   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:49.899917   15606 retry.go:31] will retry after 2.552900967s: waiting for machine to come up
	I0930 19:38:52.455785   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:52.456329   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:52.456351   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:52.456275   15606 retry.go:31] will retry after 2.738603537s: waiting for machine to come up
	I0930 19:38:55.196845   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:55.197213   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:55.197249   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:55.197206   15606 retry.go:31] will retry after 2.960743363s: waiting for machine to come up
	I0930 19:38:58.161388   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:58.161803   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:58.161831   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:58.161744   15606 retry.go:31] will retry after 3.899735013s: waiting for machine to come up
	I0930 19:39:02.064849   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:02.065350   15584 main.go:141] libmachine: (addons-857381) Found IP for machine: 192.168.39.16
	I0930 19:39:02.065374   15584 main.go:141] libmachine: (addons-857381) Reserving static IP address...
	I0930 19:39:02.065387   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has current primary IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:02.065709   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find host DHCP lease matching {name: "addons-857381", mac: "52:54:00:2f:88:a1", ip: "192.168.39.16"} in network mk-addons-857381
	I0930 19:39:02.140991   15584 main.go:141] libmachine: (addons-857381) DBG | Getting to WaitForSSH function...
	I0930 19:39:02.141024   15584 main.go:141] libmachine: (addons-857381) Reserved static IP address: 192.168.39.16
	I0930 19:39:02.141038   15584 main.go:141] libmachine: (addons-857381) Waiting for SSH to be available...
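The retry sequence above is a plain poll-with-growing-backoff loop: ask libvirt whether the domain's MAC has a DHCP lease yet, and if not, sleep a little longer and try again. Below is a minimal, self-contained sketch of that pattern; lookupLeaseIP and the delay schedule are hypothetical stand-ins for illustration, not minikube's actual retry.go API.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP is a hypothetical stand-in for asking libvirt/dnsmasq whether
// the domain's MAC address has been handed a DHCP lease yet.
func lookupLeaseIP(mac string) (string, error) {
	// The real driver would query the libvirt API or parse lease files;
	// here we simply report "not yet" so the loop is exercised.
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls for a lease, growing the delay between attempts until the
// deadline passes, mirroring the "will retry after ..." lines above.
func waitForIP(mac string, deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupLeaseIP(mac); err == nil && ip != "" {
			return ip, nil
		}
		// Add a little jitter and grow the backoff, capped at a few seconds.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/4)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("no DHCP lease for %s within %v", mac, deadline)
}

func main() {
	if ip, err := waitForIP("52:54:00:2f:88:a1", 3*time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}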
	I0930 19:39:02.143380   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:02.143712   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381
	I0930 19:39:02.143736   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find defined IP address of network mk-addons-857381 interface with MAC address 52:54:00:2f:88:a1
	I0930 19:39:02.143945   15584 main.go:141] libmachine: (addons-857381) DBG | Using SSH client type: external
	I0930 19:39:02.143968   15584 main.go:141] libmachine: (addons-857381) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa (-rw-------)
	I0930 19:39:02.144015   15584 main.go:141] libmachine: (addons-857381) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 19:39:02.144040   15584 main.go:141] libmachine: (addons-857381) DBG | About to run SSH command:
	I0930 19:39:02.144056   15584 main.go:141] libmachine: (addons-857381) DBG | exit 0
	I0930 19:39:02.155805   15584 main.go:141] libmachine: (addons-857381) DBG | SSH cmd err, output: exit status 255: 
	I0930 19:39:02.155842   15584 main.go:141] libmachine: (addons-857381) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0930 19:39:02.155850   15584 main.go:141] libmachine: (addons-857381) DBG | command : exit 0
	I0930 19:39:02.155855   15584 main.go:141] libmachine: (addons-857381) DBG | err     : exit status 255
	I0930 19:39:02.155862   15584 main.go:141] libmachine: (addons-857381) DBG | output  : 
	I0930 19:39:05.156591   15584 main.go:141] libmachine: (addons-857381) DBG | Getting to WaitForSSH function...
	I0930 19:39:05.159112   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.159471   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.159499   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.159674   15584 main.go:141] libmachine: (addons-857381) DBG | Using SSH client type: external
	I0930 19:39:05.159702   15584 main.go:141] libmachine: (addons-857381) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa (-rw-------)
	I0930 19:39:05.159734   15584 main.go:141] libmachine: (addons-857381) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 19:39:05.159746   15584 main.go:141] libmachine: (addons-857381) DBG | About to run SSH command:
	I0930 19:39:05.159755   15584 main.go:141] libmachine: (addons-857381) DBG | exit 0
	I0930 19:39:05.283731   15584 main.go:141] libmachine: (addons-857381) DBG | SSH cmd err, output: <nil>: 
	I0930 19:39:05.283945   15584 main.go:141] libmachine: (addons-857381) KVM machine creation complete!
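The "exit 0" probe above is the usual way to decide SSH is usable: run a no-op command over SSH repeatedly until it returns status 0 (the first attempt above failed with status 255 because the guest's sshd was not up yet). A rough sketch of that check, shelling out to the external ssh binary with options like those in the log; the helper name and retry policy are illustrative, not minikube's.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs a no-op command ("exit 0") over SSH and reports whether it
// succeeded, i.e. whether sshd in the guest is accepting connections.
func sshReady(user, host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, host),
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	// Values taken from the log above, used here purely for illustration.
	user, host := "docker", "192.168.39.16"
	key := "/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa"
	for i := 0; i < 20; i++ {
		if sshReady(user, host, key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second) // roughly the gap between attempts in the log
	}
	fmt.Println("gave up waiting for SSH")
}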
	I0930 19:39:05.284267   15584 main.go:141] libmachine: (addons-857381) Calling .GetConfigRaw
	I0930 19:39:05.284805   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:05.285019   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:05.285141   15584 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 19:39:05.285158   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:05.286683   15584 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 19:39:05.286697   15584 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 19:39:05.286701   15584 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 19:39:05.286707   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:05.288834   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.289132   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.289157   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.289280   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:05.289449   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.289572   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.289690   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:05.289873   15584 main.go:141] libmachine: Using SSH client type: native
	I0930 19:39:05.290039   15584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0930 19:39:05.290050   15584 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 19:39:05.386984   15584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 19:39:05.387014   15584 main.go:141] libmachine: Detecting the provisioner...
	I0930 19:39:05.387029   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:05.389409   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.389748   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.389776   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.389917   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:05.390074   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.390198   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.390305   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:05.390448   15584 main.go:141] libmachine: Using SSH client type: native
	I0930 19:39:05.390666   15584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0930 19:39:05.390682   15584 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 19:39:05.492417   15584 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 19:39:05.492481   15584 main.go:141] libmachine: found compatible host: buildroot
	I0930 19:39:05.492489   15584 main.go:141] libmachine: Provisioning with buildroot...
	I0930 19:39:05.492500   15584 main.go:141] libmachine: (addons-857381) Calling .GetMachineName
	I0930 19:39:05.492732   15584 buildroot.go:166] provisioning hostname "addons-857381"
	I0930 19:39:05.492757   15584 main.go:141] libmachine: (addons-857381) Calling .GetMachineName
	I0930 19:39:05.492945   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:05.495929   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.496239   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.496305   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.496439   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:05.496644   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.496802   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.496952   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:05.497104   15584 main.go:141] libmachine: Using SSH client type: native
	I0930 19:39:05.497271   15584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0930 19:39:05.497285   15584 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-857381 && echo "addons-857381" | sudo tee /etc/hostname
	I0930 19:39:05.609891   15584 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-857381
	
	I0930 19:39:05.609922   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:05.612978   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.613698   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.613729   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.613907   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:05.614121   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.614279   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.614423   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:05.614594   15584 main.go:141] libmachine: Using SSH client type: native
	I0930 19:39:05.614753   15584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0930 19:39:05.614769   15584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-857381' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-857381/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-857381' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 19:39:05.725738   15584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 19:39:05.725765   15584 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 19:39:05.725804   15584 buildroot.go:174] setting up certificates
	I0930 19:39:05.725819   15584 provision.go:84] configureAuth start
	I0930 19:39:05.725827   15584 main.go:141] libmachine: (addons-857381) Calling .GetMachineName
	I0930 19:39:05.726168   15584 main.go:141] libmachine: (addons-857381) Calling .GetIP
	I0930 19:39:05.728742   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.729007   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.729035   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.729182   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:05.731678   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.732051   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.732081   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.732153   15584 provision.go:143] copyHostCerts
	I0930 19:39:05.732229   15584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 19:39:05.732358   15584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 19:39:05.732435   15584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 19:39:05.732484   15584 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.addons-857381 san=[127.0.0.1 192.168.39.16 addons-857381 localhost minikube]
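The "generating server cert ... san=[...]" step boils down to issuing an X.509 certificate whose Subject Alternative Names cover the machine's IPs and hostnames, signed by the local CA. A compact stdlib-only sketch of that idea follows; it self-signs a throwaway CA in-process instead of loading ca.pem/ca-key.pem from disk, which is an assumption made purely to keep the example self-contained.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (the real flow loads ca.pem / ca-key.pem from .minikube/certs).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs seen in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-857381"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-857381", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.16")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		fmt.Println("sign:", err)
		os.Exit(1)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}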
	I0930 19:39:05.797657   15584 provision.go:177] copyRemoteCerts
	I0930 19:39:05.797735   15584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 19:39:05.797762   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:05.800885   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.801217   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.801247   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.801400   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:05.801568   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.801718   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:05.801822   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:05.882191   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 19:39:05.905511   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 19:39:05.929051   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 19:39:05.954162   15584 provision.go:87] duration metric: took 228.330604ms to configureAuth
	I0930 19:39:05.954201   15584 buildroot.go:189] setting minikube options for container-runtime
	I0930 19:39:05.954387   15584 config.go:182] Loaded profile config "addons-857381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 19:39:05.954466   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:05.957503   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.957900   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.957927   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.958152   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:05.958347   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.958489   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.958608   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:05.958729   15584 main.go:141] libmachine: Using SSH client type: native
	I0930 19:39:05.958887   15584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0930 19:39:05.958901   15584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 19:39:06.179208   15584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 19:39:06.179237   15584 main.go:141] libmachine: Checking connection to Docker...
	I0930 19:39:06.179248   15584 main.go:141] libmachine: (addons-857381) Calling .GetURL
	I0930 19:39:06.180601   15584 main.go:141] libmachine: (addons-857381) DBG | Using libvirt version 6000000
	I0930 19:39:06.182691   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.183033   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:06.183061   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.183191   15584 main.go:141] libmachine: Docker is up and running!
	I0930 19:39:06.183202   15584 main.go:141] libmachine: Reticulating splines...
	I0930 19:39:06.183209   15584 client.go:171] duration metric: took 27.051264777s to LocalClient.Create
	I0930 19:39:06.183231   15584 start.go:167] duration metric: took 27.051324774s to libmachine.API.Create "addons-857381"
	I0930 19:39:06.183242   15584 start.go:293] postStartSetup for "addons-857381" (driver="kvm2")
	I0930 19:39:06.183251   15584 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 19:39:06.183266   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:06.183524   15584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 19:39:06.183571   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:06.185444   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.185797   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:06.185827   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.185919   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:06.186090   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:06.186188   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:06.186312   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:06.266715   15584 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 19:39:06.271185   15584 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 19:39:06.271215   15584 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 19:39:06.271287   15584 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 19:39:06.271309   15584 start.go:296] duration metric: took 88.062379ms for postStartSetup
	I0930 19:39:06.271349   15584 main.go:141] libmachine: (addons-857381) Calling .GetConfigRaw
	I0930 19:39:06.271937   15584 main.go:141] libmachine: (addons-857381) Calling .GetIP
	I0930 19:39:06.274448   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.274725   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:06.274750   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.274965   15584 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/config.json ...
	I0930 19:39:06.275129   15584 start.go:128] duration metric: took 27.161285737s to createHost
	I0930 19:39:06.275152   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:06.277424   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.277710   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:06.277737   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.277888   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:06.278053   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:06.278193   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:06.278321   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:06.278484   15584 main.go:141] libmachine: Using SSH client type: native
	I0930 19:39:06.278724   15584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0930 19:39:06.278743   15584 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 19:39:06.380303   15584 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727725146.359081243
	
	I0930 19:39:06.380326   15584 fix.go:216] guest clock: 1727725146.359081243
	I0930 19:39:06.380335   15584 fix.go:229] Guest: 2024-09-30 19:39:06.359081243 +0000 UTC Remote: 2024-09-30 19:39:06.275140075 +0000 UTC m=+27.266281521 (delta=83.941168ms)
	I0930 19:39:06.380381   15584 fix.go:200] guest clock delta is within tolerance: 83.941168ms
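The guest-clock check compares the VM's `date +%s.%N` output against the host's notion of "now" and only resynchronizes if the difference exceeds a tolerance. A small sketch of that comparison, using the exact values from the log above; the 2s threshold is an assumption for illustration, not necessarily minikube's exact tolerance.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far it
// is from the host clock (absolute value).
func clockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	d := hostNow.Sub(guest)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	const tolerance = 2 * time.Second // assumed threshold for the sketch
	host, _ := time.Parse(time.RFC3339Nano, "2024-09-30T19:39:06.275140075Z")
	delta, err := clockDelta("1727725146.359081243", host)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}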
	I0930 19:39:06.380389   15584 start.go:83] releasing machines lock for "addons-857381", held for 27.266614473s
	I0930 19:39:06.380419   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:06.380674   15584 main.go:141] libmachine: (addons-857381) Calling .GetIP
	I0930 19:39:06.383237   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.383611   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:06.383640   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.383823   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:06.384318   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:06.384453   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:06.384548   15584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 19:39:06.384593   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:06.384651   15584 ssh_runner.go:195] Run: cat /version.json
	I0930 19:39:06.384672   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:06.387480   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.387761   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.387940   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:06.387970   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.388102   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:06.388230   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:06.388258   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.388321   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:06.388433   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:06.388508   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:06.388576   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:06.388649   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:06.388688   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:06.388794   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:06.460622   15584 ssh_runner.go:195] Run: systemctl --version
	I0930 19:39:06.504333   15584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 19:39:06.659157   15584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 19:39:06.665831   15584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 19:39:06.665921   15584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 19:39:06.682297   15584 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 19:39:06.682332   15584 start.go:495] detecting cgroup driver to use...
	I0930 19:39:06.682422   15584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 19:39:06.698736   15584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 19:39:06.713403   15584 docker.go:217] disabling cri-docker service (if available) ...
	I0930 19:39:06.713463   15584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 19:39:06.727772   15584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 19:39:06.741754   15584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 19:39:06.854558   15584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 19:39:07.016805   15584 docker.go:233] disabling docker service ...
	I0930 19:39:07.016868   15584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 19:39:07.031392   15584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 19:39:07.044268   15584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 19:39:07.174815   15584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 19:39:07.288136   15584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 19:39:07.302494   15584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 19:39:07.320346   15584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 19:39:07.320397   15584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:39:07.330567   15584 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 19:39:07.330642   15584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:39:07.340540   15584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:39:07.351066   15584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:39:07.361313   15584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 19:39:07.372112   15584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:39:07.382428   15584 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:39:07.398996   15584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
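The chain of sed runs above rewrites a handful of keys in CRI-O's drop-in config: the pause image, the cgroup manager, conmon's cgroup, and the unprivileged-port sysctl. The same edits can be expressed as plain regexp substitutions; the sketch below applies a simplified subset to a local copy of the file (the local path and in-place rewrite are assumptions for illustration, not the ssh_runner mechanism minikube uses).

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf applies key rewrites equivalent to the logged sed commands
// against the contents of 02-crio.conf.
func rewriteCrioConf(conf []byte) []byte {
	rules := []struct{ re, repl string }{
		{`(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10"`},
		{`(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`},
		{`(?m)^conmon_cgroup = .*$`, `conmon_cgroup = "pod"`},
	}
	for _, r := range rules {
		conf = regexp.MustCompile(r.re).ReplaceAll(conf, []byte(r.repl))
	}
	return conf
}

func main() {
	path := "02-crio.conf" // local copy; the real file lives under /etc/crio/crio.conf.d/
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	if err := os.WriteFile(path, rewriteCrioConf(data), 0o644); err != nil {
		fmt.Println("write:", err)
		return
	}
	fmt.Println("updated", path)
}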
	I0930 19:39:07.409216   15584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 19:39:07.418760   15584 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 19:39:07.418816   15584 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 19:39:07.433137   15584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 19:39:07.442882   15584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 19:39:07.558112   15584 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 19:39:07.649794   15584 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 19:39:07.649899   15584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 19:39:07.654623   15584 start.go:563] Will wait 60s for crictl version
	I0930 19:39:07.654704   15584 ssh_runner.go:195] Run: which crictl
	I0930 19:39:07.658191   15584 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 19:39:07.700342   15584 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 19:39:07.700458   15584 ssh_runner.go:195] Run: crio --version
	I0930 19:39:07.727470   15584 ssh_runner.go:195] Run: crio --version
	I0930 19:39:07.754761   15584 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 19:39:07.756216   15584 main.go:141] libmachine: (addons-857381) Calling .GetIP
	I0930 19:39:07.758595   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:07.758998   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:07.759028   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:07.759215   15584 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 19:39:07.763302   15584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
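That one-liner makes the hosts entry idempotent: strip any existing line for the name, then append a fresh "IP<tab>name" pair. The same logic in Go, operating on an in-memory string rather than the real /etc/hosts (an assumption to keep the sketch side-effect free):

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry removes any existing line ending in "\t<name>" and appends
// a fresh "ip\tname" line, mirroring the grep -v / echo pipeline in the log.
func ensureHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	fmt.Print(ensureHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
}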
	I0930 19:39:07.775047   15584 kubeadm.go:883] updating cluster {Name:addons-857381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-857381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 19:39:07.775168   15584 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 19:39:07.775210   15584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 19:39:07.807313   15584 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 19:39:07.807388   15584 ssh_runner.go:195] Run: which lz4
	I0930 19:39:07.811181   15584 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 19:39:07.815355   15584 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 19:39:07.815401   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 19:39:09.011857   15584 crio.go:462] duration metric: took 1.20070674s to copy over tarball
	I0930 19:39:09.011922   15584 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 19:39:11.156167   15584 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.144208659s)
	I0930 19:39:11.156197   15584 crio.go:469] duration metric: took 2.144313315s to extract the tarball
	I0930 19:39:11.156204   15584 ssh_runner.go:146] rm: /preloaded.tar.lz4
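Preloading works by shipping a pre-built image tarball to the node and unpacking it straight into the container storage under /var; only when that is impossible (or the later `crictl images` check still comes up short) does minikube fall back to pulling images individually. A minimal sketch of the extract step, shelling out to the same tar invocation seen above; running it against a local path rather than over SSH is an assumption for the sketch.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload unpacks an lz4-compressed preload tarball into destDir while
// preserving extended attributes, mirroring the tar invocation in the log.
func extractPreload(tarball, destDir string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	return cmd.Run()
}

func main() {
	start := time.Now()
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println("extract failed:", err)
		return
	}
	fmt.Printf("took %s to extract the tarball\n", time.Since(start))
}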
	I0930 19:39:11.192433   15584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 19:39:11.233108   15584 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 19:39:11.233132   15584 cache_images.go:84] Images are preloaded, skipping loading
	I0930 19:39:11.233139   15584 kubeadm.go:934] updating node { 192.168.39.16 8443 v1.31.1 crio true true} ...
	I0930 19:39:11.233269   15584 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-857381 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-857381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
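The kubelet drop-in shown above is rendered from the node's name, IP, and Kubernetes version. A small text/template sketch that produces an equivalent unit; the template text is a reconstruction of the logged output for illustration, not minikube's embedded template.

package main

import (
	"os"
	"text/template"
)

// kubeletUnit reconstructs the systemd drop-in printed in the log above.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	params := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.1", "addons-857381", "192.168.39.16"}
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}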
	I0930 19:39:11.233352   15584 ssh_runner.go:195] Run: crio config
	I0930 19:39:11.277191   15584 cni.go:84] Creating CNI manager for ""
	I0930 19:39:11.277215   15584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 19:39:11.277225   15584 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 19:39:11.277248   15584 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.16 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-857381 NodeName:addons-857381 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 19:39:11.277363   15584 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-857381"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 19:39:11.277418   15584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 19:39:11.286642   15584 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 19:39:11.286704   15584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 19:39:11.295548   15584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0930 19:39:11.311549   15584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 19:39:11.331985   15584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0930 19:39:11.348728   15584 ssh_runner.go:195] Run: grep 192.168.39.16	control-plane.minikube.internal$ /etc/hosts
	I0930 19:39:11.352327   15584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 19:39:11.364401   15584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 19:39:11.481660   15584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 19:39:11.497079   15584 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381 for IP: 192.168.39.16
	I0930 19:39:11.497100   15584 certs.go:194] generating shared ca certs ...
	I0930 19:39:11.497116   15584 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:11.497260   15584 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 19:39:11.648998   15584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt ...
	I0930 19:39:11.649025   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt: {Name:mk6e5f82ec05fd1020277cb50e5cfcc0dabcacae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:11.649213   15584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key ...
	I0930 19:39:11.649229   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key: {Name:mk0ef923818a162097b78148b543208a914b5bb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:11.649322   15584 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 19:39:11.753260   15584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt ...
	I0930 19:39:11.753290   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt: {Name:mke9d528b1a86f83c00d6802b8724e9dc7fcbf2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:11.753464   15584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key ...
	I0930 19:39:11.753479   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key: {Name:mk8d6f919cfde9b2ba252ed4e645dd7abe933692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:11.753574   15584 certs.go:256] generating profile certs ...
	I0930 19:39:11.753638   15584 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.key
	I0930 19:39:11.753663   15584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt with IP's: []
	I0930 19:39:11.993825   15584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt ...
	I0930 19:39:11.993862   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: {Name:mkfdecb09e1eaad0bf5d023250541bd133526bf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:11.994031   15584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.key ...
	I0930 19:39:11.994043   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.key: {Name:mk5b3d09b580d0cb32db7795505ff42b338bebcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:11.994106   15584 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.key.2630616d
	I0930 19:39:11.994123   15584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.crt.2630616d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.16]
	I0930 19:39:12.123421   15584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.crt.2630616d ...
	I0930 19:39:12.123454   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.crt.2630616d: {Name:mk0c51fdbf5c30101d513ddc20b36e402092303f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:12.123638   15584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.key.2630616d ...
	I0930 19:39:12.123655   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.key.2630616d: {Name:mk22e6929637babbf135e841e671bfe79d76bb0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:12.123725   15584 certs.go:381] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.crt.2630616d -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.crt
	I0930 19:39:12.123793   15584 certs.go:385] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.key.2630616d -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.key
	I0930 19:39:12.123839   15584 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/proxy-client.key
	I0930 19:39:12.123854   15584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/proxy-client.crt with IP's: []
	I0930 19:39:12.195319   15584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/proxy-client.crt ...
	I0930 19:39:12.195350   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/proxy-client.crt: {Name:mk713b9e40199aa6c8687b380ad01559be53ec34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:12.195497   15584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/proxy-client.key ...
	I0930 19:39:12.195507   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/proxy-client.key: {Name:mkea90975034f67fe95bb6a85ec32c0ef43e68e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:12.195696   15584 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 19:39:12.195729   15584 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 19:39:12.195751   15584 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 19:39:12.195774   15584 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
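
The certs.go/crypto.go sequence above is minikube's certificate bootstrap: a shared cluster CA ("minikubeCA") and a proxy-client CA, followed by per-profile certs for the client ("minikube-user"), the API server (signed for 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 192.168.39.16), and the front-proxy aggregator. minikube generates these with Go's crypto packages; purely as an illustration, and with hypothetical file names and validity periods, the same kind of material could be produced with openssl:

  # Illustrative sketch only -- not minikube's actual code path; file names and durations are assumptions.
  openssl genrsa -out ca.key 2048
  openssl req -x509 -new -key ca.key -subj "/CN=minikubeCA" -days 3650 -out ca.crt
  openssl genrsa -out apiserver.key 2048
  openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
  openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.16") \
    -days 365 -out apiserver.crt
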
	I0930 19:39:12.196294   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 19:39:12.223952   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 19:39:12.246370   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 19:39:12.279886   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 19:39:12.303029   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0930 19:39:12.325838   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 19:39:12.349163   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 19:39:12.372806   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 19:39:12.396187   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 19:39:12.420192   15584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 19:39:12.436976   15584 ssh_runner.go:195] Run: openssl version
	I0930 19:39:12.442204   15584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 19:39:12.452601   15584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:39:12.456833   15584 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:39:12.456888   15584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:39:12.462315   15584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
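
The two commands above are how the minikube CA becomes trusted on the node: the PEM is placed under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0 here). A minimal sketch of the same idea, assuming the paths shown in this log:

  # Derive the subject-hash link name and install it in the trust directory.
  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
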
	I0930 19:39:12.472654   15584 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 19:39:12.476710   15584 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 19:39:12.476772   15584 kubeadm.go:392] StartCluster: {Name:addons-857381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-857381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 19:39:12.476843   15584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 19:39:12.476890   15584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 19:39:12.509454   15584 cri.go:89] found id: ""
	I0930 19:39:12.509518   15584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 19:39:12.519690   15584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 19:39:12.528634   15584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 19:39:12.537558   15584 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 19:39:12.537580   15584 kubeadm.go:157] found existing configuration files:
	
	I0930 19:39:12.537627   15584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 19:39:12.546562   15584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 19:39:12.546615   15584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 19:39:12.555210   15584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 19:39:12.563709   15584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 19:39:12.563764   15584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 19:39:12.572594   15584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 19:39:12.580936   15584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 19:39:12.580987   15584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 19:39:12.589574   15584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 19:39:12.597837   15584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 19:39:12.597888   15584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
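
Each grep/rm pair above is the stale-config check: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is deleted before kubeadm init runs. An equivalent shell loop (a sketch, not minikube's actual implementation) would be:

  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
      || sudo rm -f "/etc/kubernetes/$f"
  done
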
	I0930 19:39:12.606734   15584 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 19:39:12.656495   15584 kubeadm.go:310] W0930 19:39:12.641183     810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 19:39:12.657151   15584 kubeadm.go:310] W0930 19:39:12.642020     810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 19:39:12.764273   15584 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 19:39:22.111607   15584 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 19:39:22.111685   15584 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 19:39:22.111776   15584 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 19:39:22.111893   15584 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 19:39:22.112027   15584 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 19:39:22.112104   15584 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 19:39:22.113710   15584 out.go:235]   - Generating certificates and keys ...
	I0930 19:39:22.113790   15584 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 19:39:22.113862   15584 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 19:39:22.113958   15584 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0930 19:39:22.114050   15584 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0930 19:39:22.114143   15584 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0930 19:39:22.114222   15584 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0930 19:39:22.114302   15584 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0930 19:39:22.114414   15584 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-857381 localhost] and IPs [192.168.39.16 127.0.0.1 ::1]
	I0930 19:39:22.114460   15584 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0930 19:39:22.114592   15584 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-857381 localhost] and IPs [192.168.39.16 127.0.0.1 ::1]
	I0930 19:39:22.114664   15584 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0930 19:39:22.114748   15584 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0930 19:39:22.114814   15584 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0930 19:39:22.114901   15584 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 19:39:22.114973   15584 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 19:39:22.115058   15584 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 19:39:22.115139   15584 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 19:39:22.115211   15584 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 19:39:22.115281   15584 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 19:39:22.115360   15584 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 19:39:22.115417   15584 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 19:39:22.116907   15584 out.go:235]   - Booting up control plane ...
	I0930 19:39:22.116999   15584 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 19:39:22.117066   15584 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 19:39:22.117129   15584 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 19:39:22.117234   15584 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 19:39:22.117369   15584 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 19:39:22.117427   15584 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 19:39:22.117597   15584 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 19:39:22.117746   15584 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 19:39:22.117827   15584 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.864878ms
	I0930 19:39:22.117935   15584 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 19:39:22.118041   15584 kubeadm.go:310] [api-check] The API server is healthy after 5.00170551s
	I0930 19:39:22.118221   15584 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 19:39:22.118406   15584 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 19:39:22.118481   15584 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 19:39:22.118679   15584 kubeadm.go:310] [mark-control-plane] Marking the node addons-857381 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 19:39:22.118753   15584 kubeadm.go:310] [bootstrap-token] Using token: 2zqthc.qj6bpwsk1i25jfw6
	I0930 19:39:22.120480   15584 out.go:235]   - Configuring RBAC rules ...
	I0930 19:39:22.120608   15584 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 19:39:22.120680   15584 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 19:39:22.120802   15584 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 19:39:22.120917   15584 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 19:39:22.121021   15584 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 19:39:22.121095   15584 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 19:39:22.121200   15584 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 19:39:22.121239   15584 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 19:39:22.121286   15584 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 19:39:22.121292   15584 kubeadm.go:310] 
	I0930 19:39:22.121363   15584 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 19:39:22.121375   15584 kubeadm.go:310] 
	I0930 19:39:22.121489   15584 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 19:39:22.121521   15584 kubeadm.go:310] 
	I0930 19:39:22.121561   15584 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 19:39:22.121648   15584 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 19:39:22.121728   15584 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 19:39:22.121740   15584 kubeadm.go:310] 
	I0930 19:39:22.121818   15584 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 19:39:22.121825   15584 kubeadm.go:310] 
	I0930 19:39:22.121895   15584 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 19:39:22.121904   15584 kubeadm.go:310] 
	I0930 19:39:22.121982   15584 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 19:39:22.122058   15584 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 19:39:22.122127   15584 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 19:39:22.122134   15584 kubeadm.go:310] 
	I0930 19:39:22.122209   15584 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 19:39:22.122279   15584 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 19:39:22.122285   15584 kubeadm.go:310] 
	I0930 19:39:22.122360   15584 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2zqthc.qj6bpwsk1i25jfw6 \
	I0930 19:39:22.122450   15584 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a \
	I0930 19:39:22.122473   15584 kubeadm.go:310] 	--control-plane 
	I0930 19:39:22.122482   15584 kubeadm.go:310] 
	I0930 19:39:22.122556   15584 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 19:39:22.122562   15584 kubeadm.go:310] 
	I0930 19:39:22.122633   15584 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2zqthc.qj6bpwsk1i25jfw6 \
	I0930 19:39:22.122742   15584 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a 
	I0930 19:39:22.122753   15584 cni.go:84] Creating CNI manager for ""
	I0930 19:39:22.122760   15584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 19:39:22.124276   15584 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 19:39:22.125392   15584 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 19:39:22.137298   15584 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
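
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration. Its exact contents are not shown in this log; a typical bridge-plus-portmap conflist (the values below are assumptions, for illustration only) could be written like this:

  # Assumed example contents; not taken from this run.
  sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{ "cniVersion": "0.3.1", "name": "bridge",
  "plugins": [
    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
      "ipMasq": true, "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    { "type": "portmap", "capabilities": { "portMappings": true } } ] }
EOF
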
	I0930 19:39:22.159047   15584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 19:39:22.159160   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:22.159174   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-857381 minikube.k8s.io/updated_at=2024_09_30T19_39_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022 minikube.k8s.io/name=addons-857381 minikube.k8s.io/primary=true
	I0930 19:39:22.178203   15584 ops.go:34] apiserver oom_adj: -16
	I0930 19:39:22.298845   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:22.799840   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:23.299680   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:23.799875   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:24.298916   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:24.799796   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:25.299026   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:25.799660   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:25.868472   15584 kubeadm.go:1113] duration metric: took 3.709383377s to wait for elevateKubeSystemPrivileges
	I0930 19:39:25.868505   15584 kubeadm.go:394] duration metric: took 13.391737223s to StartCluster
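
The burst of identical "kubectl get sa default" calls above is a readiness poll: minikube retries roughly every 500ms until the default service account exists, which is what the 3.7s elevateKubeSystemPrivileges metric measures. A shell sketch of the same wait, assuming the binary and kubeconfig paths shown in this log:

  until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
    sleep 0.5
  done
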
	I0930 19:39:25.868523   15584 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:25.868662   15584 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 19:39:25.869112   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:25.869296   15584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0930 19:39:25.869324   15584 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 19:39:25.869370   15584 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
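
The toEnable map above lists which addons this run turns on (ingress, ingress-dns, registry, metrics-server, csi-hostpath-driver, volcano, volumesnapshots, yakd, and so on). Outside the integration test, the same toggles are driven per profile through the minikube CLI, for example:

  minikube addons enable registry -p addons-857381
  minikube addons enable metrics-server -p addons-857381
  minikube addons list -p addons-857381
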
	I0930 19:39:25.869469   15584 addons.go:69] Setting gcp-auth=true in profile "addons-857381"
	I0930 19:39:25.869486   15584 addons.go:69] Setting ingress-dns=true in profile "addons-857381"
	I0930 19:39:25.869501   15584 addons.go:234] Setting addon ingress-dns=true in "addons-857381"
	I0930 19:39:25.869494   15584 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-857381"
	I0930 19:39:25.869513   15584 addons.go:69] Setting registry=true in profile "addons-857381"
	I0930 19:39:25.869513   15584 addons.go:69] Setting cloud-spanner=true in profile "addons-857381"
	I0930 19:39:25.869525   15584 addons.go:69] Setting metrics-server=true in profile "addons-857381"
	I0930 19:39:25.869535   15584 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-857381"
	I0930 19:39:25.869536   15584 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-857381"
	I0930 19:39:25.869543   15584 addons.go:234] Setting addon cloud-spanner=true in "addons-857381"
	I0930 19:39:25.869551   15584 addons.go:69] Setting inspektor-gadget=true in profile "addons-857381"
	I0930 19:39:25.869553   15584 addons.go:69] Setting volumesnapshots=true in profile "addons-857381"
	I0930 19:39:25.869554   15584 addons.go:69] Setting storage-provisioner=true in profile "addons-857381"
	I0930 19:39:25.869565   15584 addons.go:234] Setting addon inspektor-gadget=true in "addons-857381"
	I0930 19:39:25.869565   15584 addons.go:234] Setting addon volumesnapshots=true in "addons-857381"
	I0930 19:39:25.869582   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869588   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869601   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869505   15584 mustload.go:65] Loading cluster: addons-857381
	I0930 19:39:25.869549   15584 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-857381"
	I0930 19:39:25.869775   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869847   15584 config.go:182] Loaded profile config "addons-857381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 19:39:25.870033   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.870035   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.870078   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.870100   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.869567   15584 addons.go:234] Setting addon storage-provisioner=true in "addons-857381"
	I0930 19:39:25.870132   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.870145   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869529   15584 addons.go:234] Setting addon registry=true in "addons-857381"
	I0930 19:39:25.870175   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.870197   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.870083   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.870195   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869511   15584 config.go:182] Loaded profile config "addons-857381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 19:39:25.870526   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.869544   15584 addons.go:69] Setting volcano=true in profile "addons-857381"
	I0930 19:39:25.870546   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.870557   15584 addons.go:234] Setting addon volcano=true in "addons-857381"
	I0930 19:39:25.870583   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869482   15584 addons.go:69] Setting ingress=true in profile "addons-857381"
	I0930 19:39:25.870706   15584 addons.go:234] Setting addon ingress=true in "addons-857381"
	I0930 19:39:25.870739   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.870748   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.870773   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.870897   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.870911   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.871085   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.871115   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.869473   15584 addons.go:69] Setting yakd=true in profile "addons-857381"
	I0930 19:39:25.871269   15584 addons.go:234] Setting addon yakd=true in "addons-857381"
	I0930 19:39:25.871297   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869520   15584 addons.go:69] Setting default-storageclass=true in profile "addons-857381"
	I0930 19:39:25.871410   15584 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-857381"
	I0930 19:39:25.871679   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.871704   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.869539   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869545   15584 addons.go:234] Setting addon metrics-server=true in "addons-857381"
	I0930 19:39:25.871938   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.872087   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.872111   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.872268   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.872297   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.869546   15584 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-857381"
	I0930 19:39:25.869552   15584 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-857381"
	I0930 19:39:25.870118   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.873240   15584 out.go:177] * Verifying Kubernetes components...
	I0930 19:39:25.874824   15584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 19:39:25.875031   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.875068   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.870165   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.875837   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.891609   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36605
	I0930 19:39:25.891622   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36305
	I0930 19:39:25.892198   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.892648   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.892839   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.892856   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.892958   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34113
	I0930 19:39:25.893205   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.893224   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.893339   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.893526   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.893609   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.893925   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.893942   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.893985   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.894012   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.894209   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.894231   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.894604   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.896401   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32887
	I0930 19:39:25.901911   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34897
	I0930 19:39:25.908027   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.908062   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.908658   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.908681   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.910137   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36075
	I0930 19:39:25.910232   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38099
	I0930 19:39:25.910381   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.910420   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.910689   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.910814   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.910889   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.911356   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.911384   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.911518   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.911547   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.911704   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.911720   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.911760   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35065
	I0930 19:39:25.912108   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.912153   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.912245   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.912754   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.912787   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.913013   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.913047   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.913204   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.913221   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.913281   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.913621   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.914224   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.914247   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.919833   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.920758   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.920793   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.928106   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.928373   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.930483   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.930920   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.930971   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.943442   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34069
	I0930 19:39:25.946158   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0930 19:39:25.946301   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42649
	I0930 19:39:25.946399   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.947919   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.947941   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.948022   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.948109   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37203
	I0930 19:39:25.948121   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.948168   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.948220   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37497
	I0930 19:39:25.948395   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45111
	I0930 19:39:25.949364   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.949469   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.949482   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.949486   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.949535   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.950004   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.950017   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.950055   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.950147   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.950154   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.950161   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.950173   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.950552   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.950566   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.950629   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.951116   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.951576   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.951610   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.951746   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.951981   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.952074   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.952099   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.952588   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.953272   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.953294   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.953679   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.953882   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.954158   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.954184   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.954412   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:25.955485   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:25.955737   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:25.955751   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:25.955806   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:25.956180   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:25.956201   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:25.956207   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:25.956216   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:25.957588   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:25.957390   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0930 19:39:25.957452   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41277
	I0930 19:39:25.957946   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:25.957983   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:25.957992   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	W0930 19:39:25.958081   15584 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0930 19:39:25.958401   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.958881   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.958900   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.958987   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42313
	I0930 19:39:25.959289   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.959314   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.959474   15584 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0930 19:39:25.959492   15584 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0930 19:39:25.959513   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:25.959875   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.959897   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.960126   15584 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0930 19:39:25.960524   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.960672   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.961838   15584 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0930 19:39:25.961855   15584 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0930 19:39:25.961885   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:25.962881   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.962921   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.965353   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.967465   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.967720   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:25.967752   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.967998   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:25.968211   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:25.968229   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:25.968253   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.968412   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:25.968456   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:25.968558   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:25.968871   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:25.969023   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:25.969358   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:25.969828   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36923
	I0930 19:39:25.971542   15584 addons.go:234] Setting addon default-storageclass=true in "addons-857381"
	I0930 19:39:25.971578   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.971945   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.971965   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.973722   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41355
	I0930 19:39:25.974115   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45175
	I0930 19:39:25.974519   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.974915   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.975095   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.975108   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.975433   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.975634   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.975824   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.976012   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.976033   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.976430   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.976444   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.976501   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.976683   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.977028   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.977624   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.977661   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.977877   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:25.979689   15584 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-857381"
	I0930 19:39:25.979733   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.980117   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.980151   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.981658   15584 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0930 19:39:25.982583   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43149
	I0930 19:39:25.983098   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.983567   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43789
	I0930 19:39:25.983865   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.983878   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.984274   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.984379   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.984563   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.984759   15584 out.go:177]   - Using image docker.io/registry:2.8.3
	I0930 19:39:25.984836   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.984863   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.985186   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.985334   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.986318   15584 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0930 19:39:25.986335   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0930 19:39:25.986353   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:25.987060   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:25.987776   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39681
	I0930 19:39:25.988280   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.988862   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.988877   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.988935   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:25.989074   15584 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0930 19:39:25.989812   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.990023   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.990033   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.990473   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:25.990510   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.990574   15584 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 19:39:25.990597   15584 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 19:39:25.990617   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:25.991173   15584 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 19:39:25.991455   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:25.991620   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:25.991751   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:25.991860   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:25.993542   15584 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 19:39:25.993741   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 19:39:25.993761   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:25.993705   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:25.994528   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.995054   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:25.995071   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.995363   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:25.995558   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:25.995716   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:25.995862   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:25.996207   15584 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0930 19:39:25.997530   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.997597   15584 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0930 19:39:25.997617   15584 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0930 19:39:25.997635   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:25.997905   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:25.997931   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.998174   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:25.998350   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:25.998496   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:25.998614   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:26.001113   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.001606   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:26.001633   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.001819   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:26.001978   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:26.002102   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:26.002213   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:26.002507   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35615
	I0930 19:39:26.003016   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.003573   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.003590   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.004001   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.004290   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:26.007901   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46353
	I0930 19:39:26.007985   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45975
	I0930 19:39:26.008624   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.009653   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0930 19:39:26.010668   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.010726   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.011079   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.011091   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.011295   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:26.011657   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.011732   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.011763   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.012575   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.012669   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35101
	I0930 19:39:26.012829   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:26.013000   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.013407   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:26.013606   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.013621   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.013968   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.014049   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.014065   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.014119   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:26.014353   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.014494   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:26.014944   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:26.015656   15584 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0930 19:39:26.016134   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:26.016798   15584 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0930 19:39:26.017425   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:26.017622   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34427
	I0930 19:39:26.017897   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.018270   15584 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0930 19:39:26.018286   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0930 19:39:26.018301   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:26.018271   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.018352   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.018646   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.018937   15584 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0930 19:39:26.018974   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0930 19:39:26.019146   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:26.019175   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:26.019458   15584 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0930 19:39:26.019469   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0930 19:39:26.019480   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:26.022308   15584 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 19:39:26.022318   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0930 19:39:26.022462   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.023468   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.023512   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:26.023547   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.023574   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40001
	I0930 19:39:26.023698   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:26.023999   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:26.024081   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.024161   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:26.024178   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.024276   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:26.024400   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:26.024502   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:26.024632   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:26.025111   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0930 19:39:26.025197   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.025201   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:26.025212   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.025377   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:26.025647   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.025709   15584 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 19:39:26.025818   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:26.026733   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38173
	I0930 19:39:26.027178   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.028031   15584 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0930 19:39:26.028049   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0930 19:39:26.028119   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.028131   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.028181   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:26.028202   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:26.028442   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0930 19:39:26.029148   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.029701   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:26.029741   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:26.030064   15584 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0930 19:39:26.031125   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.031427   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0930 19:39:26.031525   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:26.031567   15584 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 19:39:26.031571   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.031579   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0930 19:39:26.031598   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:26.031737   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:26.031852   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:26.032014   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:26.032136   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:26.034693   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0930 19:39:26.035043   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.035464   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:26.035521   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.035730   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:26.035883   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:26.035993   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:26.036170   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:26.037151   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0930 19:39:26.038304   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0930 19:39:26.039572   15584 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0930 19:39:26.039593   15584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0930 19:39:26.039616   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:26.042725   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.043135   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:26.043161   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.043322   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:26.043504   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:26.043649   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:26.043779   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:26.046214   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42533
	I0930 19:39:26.046708   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.047211   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.047230   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.047643   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34181
	I0930 19:39:26.047658   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.047829   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:26.048012   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.048450   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.048463   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.048874   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.049079   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:26.049587   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:26.049871   15584 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 19:39:26.049894   15584 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 19:39:26.049910   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:26.050844   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:26.053693   15584 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0930 19:39:26.053892   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.054150   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:26.054175   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.054350   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:26.054606   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:26.054743   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:26.054898   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:26.057159   15584 out.go:177]   - Using image docker.io/busybox:stable
	I0930 19:39:26.058444   15584 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 19:39:26.058456   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0930 19:39:26.058471   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	W0930 19:39:26.058658   15584 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34418->192.168.39.16:22: read: connection reset by peer
	I0930 19:39:26.058676   15584 retry.go:31] will retry after 237.78819ms: ssh: handshake failed: read tcp 192.168.39.1:34418->192.168.39.16:22: read: connection reset by peer
	I0930 19:39:26.061619   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.061962   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:26.062006   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.062106   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:26.062224   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:26.062300   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:26.062361   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	W0930 19:39:26.065959   15584 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34426->192.168.39.16:22: read: connection reset by peer
	I0930 19:39:26.065979   15584 retry.go:31] will retry after 167.277624ms: ssh: handshake failed: read tcp 192.168.39.1:34426->192.168.39.16:22: read: connection reset by peer
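The two handshake failures above (sshutil.go / retry.go) are transient "connection reset by peer" errors that get retried after a short, jittered delay. A minimal sketch of that retry-with-backoff pattern, using only the Go standard library; dialSSH, the attempt count, and the delay range are illustrative assumptions, not minikube's actual retry.go code:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// dialSSH stands in for the real SSH dial; here it always fails, for illustration.
func dialSSH(addr string) error {
	return fmt.Errorf("ssh: handshake failed: connection reset by peer")
}

// retryDial retries a flaky dial with a small jittered backoff, as the log above does.
func retryDial(addr string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = dialSSH(addr); err == nil {
			return nil
		}
		wait := time.Duration(100+rand.Intn(200)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	_ = retryDial("192.168.39.16:22", 3)
}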
	I0930 19:39:26.339466   15584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 19:39:26.339517   15584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0930 19:39:26.403846   15584 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0930 19:39:26.403877   15584 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0930 19:39:26.418875   15584 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0930 19:39:26.418902   15584 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0930 19:39:26.444724   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 19:39:26.469397   15584 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0930 19:39:26.469428   15584 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0930 19:39:26.470418   15584 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 19:39:26.470454   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0930 19:39:26.484974   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0930 19:39:26.490665   15584 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0930 19:39:26.490690   15584 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0930 19:39:26.517120   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 19:39:26.544379   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0930 19:39:26.563968   15584 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0930 19:39:26.563993   15584 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0930 19:39:26.604180   15584 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0930 19:39:26.604208   15584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0930 19:39:26.620313   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0930 19:39:26.672698   15584 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0930 19:39:26.672723   15584 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0930 19:39:26.688307   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 19:39:26.714792   15584 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0930 19:39:26.714816   15584 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0930 19:39:26.728893   15584 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 19:39:26.728920   15584 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 19:39:26.744719   15584 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0930 19:39:26.744745   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0930 19:39:26.842193   15584 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0930 19:39:26.842218   15584 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0930 19:39:26.859317   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 19:39:26.899446   15584 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0930 19:39:26.899471   15584 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0930 19:39:26.904707   15584 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0930 19:39:26.904731   15584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0930 19:39:26.961885   15584 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0930 19:39:26.961904   15584 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0930 19:39:26.962165   15584 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0930 19:39:26.962184   15584 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0930 19:39:26.977061   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0930 19:39:27.039064   15584 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 19:39:27.039095   15584 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 19:39:27.067135   15584 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 19:39:27.067165   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0930 19:39:27.144070   15584 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0930 19:39:27.144093   15584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0930 19:39:27.181844   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 19:39:27.204338   15584 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0930 19:39:27.204364   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0930 19:39:27.262301   15584 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0930 19:39:27.262328   15584 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0930 19:39:27.319423   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 19:39:27.366509   15584 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0930 19:39:27.366531   15584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0930 19:39:27.474305   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0930 19:39:27.577560   15584 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0930 19:39:27.577589   15584 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0930 19:39:27.717753   15584 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0930 19:39:27.717785   15584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0930 19:39:27.874602   15584 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I0930 19:39:27.874633   15584 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I0930 19:39:27.969590   15584 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0930 19:39:27.969615   15584 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0930 19:39:28.141702   15584 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0930 19:39:28.141732   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0930 19:39:28.341745   15584 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 19:39:28.341776   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I0930 19:39:28.455162   15584 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0930 19:39:28.455188   15584 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0930 19:39:28.678401   15584 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.338898628s)
	I0930 19:39:28.678417   15584 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.338851725s)
	I0930 19:39:28.678450   15584 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
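The pipeline completed above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-only network gateway (192.168.39.1). Minikube does this with kubectl and sed over SSH, exactly as logged; the following is only a rough client-go sketch of the same idea, assuming the default kubeconfig and a Corefile that contains a standard forward block:

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable cluster via the default kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Insert a hosts block ahead of the forward plugin so in-cluster DNS
	// answers host.minikube.internal with the host IP.
	hosts := "        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }\n"
	corefile := cm.Data["Corefile"]
	if !strings.Contains(corefile, "host.minikube.internal") {
		cm.Data["Corefile"] = strings.Replace(corefile, "        forward .", hosts+"        forward .", 1)
		if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("host record injected into CoreDNS's ConfigMap")
}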
	I0930 19:39:28.679459   15584 node_ready.go:35] waiting up to 6m0s for node "addons-857381" to be "Ready" ...
	I0930 19:39:28.692964   15584 node_ready.go:49] node "addons-857381" has status "Ready":"True"
	I0930 19:39:28.693006   15584 node_ready.go:38] duration metric: took 13.512917ms for node "addons-857381" to be "Ready" ...
	I0930 19:39:28.693018   15584 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 19:39:28.694835   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 19:39:28.724666   15584 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace to be "Ready" ...
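node_ready.go and pod_ready.go above poll the API server until the node and each system-critical pod report the Ready condition. A compressed sketch of that polling loop with client-go, assuming the default kubeconfig; the namespace, pod name, and two-second interval are taken from the log only as examples, not as minikube's exact implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	// Poll every two seconds until the pod is Ready or the timeout expires.
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-jn2h5", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}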
	I0930 19:39:28.817994   15584 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0930 19:39:28.818022   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0930 19:39:29.132262   15584 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0930 19:39:29.132290   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0930 19:39:29.194565   15584 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-857381" context rescaled to 1 replicas
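The kapi.go line above scales the coredns deployment down to one replica. One way to express that against the scale subresource, again as a hedged client-go sketch rather than minikube's own code path:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Read the current scale of the coredns deployment, then write it back at 1.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}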
	I0930 19:39:29.322176   15584 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 19:39:29.322196   15584 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0930 19:39:29.581322   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 19:39:30.236110   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.751106656s)
	I0930 19:39:30.236157   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:30.236166   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:30.236216   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.719062545s)
	I0930 19:39:30.236266   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:30.236287   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:30.236293   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.691892299s)
	I0930 19:39:30.236308   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:30.236318   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:30.236701   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:30.236710   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:30.236724   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:30.236732   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:30.236735   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:30.236742   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:30.236746   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:30.236750   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:30.236752   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:30.236754   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:30.236761   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:30.236770   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:30.236772   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:30.236762   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:30.236906   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.792152494s)
	I0930 19:39:30.236927   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:30.236955   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:30.237054   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:30.237074   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:30.237097   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:30.237099   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:30.237107   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:30.237108   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:30.236777   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:30.238459   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:30.238460   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:30.238486   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:30.238495   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:30.238502   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:30.238496   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:30.238513   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:30.238523   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:30.238750   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:30.238766   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:30.238817   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:30.745068   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:32.778531   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:33.027172   15584 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0930 19:39:33.027218   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:33.031039   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:33.031563   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:33.031606   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:33.031748   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:33.031947   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:33.032091   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:33.032216   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:33.310796   15584 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0930 19:39:33.432989   15584 addons.go:234] Setting addon gcp-auth=true in "addons-857381"
	I0930 19:39:33.433075   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:33.433505   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:33.433542   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:33.450114   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33213
	I0930 19:39:33.450542   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:33.451073   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:33.451091   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:33.451989   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:33.452643   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:33.452678   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:33.467603   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I0930 19:39:33.468080   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:33.468533   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:33.468552   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:33.468882   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:33.469131   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:33.470845   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:33.471095   15584 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0930 19:39:33.471131   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:33.473943   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:33.474399   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:33.474457   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:33.474555   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:33.474733   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:33.474879   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:33.475055   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:34.292964   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.672612289s)
	I0930 19:39:34.293018   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293031   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293110   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.604771882s)
	I0930 19:39:34.293148   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293160   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.433811665s)
	I0930 19:39:34.293184   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293196   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293161   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293304   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.111420616s)
	W0930 19:39:34.293345   15584 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0930 19:39:34.293376   15584 retry.go:31] will retry after 271.524616ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
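The apply above fails because csi-hostpath-snapshotclass.yaml creates a VolumeSnapshotClass before the VolumeSnapshot CRDs submitted in the same batch have been registered by the API server, so the kind cannot be resolved yet; minikube's addons.go simply retries the whole apply after a short delay. A rough Go sketch of that retry-until-the-CRDs-exist pattern, shelling out to kubectl via os/exec; the file path, attempt count, and backoff are illustrative assumptions:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply -f <file>` until the API server has
// registered the CRDs the manifest depends on, mirroring the retry above.
func applyWithRetry(file string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "-f", file).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply %s: %v\n%s", file, err, out)
		fmt.Printf("will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
		backoff *= 2
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml", 5, 300*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}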
	I0930 19:39:34.293201   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.316113203s)
	I0930 19:39:34.293411   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293416   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.293425   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293425   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.293435   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.293443   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293449   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293531   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.293542   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.293553   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293561   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293579   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.293558   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.974102674s)
	I0930 19:39:34.293609   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.819279733s)
	I0930 19:39:34.293623   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293629   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293637   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293640   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293652   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.293625   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.293675   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.293680   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.293684   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293688   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.293692   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293697   15584 addons.go:475] Verifying addon ingress=true in "addons-857381"
	I0930 19:39:34.293758   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.598892526s)
	I0930 19:39:34.293777   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.294035   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.294048   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.294075   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.294081   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.294089   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.294095   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.294103   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.294111   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.294121   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.294128   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.294135   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.294152   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.294158   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.294343   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.294367   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.294374   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.294390   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.294397   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.294437   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.294456   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.294462   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.294469   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.294482   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.295624   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.295658   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.295665   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.296494   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.296522   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.296528   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.296878   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.296887   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.296895   15584 addons.go:475] Verifying addon registry=true in "addons-857381"
	I0930 19:39:34.296919   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.296931   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.297440   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.297455   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.296941   15584 addons.go:475] Verifying addon metrics-server=true in "addons-857381"
	I0930 19:39:34.299354   15584 out.go:177] * Verifying ingress addon...
	I0930 19:39:34.299415   15584 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-857381 service yakd-dashboard -n yakd-dashboard
	
	I0930 19:39:34.299358   15584 out.go:177] * Verifying registry addon...
	I0930 19:39:34.301748   15584 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0930 19:39:34.303967   15584 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0930 19:39:34.347114   15584 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0930 19:39:34.347135   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:34.347645   15584 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0930 19:39:34.347667   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:34.379293   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.379322   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.379589   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.379665   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.379683   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	W0930 19:39:34.379773   15584 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
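The 'default-storageclass' warning above is an optimistic-concurrency conflict: another client updated the local-path StorageClass between read and write, so the resourceVersion no longer matched. The operation is safe to retry; a hedged sketch of the manual equivalent (the annotation key is the standard default-class marker, and the context name is taken from this log):

	# mark local-path as non-default; simply re-run if the conflict recurs
	kubectl --context addons-857381 patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'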
	I0930 19:39:34.391480   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.391514   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.391850   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.391871   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.565511   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 19:39:34.806600   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:34.810513   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:35.232349   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:35.308666   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:35.309108   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:35.828683   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.247295259s)
	I0930 19:39:35.828738   15584 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.357617005s)
	I0930 19:39:35.828744   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:35.828881   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:35.829247   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:35.829301   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:35.829316   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:35.829324   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:35.829631   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:35.829656   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:35.829663   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:35.829671   15584 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-857381"
	I0930 19:39:35.830414   15584 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0930 19:39:35.831442   15584 out.go:177] * Verifying csi-hostpath-driver addon...
	I0930 19:39:35.833074   15584 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 19:39:35.834046   15584 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0930 19:39:35.834254   15584 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0930 19:39:35.834271   15584 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0930 19:39:35.839940   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:35.840343   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:35.847244   15584 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0930 19:39:35.847276   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:35.938617   15584 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0930 19:39:35.938652   15584 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0930 19:39:36.063928   15584 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 19:39:36.063961   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0930 19:39:36.120314   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 19:39:36.309391   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:36.314236   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:36.340348   15584 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0930 19:39:36.340371   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:36.804872   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.239314953s)
	I0930 19:39:36.804918   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:36.804933   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:36.805171   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:36.805189   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:36.805199   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:36.805208   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:36.805433   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:36.805454   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:36.967227   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:36.967460   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:36.967876   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:37.247223   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:37.307184   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:37.314533   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:37.345378   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:37.526802   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.406437983s)
	I0930 19:39:37.526855   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:37.526879   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:37.527198   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:37.527257   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:37.527271   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:37.527280   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:37.527210   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:37.527501   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:37.527522   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:37.529551   15584 addons.go:475] Verifying addon gcp-auth=true in "addons-857381"
	I0930 19:39:37.531033   15584 out.go:177] * Verifying gcp-auth addon...
	I0930 19:39:37.533661   15584 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0930 19:39:37.562401   15584 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0930 19:39:37.562432   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:37.806737   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:37.809253   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:37.839020   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:38.038065   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:38.305905   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:38.309675   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:38.339300   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:38.537175   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:38.807194   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:38.808182   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:38.839444   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:39.038213   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:39.305965   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:39.307430   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:39.339933   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:39.538121   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:39.731775   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:39.806783   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:39.808801   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:39.839365   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:40.037438   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:40.306846   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:40.308993   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:40.338409   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:40.538055   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:40.806222   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:40.808300   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:40.843451   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:41.038963   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:41.227711   15584 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-jn2h5" not found
	I0930 19:39:41.227748   15584 pod_ready.go:82] duration metric: took 12.503044527s for pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace to be "Ready" ...
	E0930 19:39:41.227761   15584 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-jn2h5" not found
	I0930 19:39:41.227771   15584 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace to be "Ready" ...
	I0930 19:39:41.308109   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:41.309908   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:41.338978   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:41.537501   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:41.808520   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:41.809542   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:41.840311   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:42.148099   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:42.306741   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:42.308939   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:42.338534   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:42.537098   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:42.805061   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:42.807375   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:42.838837   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:43.037381   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:43.234216   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:43.305308   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:43.308022   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:43.339943   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:43.537233   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:43.805707   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:43.811783   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:43.839510   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:44.037858   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:44.306420   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:44.308934   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:44.338485   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:44.537622   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:44.806844   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:44.808702   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:44.838957   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:45.036848   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:45.234876   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:45.306328   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:45.308712   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:45.343763   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:45.536859   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:45.806211   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:45.808798   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:45.839561   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:46.037708   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:46.308046   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:46.308610   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:46.339634   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:46.537600   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:46.805549   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:46.807820   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:46.838167   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:47.037473   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:47.306050   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:47.308153   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:47.339967   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:47.537051   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:47.734887   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:47.813723   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:47.814301   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:47.840811   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:48.038333   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:48.311855   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:48.312416   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:48.341988   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:48.537651   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:48.806200   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:48.809450   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:48.838999   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:49.037711   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:49.305793   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:49.307907   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:49.339445   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:49.537409   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:49.806209   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:49.808533   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:49.839853   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:50.037854   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:50.234421   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:50.306910   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:50.308611   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:50.339584   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:50.546089   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:50.806461   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:50.808559   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:50.839824   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:51.037595   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:51.305471   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:51.308222   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:51.338416   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:51.537082   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:51.806079   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:51.809149   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:51.838774   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:52.037195   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:52.236908   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:52.307438   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:52.309988   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:52.339786   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:52.539520   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:52.807714   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:52.811031   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:52.839082   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:53.037682   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:53.305629   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:53.307981   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:53.338463   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:53.537098   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:53.806021   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:53.810331   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:53.838769   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:54.091895   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:54.306715   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:54.308449   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:54.338829   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:54.540280   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:54.734396   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:54.805806   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:54.808652   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:54.838947   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:55.037868   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:55.305594   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:55.308020   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:55.338849   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:55.537911   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:55.805987   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:55.808899   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:55.839439   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:56.038492   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:56.316176   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:56.316378   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:56.340370   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:56.538344   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:56.734461   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:56.806516   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:56.809839   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:56.839171   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:57.038430   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:57.305462   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:57.307742   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:57.340252   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:57.537058   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:57.806338   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:57.808421   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:57.839125   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:58.037542   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:58.306156   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:58.307603   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:58.339349   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:58.538543   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:58.734586   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:58.807381   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:58.809120   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:58.908109   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:59.037847   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:59.306124   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:59.307264   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:59.338804   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:59.537010   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:59.806260   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:59.808807   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:59.839439   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:00.036904   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:00.306219   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:00.308277   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:00.339116   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:00.538595   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:00.735277   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:40:00.808141   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:00.808374   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:00.838895   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:01.037765   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:01.306325   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:01.309240   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:01.338334   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:01.540483   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:01.805905   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:01.808599   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:01.856980   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:02.038458   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:02.306037   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:02.308480   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:02.338925   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:02.537489   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:02.806720   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:02.809311   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:02.839215   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:03.038706   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:03.235095   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:40:03.305605   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:03.308118   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:03.339088   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:03.537176   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:03.806049   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:03.808024   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:03.840285   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:04.047284   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:04.234184   15584 pod_ready.go:93] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"True"
	I0930 19:40:04.234214   15584 pod_ready.go:82] duration metric: took 23.006434066s for pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.234227   15584 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-857381" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.238876   15584 pod_ready.go:93] pod "etcd-addons-857381" in "kube-system" namespace has status "Ready":"True"
	I0930 19:40:04.238896   15584 pod_ready.go:82] duration metric: took 4.661667ms for pod "etcd-addons-857381" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.238905   15584 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-857381" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.243161   15584 pod_ready.go:93] pod "kube-apiserver-addons-857381" in "kube-system" namespace has status "Ready":"True"
	I0930 19:40:04.243185   15584 pod_ready.go:82] duration metric: took 4.272909ms for pod "kube-apiserver-addons-857381" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.243204   15584 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-857381" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.247507   15584 pod_ready.go:93] pod "kube-controller-manager-addons-857381" in "kube-system" namespace has status "Ready":"True"
	I0930 19:40:04.247544   15584 pod_ready.go:82] duration metric: took 4.329628ms for pod "kube-controller-manager-addons-857381" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.247558   15584 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wgjdg" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.253066   15584 pod_ready.go:93] pod "kube-proxy-wgjdg" in "kube-system" namespace has status "Ready":"True"
	I0930 19:40:04.253097   15584 pod_ready.go:82] duration metric: took 5.523ms for pod "kube-proxy-wgjdg" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.253108   15584 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-857381" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.305855   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:04.308368   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:04.338826   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:04.537032   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:04.632342   15584 pod_ready.go:93] pod "kube-scheduler-addons-857381" in "kube-system" namespace has status "Ready":"True"
	I0930 19:40:04.632365   15584 pod_ready.go:82] duration metric: took 379.250879ms for pod "kube-scheduler-addons-857381" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.632374   15584 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-9vf5l" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.805742   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:04.808493   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:04.838704   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:05.032445   15584 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-9vf5l" in "kube-system" namespace has status "Ready":"True"
	I0930 19:40:05.032469   15584 pod_ready.go:82] duration metric: took 400.088015ms for pod "nvidia-device-plugin-daemonset-9vf5l" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:05.032476   15584 pod_ready.go:39] duration metric: took 36.339446224s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 19:40:05.032494   15584 api_server.go:52] waiting for apiserver process to appear ...
	I0930 19:40:05.032544   15584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 19:40:05.037739   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:05.077269   15584 api_server.go:72] duration metric: took 39.20789395s to wait for apiserver process to appear ...
	I0930 19:40:05.077297   15584 api_server.go:88] waiting for apiserver healthz status ...
	I0930 19:40:05.077318   15584 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0930 19:40:05.081429   15584 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I0930 19:40:05.082415   15584 api_server.go:141] control plane version: v1.31.1
	I0930 19:40:05.082441   15584 api_server.go:131] duration metric: took 5.135906ms to wait for apiserver health ...
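The healthz wait above amounts to a GET against https://192.168.39.16:8443/healthz that returns the literal body "ok". A sketch of the same probe from outside the test harness (anonymous access to /healthz relies on the default public-info-viewer binding, which is an assumption about this cluster's RBAC):

	# via the kubeconfig context created by the test
	kubectl --context addons-857381 get --raw /healthz
	# or directly against the endpoint logged above
	curl -k https://192.168.39.16:8443/healthz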
	I0930 19:40:05.082450   15584 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 19:40:05.248118   15584 system_pods.go:59] 17 kube-system pods found
	I0930 19:40:05.248151   15584 system_pods.go:61] "coredns-7c65d6cfc9-v2sl5" [7ef3332d-3ee7-4d76-bbef-2dfc99673515] Running
	I0930 19:40:05.248159   15584 system_pods.go:61] "csi-hostpath-attacher-0" [e77d98c4-0779-493d-b89f-2fbd4a41b6ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0930 19:40:05.248165   15584 system_pods.go:61] "csi-hostpath-resizer-0" [e32a8d15-973d-404b-9619-491fa27decc4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0930 19:40:05.248173   15584 system_pods.go:61] "csi-hostpathplugin-mlgws" [2f7276d7-5e87-4d2e-bd1a-6e104f3fd164] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0930 19:40:05.248178   15584 system_pods.go:61] "etcd-addons-857381" [74fe1626-8e74-435e-a2dd-f088265d04ac] Running
	I0930 19:40:05.248182   15584 system_pods.go:61] "kube-apiserver-addons-857381" [74358463-31fa-4b2f-ba36-4d0c4f5b03db] Running
	I0930 19:40:05.248185   15584 system_pods.go:61] "kube-controller-manager-addons-857381" [155182cf-78af-450c-923a-dfeb7b2a5358] Running
	I0930 19:40:05.248191   15584 system_pods.go:61] "kube-ingress-dns-minikube" [e1217c30-4e9c-43fa-a3f6-0a640781c5f8] Running
	I0930 19:40:05.248194   15584 system_pods.go:61] "kube-proxy-wgjdg" [b2646cb6-ecf8-4e44-9d48-b49eead7d727] Running
	I0930 19:40:05.248197   15584 system_pods.go:61] "kube-scheduler-addons-857381" [952cc18b-d292-4baa-8a03-dce05fdabe5c] Running
	I0930 19:40:05.248204   15584 system_pods.go:61] "metrics-server-84c5f94fbc-cdn25" [b344652c-decb-4b68-9eb4-dd034008cf98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 19:40:05.248207   15584 system_pods.go:61] "nvidia-device-plugin-daemonset-9vf5l" [f2848172-eec4-47cc-9e9d-36026e22b55c] Running
	I0930 19:40:05.248211   15584 system_pods.go:61] "registry-66c9cd494c-frqrv" [e66e6fb9-7274-4a0b-b787-c64abc8ffe04] Running
	I0930 19:40:05.248216   15584 system_pods.go:61] "registry-proxy-m2j7k" [cf0e9fcc-d5e3-4dd8-8337-406b07ab9495] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0930 19:40:05.248223   15584 system_pods.go:61] "snapshot-controller-56fcc65765-g26cx" [0a7563fa-d127-473c-b9a1-ece459d51ec0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 19:40:05.248256   15584 system_pods.go:61] "snapshot-controller-56fcc65765-vqjbn" [68d33976-a421-4696-83a7-303c2bf65ba3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 19:40:05.248264   15584 system_pods.go:61] "storage-provisioner" [cf253e6d-52dd-4bbf-a505-61269b1bb4d1] Running
	I0930 19:40:05.248271   15584 system_pods.go:74] duration metric: took 165.811366ms to wait for pod list to return data ...
	I0930 19:40:05.248282   15584 default_sa.go:34] waiting for default service account to be created ...
	I0930 19:40:05.319334   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:05.321630   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:05.349289   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:05.432684   15584 default_sa.go:45] found service account: "default"
	I0930 19:40:05.432711   15584 default_sa.go:55] duration metric: took 184.42325ms for default service account to be created ...
	I0930 19:40:05.432720   15584 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 19:40:05.537876   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:05.637325   15584 system_pods.go:86] 17 kube-system pods found
	I0930 19:40:05.637354   15584 system_pods.go:89] "coredns-7c65d6cfc9-v2sl5" [7ef3332d-3ee7-4d76-bbef-2dfc99673515] Running
	I0930 19:40:05.637363   15584 system_pods.go:89] "csi-hostpath-attacher-0" [e77d98c4-0779-493d-b89f-2fbd4a41b6ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0930 19:40:05.637368   15584 system_pods.go:89] "csi-hostpath-resizer-0" [e32a8d15-973d-404b-9619-491fa27decc4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0930 19:40:05.637376   15584 system_pods.go:89] "csi-hostpathplugin-mlgws" [2f7276d7-5e87-4d2e-bd1a-6e104f3fd164] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0930 19:40:05.637380   15584 system_pods.go:89] "etcd-addons-857381" [74fe1626-8e74-435e-a2dd-f088265d04ac] Running
	I0930 19:40:05.637384   15584 system_pods.go:89] "kube-apiserver-addons-857381" [74358463-31fa-4b2f-ba36-4d0c4f5b03db] Running
	I0930 19:40:05.637387   15584 system_pods.go:89] "kube-controller-manager-addons-857381" [155182cf-78af-450c-923a-dfeb7b2a5358] Running
	I0930 19:40:05.637392   15584 system_pods.go:89] "kube-ingress-dns-minikube" [e1217c30-4e9c-43fa-a3f6-0a640781c5f8] Running
	I0930 19:40:05.637395   15584 system_pods.go:89] "kube-proxy-wgjdg" [b2646cb6-ecf8-4e44-9d48-b49eead7d727] Running
	I0930 19:40:05.637399   15584 system_pods.go:89] "kube-scheduler-addons-857381" [952cc18b-d292-4baa-8a03-dce05fdabe5c] Running
	I0930 19:40:05.637405   15584 system_pods.go:89] "metrics-server-84c5f94fbc-cdn25" [b344652c-decb-4b68-9eb4-dd034008cf98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 19:40:05.637410   15584 system_pods.go:89] "nvidia-device-plugin-daemonset-9vf5l" [f2848172-eec4-47cc-9e9d-36026e22b55c] Running
	I0930 19:40:05.637416   15584 system_pods.go:89] "registry-66c9cd494c-frqrv" [e66e6fb9-7274-4a0b-b787-c64abc8ffe04] Running
	I0930 19:40:05.637423   15584 system_pods.go:89] "registry-proxy-m2j7k" [cf0e9fcc-d5e3-4dd8-8337-406b07ab9495] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0930 19:40:05.637433   15584 system_pods.go:89] "snapshot-controller-56fcc65765-g26cx" [0a7563fa-d127-473c-b9a1-ece459d51ec0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 19:40:05.637446   15584 system_pods.go:89] "snapshot-controller-56fcc65765-vqjbn" [68d33976-a421-4696-83a7-303c2bf65ba3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 19:40:05.637453   15584 system_pods.go:89] "storage-provisioner" [cf253e6d-52dd-4bbf-a505-61269b1bb4d1] Running
	I0930 19:40:05.637460   15584 system_pods.go:126] duration metric: took 204.735253ms to wait for k8s-apps to be running ...
	I0930 19:40:05.637471   15584 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 19:40:05.637512   15584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 19:40:05.664635   15584 system_svc.go:56] duration metric: took 27.157381ms WaitForService to wait for kubelet
	I0930 19:40:05.664667   15584 kubeadm.go:582] duration metric: took 39.795308561s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 19:40:05.664684   15584 node_conditions.go:102] verifying NodePressure condition ...
	I0930 19:40:05.806621   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:05.809736   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:05.833501   15584 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 19:40:05.833531   15584 node_conditions.go:123] node cpu capacity is 2
	I0930 19:40:05.833544   15584 node_conditions.go:105] duration metric: took 168.855642ms to run NodePressure ...
	I0930 19:40:05.833558   15584 start.go:241] waiting for startup goroutines ...
	I0930 19:40:05.838853   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:06.201378   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:06.305678   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:06.309215   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:06.338426   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:06.537088   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:06.805556   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:06.807670   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:06.837888   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:07.037594   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:07.306997   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:07.308373   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:07.339605   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:07.537323   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:07.806225   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:07.808962   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:07.840424   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:08.038714   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:08.315435   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:08.316984   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:08.338567   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:08.539077   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:08.806404   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:08.807794   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:08.838111   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:09.039411   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:09.306781   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:09.308706   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:09.338817   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:09.541907   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:09.806151   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:09.808679   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:09.839864   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:10.037757   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:10.306476   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:10.309294   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:10.338729   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:10.537365   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:10.806186   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:10.808553   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:10.838954   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:11.038197   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:11.305362   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:11.307868   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:11.338450   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:11.537023   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:11.805980   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:11.807997   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:11.838687   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:12.038101   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:12.305891   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:12.308058   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:12.338527   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:12.537006   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:12.805026   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:12.807440   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:12.838745   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:13.036973   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:13.316029   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:13.316819   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:13.339318   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:13.537656   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:13.806393   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:13.809221   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:13.838943   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:14.036710   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:14.305575   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:14.307510   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:14.339024   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:14.746118   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:14.805546   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:14.808182   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:14.839255   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:15.038456   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:15.306259   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:15.308763   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:15.338218   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:15.537663   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:15.806502   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:15.809322   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:15.838920   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:16.038201   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:16.305842   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:16.308119   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:16.338442   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:16.536865   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:16.806565   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:16.809083   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:16.839057   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:17.037476   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:17.306218   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:17.308220   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:17.338656   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:17.538612   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:17.806377   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:17.808904   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:17.838105   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:18.037920   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:18.306007   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:18.308381   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:18.338711   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:18.537393   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:18.806335   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:18.809582   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:18.840209   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:19.036945   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:19.306469   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:19.308307   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:19.338954   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:19.537674   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:19.806934   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:19.808546   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:19.839444   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:20.037215   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:20.305907   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:20.308689   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:20.339344   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:20.538374   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:20.808450   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:20.808767   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:20.839145   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:21.037658   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:21.306332   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:21.310114   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:21.341224   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:21.537216   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:21.806169   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:21.808637   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:21.842275   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:22.038267   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:22.305922   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:22.308301   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:22.342967   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:22.537729   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:22.810668   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:22.811005   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:22.839120   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:23.037454   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:23.306993   15584 kapi.go:107] duration metric: took 49.005242803s to wait for kubernetes.io/minikube-addons=registry ...
	I0930 19:40:23.308292   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:23.340880   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:23.537538   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:23.808649   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:23.838719   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:24.037027   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:24.311020   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:24.339930   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:24.537448   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:24.808165   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:24.840330   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:25.038012   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:25.310485   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:25.338594   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:25.537562   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:25.808768   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:25.840491   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:26.337884   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:26.339802   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:26.342878   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:26.538146   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:26.810441   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:26.911692   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:27.037138   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:27.307981   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:27.338514   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:27.537541   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:27.808034   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:27.838767   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:28.037949   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:28.315914   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:28.346567   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:28.539119   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:28.808853   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:28.838437   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:29.036989   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:29.308729   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:29.339702   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:29.537814   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:29.808942   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:29.841777   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:30.038084   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:30.307636   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:30.339110   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:30.538667   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:30.808685   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:30.838911   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:31.037786   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:31.309187   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:31.338193   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:31.538062   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:31.810154   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:31.844570   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:32.036891   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:32.309059   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:32.338920   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:32.538629   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:32.811819   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:32.840003   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:33.298376   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:33.314136   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:33.405537   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:33.536782   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:33.810211   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:33.838557   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:34.038758   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:34.308572   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:34.338993   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:34.538664   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:34.809265   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:34.838824   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:35.038820   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:35.309811   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:35.338667   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:35.538473   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:35.809185   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:35.840427   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:36.037848   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:36.309172   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:36.344741   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:36.537522   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:36.815421   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:36.846933   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:37.038118   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:37.307913   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:37.339870   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:37.545907   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:37.809630   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:37.838804   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:38.036948   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:38.319878   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:38.342775   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:38.537998   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:38.809824   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:38.915083   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:39.041765   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:39.309331   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:39.342044   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:39.537640   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:39.808078   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:39.838346   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:40.036732   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:40.309104   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:40.338364   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:40.544312   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:40.808442   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:40.909737   15584 kapi.go:107] duration metric: took 1m5.075684221s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0930 19:40:41.037117   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:41.307717   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:41.538444   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:41.808544   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:42.037764   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:42.308953   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:42.538432   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:42.808497   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:43.038173   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:43.309165   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:43.537280   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:43.808012   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:44.037523   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:44.308211   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:45.043029   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:45.043273   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:45.047140   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:45.308014   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:45.537537   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:45.808735   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:46.037888   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:46.309235   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:46.537513   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:46.808314   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:47.038548   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:47.308644   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:47.538083   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:47.807931   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:48.038183   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:48.308144   15584 kapi.go:107] duration metric: took 1m14.004175846s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0930 19:40:48.538107   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:49.038498   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:49.537789   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:50.038155   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:50.613944   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:51.038032   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:51.537506   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:52.040616   15584 kapi.go:107] duration metric: took 1m14.506956805s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0930 19:40:52.041976   15584 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-857381 cluster.
	I0930 19:40:52.043243   15584 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0930 19:40:52.044410   15584 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0930 19:40:52.045758   15584 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, cloud-spanner, storage-provisioner, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0930 19:40:52.046831   15584 addons.go:510] duration metric: took 1m26.177460547s for enable addons: enabled=[ingress-dns nvidia-device-plugin cloud-spanner storage-provisioner inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0930 19:40:52.046869   15584 start.go:246] waiting for cluster config update ...
	I0930 19:40:52.046883   15584 start.go:255] writing updated cluster config ...
	I0930 19:40:52.047117   15584 ssh_runner.go:195] Run: rm -f paused
	I0930 19:40:52.098683   15584 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 19:40:52.100271   15584 out.go:177] * Done! kubectl is now configured to use "addons-857381" cluster and "default" namespace by default
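	
	The gcp-auth notes in the log above describe an opt-out mechanism: once the addon is enabled, GCP credentials are mounted into newly created pods unless the pod carries the gcp-auth-skip-secret label. A minimal sketch of how that could be exercised against this cluster, assuming the label is honored with the value "true" (the label key comes from the log message itself; the pod name, value, and jsonpath check below are illustrative and not part of this test run):

	  # hypothetical check, mirroring the kubectl invocations used elsewhere in this report
	  kubectl --context addons-857381 run skip-auth-demo --image=gcr.io/k8s-minikube/busybox \
	    --labels="gcp-auth-skip-secret=true" --restart=Never -- sleep 300
	  # list the pod's volumes; a pod created WITHOUT the label would additionally show
	  # the credential volume injected by the gcp-auth webhook
	  kubectl --context addons-857381 get pod skip-auth-demo -o jsonpath='{.spec.volumes[*].name}'

	Per the log, pods that already existed when the addon was enabled are not mutated; they would need to be recreated, or the addon re-enabled with --refresh.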
	
	
	==> CRI-O <==
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.362407432Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727725962362378576,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c58238d5-2789-4b37-a3db-51ce46d51344 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.363013800Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1614eba3-69ce-49cc-b040-a64e04b06ff5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.363091140Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1614eba3-69ce-49cc-b040-a64e04b06ff5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.363363585Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d877c2fce2507c1237bb109f59dd388cf7efbb0f76a1402526c779fe7140764,PodSandboxId:5f918ee4dd435117ab962a7aba5a72be46d9c77da93ecebd3656ecafc581b67e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727725955386657028,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-g2hjs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ba8083f-a0ac-459b-8296-63da132aaac1,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:379432eba48bb0fbab152d3f1013d9b37a95e87e739158fb313fa0b78ff8e264,PodSandboxId:7186dc43443428dc9dd097d0de0b6842c2db7d0aa646939da7dfdcaa6c1fd4a9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727725814733655838,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b659e53f-9c5e-499b-b386-a5be26a79083,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a550a25e9f7b3586687046f535b548383c78708b97eaeed7576b35b5dcee1ef,PodSandboxId:2927b71f84ff3f76f3a52a1aecbd72a68cfa19e0cdca879f3210c117c839294f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727725251528262837,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-scvnm,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 5e438281-5451-4290-8c50-14fb79a66185,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46f33863b6d216c85078337b5eefc34ba3141590e24ec8b9dfbb21d10595b84e,PodSandboxId:3e88376f8e4f3c5da30623befddc798d3597e97f13199051087ad81a73199883,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727725226573713791,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-cgdc6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 81717421-6023-4cfb-acff-733a7ea02838,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:831ffd5c60190ad65b735f6a1c699bb486f24c54379a56cc2a077aac0eb4c325,PodSandboxId:f002aa1c3285a2c33f423dfce6f5f97d16dbd6ad2adcb4888ad0d38d814ac293,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727725226432061841,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qv7n8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8162826e-db14-46b9-93f2-456169ccfb0d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6b2eb356f364b36c053fa5a0a1c21d994a9edc83b54fdd58a38023aea0e8013,PodSandboxId:5d866c50845926549f01df87a9908307213fc5caa20603d75bdd4c898c23d1c3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172772
5209633050557,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-cdn25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b344652c-decb-4b68-9eb4-dd034008cf98,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34fdddbc2729cc844420cf24fc3341fed3211c151111cf0f43b8a87ed1b078ab,PodSandboxId:44e738ed93b01a10a8ff2fe7b585def59079d101143e4555486329cd7fcc73b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727725171524308003,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf253e6d-52dd-4bbf-a505-61269b1bb4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2f669f59ff8429d81fb4f5162e27ce06e17473d4605e0d1412e6b895b9ffec,PodSandboxId:7264dffbc56c756580b1699b46a98d026060043f7ded85528176c4468f3e54d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724
c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727725169673865152,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2sl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ef3332d-3ee7-4d76-bbef-2dfc99673515,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4a5712da231889676b696f91670decbc5f5f8c36b118a9dc265d962f5d249a,PodSandboxId:cbd8bbc0b830527874fdbef734642c050e7e6a62986ee8cdf383f82424b3b1c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727725167873622399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wgjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2646cb6-ecf8-4e44-9d48-b49eead7d727,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611b55895a7c3a5335fbb46b041625f86ca6d6031352bcde4b032dab9de47e67,PodSandboxId:472730560a69cb865a7de097b81e5d7c46896bf3dfef03d491afa5c9add05b76,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915
af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727725156408359954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 509234ffc60223733ef52b2009dbce73,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f613c2d90480ee1ae214e03080c452973dd772a7c6f008a8764350f7e1943eb,PodSandboxId:45990caa9ec749761565324cc3ffda13e0181f617a83701013fa0c2c91467ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea630022894
16a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727725156391153567,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 462c1efc125130690ce0abe7c0d6a433,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f054c208a5bd0eb1494d0e174024a758694fd0eca27fb153e9b6b1ba005ff377,PodSandboxId:f599de907322667aeed83b2705fea682b338d49da5ee13de1790e02e7e4e8a99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,A
nnotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727725156395714900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c22ddcce59702bad76d277171c4f1a8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6ba6b23751a363846407405c025305c70dc80dbf68869142a0ee6929093b01e,PodSandboxId:329303fea433cc4c43cb1ec6a4a7d52fafbb483b77613fefca8466b49fcac7b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727725156374738044,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aaf74d96d0249f06846b94c74ecc9cd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1614eba3-69ce-49cc-b040-a64e04b06ff5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.408205843Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=85d91540-da7d-4165-b166-7635e0957d6f name=/runtime.v1.RuntimeService/Version
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.408303130Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=85d91540-da7d-4165-b166-7635e0957d6f name=/runtime.v1.RuntimeService/Version
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.409808265Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=75faf493-e223-4183-89a6-e88345a35977 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.410886988Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727725962410858832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=75faf493-e223-4183-89a6-e88345a35977 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.411332255Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ecf9b979-5789-4f2c-bf15-8fb10f684b7f name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.411394432Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ecf9b979-5789-4f2c-bf15-8fb10f684b7f name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.411839768Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d877c2fce2507c1237bb109f59dd388cf7efbb0f76a1402526c779fe7140764,PodSandboxId:5f918ee4dd435117ab962a7aba5a72be46d9c77da93ecebd3656ecafc581b67e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727725955386657028,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-g2hjs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ba8083f-a0ac-459b-8296-63da132aaac1,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:379432eba48bb0fbab152d3f1013d9b37a95e87e739158fb313fa0b78ff8e264,PodSandboxId:7186dc43443428dc9dd097d0de0b6842c2db7d0aa646939da7dfdcaa6c1fd4a9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727725814733655838,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b659e53f-9c5e-499b-b386-a5be26a79083,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a550a25e9f7b3586687046f535b548383c78708b97eaeed7576b35b5dcee1ef,PodSandboxId:2927b71f84ff3f76f3a52a1aecbd72a68cfa19e0cdca879f3210c117c839294f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727725251528262837,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-scvnm,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 5e438281-5451-4290-8c50-14fb79a66185,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46f33863b6d216c85078337b5eefc34ba3141590e24ec8b9dfbb21d10595b84e,PodSandboxId:3e88376f8e4f3c5da30623befddc798d3597e97f13199051087ad81a73199883,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727725226573713791,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-cgdc6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 81717421-6023-4cfb-acff-733a7ea02838,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:831ffd5c60190ad65b735f6a1c699bb486f24c54379a56cc2a077aac0eb4c325,PodSandboxId:f002aa1c3285a2c33f423dfce6f5f97d16dbd6ad2adcb4888ad0d38d814ac293,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727725226432061841,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qv7n8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8162826e-db14-46b9-93f2-456169ccfb0d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6b2eb356f364b36c053fa5a0a1c21d994a9edc83b54fdd58a38023aea0e8013,PodSandboxId:5d866c50845926549f01df87a9908307213fc5caa20603d75bdd4c898c23d1c3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172772
5209633050557,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-cdn25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b344652c-decb-4b68-9eb4-dd034008cf98,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34fdddbc2729cc844420cf24fc3341fed3211c151111cf0f43b8a87ed1b078ab,PodSandboxId:44e738ed93b01a10a8ff2fe7b585def59079d101143e4555486329cd7fcc73b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727725171524308003,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf253e6d-52dd-4bbf-a505-61269b1bb4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2f669f59ff8429d81fb4f5162e27ce06e17473d4605e0d1412e6b895b9ffec,PodSandboxId:7264dffbc56c756580b1699b46a98d026060043f7ded85528176c4468f3e54d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724
c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727725169673865152,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2sl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ef3332d-3ee7-4d76-bbef-2dfc99673515,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4a5712da231889676b696f91670decbc5f5f8c36b118a9dc265d962f5d249a,PodSandboxId:cbd8bbc0b830527874fdbef734642c050e7e6a62986ee8cdf383f82424b3b1c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727725167873622399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wgjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2646cb6-ecf8-4e44-9d48-b49eead7d727,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611b55895a7c3a5335fbb46b041625f86ca6d6031352bcde4b032dab9de47e67,PodSandboxId:472730560a69cb865a7de097b81e5d7c46896bf3dfef03d491afa5c9add05b76,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915
af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727725156408359954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 509234ffc60223733ef52b2009dbce73,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f613c2d90480ee1ae214e03080c452973dd772a7c6f008a8764350f7e1943eb,PodSandboxId:45990caa9ec749761565324cc3ffda13e0181f617a83701013fa0c2c91467ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea630022894
16a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727725156391153567,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 462c1efc125130690ce0abe7c0d6a433,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f054c208a5bd0eb1494d0e174024a758694fd0eca27fb153e9b6b1ba005ff377,PodSandboxId:f599de907322667aeed83b2705fea682b338d49da5ee13de1790e02e7e4e8a99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,A
nnotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727725156395714900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c22ddcce59702bad76d277171c4f1a8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6ba6b23751a363846407405c025305c70dc80dbf68869142a0ee6929093b01e,PodSandboxId:329303fea433cc4c43cb1ec6a4a7d52fafbb483b77613fefca8466b49fcac7b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727725156374738044,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aaf74d96d0249f06846b94c74ecc9cd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ecf9b979-5789-4f2c-bf15-8fb10f684b7f name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.448985850Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3fc58f6e-ff79-4d2a-81d0-81f6e081385b name=/runtime.v1.RuntimeService/Version
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.449073140Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3fc58f6e-ff79-4d2a-81d0-81f6e081385b name=/runtime.v1.RuntimeService/Version
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.450553749Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e05beea-628c-49cd-8cc9-062c57d2fd28 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.451788333Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727725962451759860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e05beea-628c-49cd-8cc9-062c57d2fd28 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.452387175Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=421e5955-8100-41cb-a115-63f428c33fc1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.452501947Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=421e5955-8100-41cb-a115-63f428c33fc1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.452790901Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d877c2fce2507c1237bb109f59dd388cf7efbb0f76a1402526c779fe7140764,PodSandboxId:5f918ee4dd435117ab962a7aba5a72be46d9c77da93ecebd3656ecafc581b67e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727725955386657028,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-g2hjs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ba8083f-a0ac-459b-8296-63da132aaac1,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:379432eba48bb0fbab152d3f1013d9b37a95e87e739158fb313fa0b78ff8e264,PodSandboxId:7186dc43443428dc9dd097d0de0b6842c2db7d0aa646939da7dfdcaa6c1fd4a9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727725814733655838,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b659e53f-9c5e-499b-b386-a5be26a79083,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a550a25e9f7b3586687046f535b548383c78708b97eaeed7576b35b5dcee1ef,PodSandboxId:2927b71f84ff3f76f3a52a1aecbd72a68cfa19e0cdca879f3210c117c839294f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727725251528262837,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-scvnm,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 5e438281-5451-4290-8c50-14fb79a66185,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46f33863b6d216c85078337b5eefc34ba3141590e24ec8b9dfbb21d10595b84e,PodSandboxId:3e88376f8e4f3c5da30623befddc798d3597e97f13199051087ad81a73199883,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727725226573713791,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-cgdc6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 81717421-6023-4cfb-acff-733a7ea02838,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:831ffd5c60190ad65b735f6a1c699bb486f24c54379a56cc2a077aac0eb4c325,PodSandboxId:f002aa1c3285a2c33f423dfce6f5f97d16dbd6ad2adcb4888ad0d38d814ac293,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727725226432061841,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qv7n8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8162826e-db14-46b9-93f2-456169ccfb0d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6b2eb356f364b36c053fa5a0a1c21d994a9edc83b54fdd58a38023aea0e8013,PodSandboxId:5d866c50845926549f01df87a9908307213fc5caa20603d75bdd4c898c23d1c3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172772
5209633050557,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-cdn25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b344652c-decb-4b68-9eb4-dd034008cf98,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34fdddbc2729cc844420cf24fc3341fed3211c151111cf0f43b8a87ed1b078ab,PodSandboxId:44e738ed93b01a10a8ff2fe7b585def59079d101143e4555486329cd7fcc73b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727725171524308003,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf253e6d-52dd-4bbf-a505-61269b1bb4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2f669f59ff8429d81fb4f5162e27ce06e17473d4605e0d1412e6b895b9ffec,PodSandboxId:7264dffbc56c756580b1699b46a98d026060043f7ded85528176c4468f3e54d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724
c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727725169673865152,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2sl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ef3332d-3ee7-4d76-bbef-2dfc99673515,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4a5712da231889676b696f91670decbc5f5f8c36b118a9dc265d962f5d249a,PodSandboxId:cbd8bbc0b830527874fdbef734642c050e7e6a62986ee8cdf383f82424b3b1c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727725167873622399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wgjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2646cb6-ecf8-4e44-9d48-b49eead7d727,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611b55895a7c3a5335fbb46b041625f86ca6d6031352bcde4b032dab9de47e67,PodSandboxId:472730560a69cb865a7de097b81e5d7c46896bf3dfef03d491afa5c9add05b76,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915
af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727725156408359954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 509234ffc60223733ef52b2009dbce73,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f613c2d90480ee1ae214e03080c452973dd772a7c6f008a8764350f7e1943eb,PodSandboxId:45990caa9ec749761565324cc3ffda13e0181f617a83701013fa0c2c91467ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea630022894
16a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727725156391153567,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 462c1efc125130690ce0abe7c0d6a433,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f054c208a5bd0eb1494d0e174024a758694fd0eca27fb153e9b6b1ba005ff377,PodSandboxId:f599de907322667aeed83b2705fea682b338d49da5ee13de1790e02e7e4e8a99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,A
nnotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727725156395714900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c22ddcce59702bad76d277171c4f1a8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6ba6b23751a363846407405c025305c70dc80dbf68869142a0ee6929093b01e,PodSandboxId:329303fea433cc4c43cb1ec6a4a7d52fafbb483b77613fefca8466b49fcac7b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727725156374738044,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aaf74d96d0249f06846b94c74ecc9cd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=421e5955-8100-41cb-a115-63f428c33fc1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.493133748Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=37512e09-364c-451b-9de4-9938bc7c735e name=/runtime.v1.RuntimeService/Version
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.493223911Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=37512e09-364c-451b-9de4-9938bc7c735e name=/runtime.v1.RuntimeService/Version
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.494415256Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e63c43ea-3f61-4d90-bb60-336e0fe0f2fa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.495670885Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727725962495645288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e63c43ea-3f61-4d90-bb60-336e0fe0f2fa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.496305958Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=298b2eeb-37ed-4ffb-a0d6-e15bf0cb3ab0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.496359065Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=298b2eeb-37ed-4ffb-a0d6-e15bf0cb3ab0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:52:42 addons-857381 crio[658]: time="2024-09-30 19:52:42.496670491Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d877c2fce2507c1237bb109f59dd388cf7efbb0f76a1402526c779fe7140764,PodSandboxId:5f918ee4dd435117ab962a7aba5a72be46d9c77da93ecebd3656ecafc581b67e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727725955386657028,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-g2hjs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ba8083f-a0ac-459b-8296-63da132aaac1,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:379432eba48bb0fbab152d3f1013d9b37a95e87e739158fb313fa0b78ff8e264,PodSandboxId:7186dc43443428dc9dd097d0de0b6842c2db7d0aa646939da7dfdcaa6c1fd4a9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727725814733655838,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b659e53f-9c5e-499b-b386-a5be26a79083,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a550a25e9f7b3586687046f535b548383c78708b97eaeed7576b35b5dcee1ef,PodSandboxId:2927b71f84ff3f76f3a52a1aecbd72a68cfa19e0cdca879f3210c117c839294f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727725251528262837,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-scvnm,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 5e438281-5451-4290-8c50-14fb79a66185,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46f33863b6d216c85078337b5eefc34ba3141590e24ec8b9dfbb21d10595b84e,PodSandboxId:3e88376f8e4f3c5da30623befddc798d3597e97f13199051087ad81a73199883,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727725226573713791,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-cgdc6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 81717421-6023-4cfb-acff-733a7ea02838,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:831ffd5c60190ad65b735f6a1c699bb486f24c54379a56cc2a077aac0eb4c325,PodSandboxId:f002aa1c3285a2c33f423dfce6f5f97d16dbd6ad2adcb4888ad0d38d814ac293,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727725226432061841,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qv7n8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8162826e-db14-46b9-93f2-456169ccfb0d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6b2eb356f364b36c053fa5a0a1c21d994a9edc83b54fdd58a38023aea0e8013,PodSandboxId:5d866c50845926549f01df87a9908307213fc5caa20603d75bdd4c898c23d1c3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172772
5209633050557,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-cdn25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b344652c-decb-4b68-9eb4-dd034008cf98,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34fdddbc2729cc844420cf24fc3341fed3211c151111cf0f43b8a87ed1b078ab,PodSandboxId:44e738ed93b01a10a8ff2fe7b585def59079d101143e4555486329cd7fcc73b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727725171524308003,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf253e6d-52dd-4bbf-a505-61269b1bb4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2f669f59ff8429d81fb4f5162e27ce06e17473d4605e0d1412e6b895b9ffec,PodSandboxId:7264dffbc56c756580b1699b46a98d026060043f7ded85528176c4468f3e54d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724
c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727725169673865152,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2sl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ef3332d-3ee7-4d76-bbef-2dfc99673515,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4a5712da231889676b696f91670decbc5f5f8c36b118a9dc265d962f5d249a,PodSandboxId:cbd8bbc0b830527874fdbef734642c050e7e6a62986ee8cdf383f82424b3b1c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727725167873622399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wgjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2646cb6-ecf8-4e44-9d48-b49eead7d727,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611b55895a7c3a5335fbb46b041625f86ca6d6031352bcde4b032dab9de47e67,PodSandboxId:472730560a69cb865a7de097b81e5d7c46896bf3dfef03d491afa5c9add05b76,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915
af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727725156408359954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 509234ffc60223733ef52b2009dbce73,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f613c2d90480ee1ae214e03080c452973dd772a7c6f008a8764350f7e1943eb,PodSandboxId:45990caa9ec749761565324cc3ffda13e0181f617a83701013fa0c2c91467ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea630022894
16a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727725156391153567,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 462c1efc125130690ce0abe7c0d6a433,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f054c208a5bd0eb1494d0e174024a758694fd0eca27fb153e9b6b1ba005ff377,PodSandboxId:f599de907322667aeed83b2705fea682b338d49da5ee13de1790e02e7e4e8a99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,A
nnotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727725156395714900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c22ddcce59702bad76d277171c4f1a8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6ba6b23751a363846407405c025305c70dc80dbf68869142a0ee6929093b01e,PodSandboxId:329303fea433cc4c43cb1ec6a4a7d52fafbb483b77613fefca8466b49fcac7b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727725156374738044,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aaf74d96d0249f06846b94c74ecc9cd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=298b2eeb-37ed-4ffb-a0d6-e15bf0cb3ab0 name=/runtime.v1.RuntimeService/ListContainers
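	The repeated Version, ImageFsInfo and ListContainers requests above are typical of the kubelet's periodic CRI polling, which is why the same container list comes back unchanged several times within the same second. If the node from this run is still up, the same data can be pulled interactively; a minimal sketch, assuming the addons-857381 profile taken from this log and that crictl is available inside the minikube VM (as it normally is):
	
	    minikube ssh -p addons-857381 "sudo crictl version"
	    minikube ssh -p addons-857381 "sudo crictl imagefsinfo"
	    minikube ssh -p addons-857381 "sudo crictl ps -a"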
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3d877c2fce250       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   5f918ee4dd435       hello-world-app-55bf9c44b4-g2hjs
	379432eba48bb       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              2 minutes ago       Running             nginx                     0                   7186dc4344342       nginx
	0a550a25e9f7b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 11 minutes ago      Running             gcp-auth                  0                   2927b71f84ff3       gcp-auth-89d5ffd79-scvnm
	46f33863b6d21       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              patch                     0                   3e88376f8e4f3       ingress-nginx-admission-patch-cgdc6
	831ffd5c60190       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              create                    0                   f002aa1c3285a       ingress-nginx-admission-create-qv7n8
	c6b2eb356f364       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        12 minutes ago      Running             metrics-server            0                   5d866c5084592       metrics-server-84c5f94fbc-cdn25
	34fdddbc2729c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             13 minutes ago      Running             storage-provisioner       0                   44e738ed93b01       storage-provisioner
	8a2f669f59ff8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             13 minutes ago      Running             coredns                   0                   7264dffbc56c7       coredns-7c65d6cfc9-v2sl5
	cd4a5712da231       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             13 minutes ago      Running             kube-proxy                0                   cbd8bbc0b8305       kube-proxy-wgjdg
	611b55895a7c3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             13 minutes ago      Running             etcd                      0                   472730560a69c       etcd-addons-857381
	f054c208a5bd0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             13 minutes ago      Running             kube-controller-manager   0                   f599de9073226       kube-controller-manager-addons-857381
	0f613c2d90480       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             13 minutes ago      Running             kube-scheduler            0                   45990caa9ec74       kube-scheduler-addons-857381
	e6ba6b23751a3       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             13 minutes ago      Running             kube-apiserver            0                   329303fea433c       kube-apiserver-addons-857381
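	The container status table is the runtime's own listing and matches the ListContainers responses above; the CREATED column lines up with the CreatedAt nanosecond timestamps in those responses (hello-world-app's CreatedAt 1727725955386657028 truncates to epoch second 1727725955, i.e. 2024-09-30 19:52:35 UTC, 7 seconds before the 19:52:42 log entries). A sketch for regenerating an equivalent view while the cluster is still running, using the profile and context names from this report:
	
	    minikube ssh -p addons-857381 "sudo crictl ps -a"
	    kubectl --context addons-857381 get pods -A -o wide
	    date -u -d @1727725955    # GNU date; converts the truncated CreatedAt to wall-clock time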
	
	
	==> coredns [8a2f669f59ff8429d81fb4f5162e27ce06e17473d4605e0d1412e6b895b9ffec] <==
	[INFO] 127.0.0.1:57266 - 46113 "HINFO IN 4563711597832070733.7464152516972830378. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012863189s
	[INFO] 10.244.0.7:41266 - 20553 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.000327187s
	[INFO] 10.244.0.7:41266 - 47123 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.0007627s
	[INFO] 10.244.0.7:41266 - 44256 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000120493s
	[INFO] 10.244.0.7:41266 - 8839 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000082266s
	[INFO] 10.244.0.7:41266 - 45651 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000085479s
	[INFO] 10.244.0.7:41266 - 55882 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000231828s
	[INFO] 10.244.0.7:41266 - 16528 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000127235s
	[INFO] 10.244.0.7:41266 - 22884 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000079062s
	[INFO] 10.244.0.7:58608 - 46632 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000093178s
	[INFO] 10.244.0.7:58608 - 46894 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000048081s
	[INFO] 10.244.0.7:53470 - 3911 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066274s
	[INFO] 10.244.0.7:53470 - 3656 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000054504s
	[INFO] 10.244.0.7:34130 - 26559 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059796s
	[INFO] 10.244.0.7:34130 - 26354 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043427s
	[INFO] 10.244.0.7:40637 - 48484 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000044485s
	[INFO] 10.244.0.7:40637 - 48313 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000050997s
	[INFO] 10.244.0.21:43040 - 43581 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00046625s
	[INFO] 10.244.0.21:55023 - 19308 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000074371s
	[INFO] 10.244.0.21:45685 - 26448 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000122686s
	[INFO] 10.244.0.21:43520 - 19830 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000076449s
	[INFO] 10.244.0.21:37619 - 36517 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000132562s
	[INFO] 10.244.0.21:43029 - 472 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000279272s
	[INFO] 10.244.0.21:58516 - 17196 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002205188s
	[INFO] 10.244.0.21:42990 - 49732 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002642341s
	
	
	==> describe nodes <==
	Name:               addons-857381
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-857381
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=addons-857381
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T19_39_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-857381
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 19:39:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-857381
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 19:52:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 19:50:25 +0000   Mon, 30 Sep 2024 19:39:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 19:50:25 +0000   Mon, 30 Sep 2024 19:39:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 19:50:25 +0000   Mon, 30 Sep 2024 19:39:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 19:50:25 +0000   Mon, 30 Sep 2024 19:39:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.16
	  Hostname:    addons-857381
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 25d9982bd002458384094f49961bbdf8
	  System UUID:                25d9982b-d002-4583-8409-4f49961bbdf8
	  Boot ID:                    b5f01af6-3227-4822-ba41-5ad95d8a7eaf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-g2hjs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  gcp-auth                    gcp-auth-89d5ffd79-scvnm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-v2sl5                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 etcd-addons-857381                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-857381             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-857381    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-wgjdg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-857381             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-84c5f94fbc-cdn25          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         13m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node addons-857381 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node addons-857381 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node addons-857381 status is now: NodeHasSufficientPID
	  Normal  NodeReady                13m   kubelet          Node addons-857381 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node addons-857381 event: Registered Node addons-857381 in Controller
	
	
	==> dmesg <==
	[  +0.986942] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.027017] kauditd_printk_skb: 123 callbacks suppressed
	[  +5.125991] kauditd_printk_skb: 110 callbacks suppressed
	[ +10.689942] kauditd_printk_skb: 62 callbacks suppressed
	[Sep30 19:40] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.674801] kauditd_printk_skb: 24 callbacks suppressed
	[ +12.773296] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.640929] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.302122] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.224814] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.475445] kauditd_printk_skb: 25 callbacks suppressed
	[  +8.472390] kauditd_printk_skb: 6 callbacks suppressed
	[Sep30 19:41] kauditd_printk_skb: 6 callbacks suppressed
	[Sep30 19:49] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.016133] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.573920] kauditd_printk_skb: 13 callbacks suppressed
	[ +17.576553] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.137626] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.049333] kauditd_printk_skb: 15 callbacks suppressed
	[  +9.481275] kauditd_printk_skb: 64 callbacks suppressed
	[Sep30 19:50] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.792200] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.011966] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.478236] kauditd_printk_skb: 3 callbacks suppressed
	[Sep30 19:52] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [611b55895a7c3a5335fbb46b041625f86ca6d6031352bcde4b032dab9de47e67] <==
	{"level":"info","ts":"2024-09-30T19:40:33.282237Z","caller":"traceutil/trace.go:171","msg":"trace[1440135749] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1021; }","duration":"203.229379ms","start":"2024-09-30T19:40:33.079003Z","end":"2024-09-30T19:40:33.282232Z","steps":["trace[1440135749] 'agreement among raft nodes before linearized reading'  (duration: 203.162702ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T19:40:41.965188Z","caller":"traceutil/trace.go:171","msg":"trace[63557113] transaction","detail":"{read_only:false; response_revision:1075; number_of_response:1; }","duration":"103.358472ms","start":"2024-09-30T19:40:41.861805Z","end":"2024-09-30T19:40:41.965164Z","steps":["trace[63557113] 'process raft request'  (duration: 103.177417ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:40:45.024163Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"267.653381ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T19:40:45.024367Z","caller":"traceutil/trace.go:171","msg":"trace[665645630] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1080; }","duration":"268.162297ms","start":"2024-09-30T19:40:44.756192Z","end":"2024-09-30T19:40:45.024355Z","steps":["trace[665645630] 'range keys from in-memory index tree'  (duration: 267.637437ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:40:45.024639Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"228.464576ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T19:40:45.024814Z","caller":"traceutil/trace.go:171","msg":"trace[1197247651] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1080; }","duration":"228.698131ms","start":"2024-09-30T19:40:44.795971Z","end":"2024-09-30T19:40:45.024669Z","steps":["trace[1197247651] 'range keys from in-memory index tree'  (duration: 228.42242ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:40:45.024764Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"509.83424ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T19:40:45.024932Z","caller":"traceutil/trace.go:171","msg":"trace[1982350029] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:1080; }","duration":"510.003594ms","start":"2024-09-30T19:40:44.514921Z","end":"2024-09-30T19:40:45.024925Z","steps":["trace[1982350029] 'count revisions from in-memory index tree'  (duration: 509.784533ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:40:45.024960Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T19:40:44.514884Z","time spent":"510.067329ms","remote":"127.0.0.1:40802","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":28,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true "}
	{"level":"warn","ts":"2024-09-30T19:40:45.025722Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.967655ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T19:40:45.025980Z","caller":"traceutil/trace.go:171","msg":"trace[1921436978] range","detail":"{range_begin:/registry/validatingadmissionpolicybindings/; range_end:/registry/validatingadmissionpolicybindings0; response_count:0; response_revision:1080; }","duration":"103.205459ms","start":"2024-09-30T19:40:44.922740Z","end":"2024-09-30T19:40:45.025946Z","steps":["trace[1921436978] 'count revisions from in-memory index tree'  (duration: 102.824591ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:40:45.027664Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"503.881417ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T19:40:45.027743Z","caller":"traceutil/trace.go:171","msg":"trace[1637850638] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1080; }","duration":"503.963128ms","start":"2024-09-30T19:40:44.523772Z","end":"2024-09-30T19:40:45.027735Z","steps":["trace[1637850638] 'range keys from in-memory index tree'  (duration: 503.748159ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:40:45.027813Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T19:40:44.523734Z","time spent":"504.023771ms","remote":"127.0.0.1:40756","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-09-30T19:49:17.549046Z","caller":"traceutil/trace.go:171","msg":"trace[477247537] linearizableReadLoop","detail":"{readStateIndex:2110; appliedIndex:2109; }","duration":"332.343416ms","start":"2024-09-30T19:49:17.216678Z","end":"2024-09-30T19:49:17.549021Z","steps":["trace[477247537] 'read index received'  (duration: 332.162445ms)","trace[477247537] 'applied index is now lower than readState.Index'  (duration: 180.324µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-30T19:49:17.549230Z","caller":"traceutil/trace.go:171","msg":"trace[487588167] transaction","detail":"{read_only:false; response_revision:1964; number_of_response:1; }","duration":"416.883354ms","start":"2024-09-30T19:49:17.132337Z","end":"2024-09-30T19:49:17.549220Z","steps":["trace[487588167] 'process raft request'  (duration: 416.547999ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:49:17.549391Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T19:49:17.132321Z","time spent":"416.931927ms","remote":"127.0.0.1:40530","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":25,"response count":0,"response size":39,"request content":"compare:<key:\"compact_rev_key\" version:1 > success:<request_put:<key:\"compact_rev_key\" value_size:4 >> failure:<request_range:<key:\"compact_rev_key\" > >"}
	{"level":"warn","ts":"2024-09-30T19:49:17.549718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.534401ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T19:49:17.550030Z","caller":"traceutil/trace.go:171","msg":"trace[1640880640] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1964; }","duration":"189.856846ms","start":"2024-09-30T19:49:17.360156Z","end":"2024-09-30T19:49:17.550013Z","steps":["trace[1640880640] 'agreement among raft nodes before linearized reading'  (duration: 189.426494ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:49:17.549806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"333.130902ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/headlamp\" ","response":"range_response_count:1 size:596"}
	{"level":"info","ts":"2024-09-30T19:49:17.550375Z","caller":"traceutil/trace.go:171","msg":"trace[30748080] range","detail":"{range_begin:/registry/namespaces/headlamp; range_end:; response_count:1; response_revision:1964; }","duration":"333.699236ms","start":"2024-09-30T19:49:17.216666Z","end":"2024-09-30T19:49:17.550366Z","steps":["trace[30748080] 'agreement among raft nodes before linearized reading'  (duration: 333.066743ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:49:17.550527Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T19:49:17.216625Z","time spent":"333.888235ms","remote":"127.0.0.1:40674","response type":"/etcdserverpb.KV/Range","request count":0,"request size":31,"response count":1,"response size":619,"request content":"key:\"/registry/namespaces/headlamp\" "}
	{"level":"info","ts":"2024-09-30T19:49:17.560569Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1467}
	{"level":"info","ts":"2024-09-30T19:49:17.674282Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1467,"took":"113.151678ms","hash":2336021825,"current-db-size-bytes":6635520,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":3395584,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2024-09-30T19:49:17.674799Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2336021825,"revision":1467,"compact-revision":-1}
	
	
	==> gcp-auth [0a550a25e9f7b3586687046f535b548383c78708b97eaeed7576b35b5dcee1ef] <==
	2024/09/30 19:40:55 Ready to write response ...
	2024/09/30 19:40:55 Ready to marshal response ...
	2024/09/30 19:40:55 Ready to write response ...
	2024/09/30 19:48:58 Ready to marshal response ...
	2024/09/30 19:48:58 Ready to write response ...
	2024/09/30 19:48:58 Ready to marshal response ...
	2024/09/30 19:48:58 Ready to write response ...
	2024/09/30 19:48:58 Ready to marshal response ...
	2024/09/30 19:48:58 Ready to write response ...
	2024/09/30 19:49:08 Ready to marshal response ...
	2024/09/30 19:49:08 Ready to write response ...
	2024/09/30 19:49:10 Ready to marshal response ...
	2024/09/30 19:49:10 Ready to write response ...
	2024/09/30 19:49:35 Ready to marshal response ...
	2024/09/30 19:49:35 Ready to write response ...
	2024/09/30 19:49:35 Ready to marshal response ...
	2024/09/30 19:49:35 Ready to write response ...
	2024/09/30 19:49:38 Ready to marshal response ...
	2024/09/30 19:49:38 Ready to write response ...
	2024/09/30 19:49:48 Ready to marshal response ...
	2024/09/30 19:49:48 Ready to write response ...
	2024/09/30 19:50:12 Ready to marshal response ...
	2024/09/30 19:50:12 Ready to write response ...
	2024/09/30 19:52:32 Ready to marshal response ...
	2024/09/30 19:52:32 Ready to write response ...
	
	
	==> kernel <==
	 19:52:42 up 13 min,  0 users,  load average: 0.23, 0.55, 0.46
	Linux addons-857381 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e6ba6b23751a363846407405c025305c70dc80dbf68869142a0ee6929093b01e] <==
	E0930 19:50:00.209878       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:01.218394       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:02.236227       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:03.254865       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:04.268899       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:04.476888       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:05.276046       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:06.287284       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:07.295293       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:08.316036       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0930 19:50:08.612418       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	E0930 19:50:09.323889       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	W0930 19:50:09.653652       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0930 19:50:10.334217       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:11.343087       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0930 19:50:12.081198       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0930 19:50:12.262496       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.189.44"}
	E0930 19:50:12.352591       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:13.361386       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:14.369899       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:15.377579       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:16.384628       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:17.392881       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:18.400366       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0930 19:52:32.708585       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.184.74"}
	
	
	==> kube-controller-manager [f054c208a5bd0eb1494d0e174024a758694fd0eca27fb153e9b6b1ba005ff377] <==
	W0930 19:51:24.330935       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:51:24.331101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 19:51:27.102250       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:51:27.102290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 19:51:56.974754       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:51:56.974812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 19:51:57.842109       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:51:57.842309       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 19:52:02.696051       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:52:02.696213       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 19:52:04.478781       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:52:04.478835       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 19:52:28.220328       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:52:28.220433       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0930 19:52:32.532287       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="65.328588ms"
	I0930 19:52:32.546190       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="13.703013ms"
	I0930 19:52:32.568582       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="22.331933ms"
	I0930 19:52:32.568712       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="65.874µs"
	W0930 19:52:34.038351       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:52:34.038400       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0930 19:52:34.464602       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0930 19:52:34.471094       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="8.574µs"
	I0930 19:52:34.479024       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0930 19:52:35.834624       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="10.755927ms"
	I0930 19:52:35.834865       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="69.612µs"
	
	
	==> kube-proxy [cd4a5712da231889676b696f91670decbc5f5f8c36b118a9dc265d962f5d249a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 19:39:29.990587       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 19:39:30.058676       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.16"]
	E0930 19:39:30.058750       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 19:39:30.362730       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 19:39:30.362795       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 19:39:30.362820       1 server_linux.go:169] "Using iptables Proxier"
	I0930 19:39:30.416095       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 19:39:30.416411       1 server.go:483] "Version info" version="v1.31.1"
	I0930 19:39:30.416479       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 19:39:30.470892       1 config.go:199] "Starting service config controller"
	I0930 19:39:30.470932       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 19:39:30.470961       1 config.go:105] "Starting endpoint slice config controller"
	I0930 19:39:30.470965       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 19:39:30.471620       1 config.go:328] "Starting node config controller"
	I0930 19:39:30.471641       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 19:39:30.571571       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 19:39:30.571587       1 shared_informer.go:320] Caches are synced for service config
	I0930 19:39:30.573064       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0f613c2d90480ee1ae214e03080c452973dd772a7c6f008a8764350f7e1943eb] <==
	E0930 19:39:18.783718       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0930 19:39:18.783738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:18.783806       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0930 19:39:18.783818       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:19.639835       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 19:39:19.639943       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0930 19:39:19.654740       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0930 19:39:19.654792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:19.667324       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0930 19:39:19.667422       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:19.774980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0930 19:39:19.775022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:19.818960       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0930 19:39:19.819059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:19.876197       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0930 19:39:19.876273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:19.888046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0930 19:39:19.888095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:19.898349       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0930 19:39:19.898413       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:19.915746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0930 19:39:19.915953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:20.008659       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0930 19:39:20.008707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0930 19:39:21.870985       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 19:52:32 addons-857381 kubelet[1195]: I0930 19:52:32.614329    1195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6ba8083f-a0ac-459b-8296-63da132aaac1-gcp-creds\") pod \"hello-world-app-55bf9c44b4-g2hjs\" (UID: \"6ba8083f-a0ac-459b-8296-63da132aaac1\") " pod="default/hello-world-app-55bf9c44b4-g2hjs"
	Sep 30 19:52:33 addons-857381 kubelet[1195]: I0930 19:52:33.720547    1195 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rltj\" (UniqueName: \"kubernetes.io/projected/e1217c30-4e9c-43fa-a3f6-0a640781c5f8-kube-api-access-7rltj\") pod \"e1217c30-4e9c-43fa-a3f6-0a640781c5f8\" (UID: \"e1217c30-4e9c-43fa-a3f6-0a640781c5f8\") "
	Sep 30 19:52:33 addons-857381 kubelet[1195]: I0930 19:52:33.723551    1195 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1217c30-4e9c-43fa-a3f6-0a640781c5f8-kube-api-access-7rltj" (OuterVolumeSpecName: "kube-api-access-7rltj") pod "e1217c30-4e9c-43fa-a3f6-0a640781c5f8" (UID: "e1217c30-4e9c-43fa-a3f6-0a640781c5f8"). InnerVolumeSpecName "kube-api-access-7rltj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 19:52:33 addons-857381 kubelet[1195]: I0930 19:52:33.798591    1195 scope.go:117] "RemoveContainer" containerID="fbbc7c85eaec24fb4d15cf79a7766331aec956ce9799202bccf45c4baadd4428"
	Sep 30 19:52:33 addons-857381 kubelet[1195]: I0930 19:52:33.821841    1195 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7rltj\" (UniqueName: \"kubernetes.io/projected/e1217c30-4e9c-43fa-a3f6-0a640781c5f8-kube-api-access-7rltj\") on node \"addons-857381\" DevicePath \"\""
	Sep 30 19:52:33 addons-857381 kubelet[1195]: I0930 19:52:33.836252    1195 scope.go:117] "RemoveContainer" containerID="fbbc7c85eaec24fb4d15cf79a7766331aec956ce9799202bccf45c4baadd4428"
	Sep 30 19:52:33 addons-857381 kubelet[1195]: E0930 19:52:33.837152    1195 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbbc7c85eaec24fb4d15cf79a7766331aec956ce9799202bccf45c4baadd4428\": container with ID starting with fbbc7c85eaec24fb4d15cf79a7766331aec956ce9799202bccf45c4baadd4428 not found: ID does not exist" containerID="fbbc7c85eaec24fb4d15cf79a7766331aec956ce9799202bccf45c4baadd4428"
	Sep 30 19:52:33 addons-857381 kubelet[1195]: I0930 19:52:33.837198    1195 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbbc7c85eaec24fb4d15cf79a7766331aec956ce9799202bccf45c4baadd4428"} err="failed to get container status \"fbbc7c85eaec24fb4d15cf79a7766331aec956ce9799202bccf45c4baadd4428\": rpc error: code = NotFound desc = could not find container \"fbbc7c85eaec24fb4d15cf79a7766331aec956ce9799202bccf45c4baadd4428\": container with ID starting with fbbc7c85eaec24fb4d15cf79a7766331aec956ce9799202bccf45c4baadd4428 not found: ID does not exist"
	Sep 30 19:52:35 addons-857381 kubelet[1195]: I0930 19:52:35.407043    1195 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8162826e-db14-46b9-93f2-456169ccfb0d" path="/var/lib/kubelet/pods/8162826e-db14-46b9-93f2-456169ccfb0d/volumes"
	Sep 30 19:52:35 addons-857381 kubelet[1195]: I0930 19:52:35.407923    1195 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81717421-6023-4cfb-acff-733a7ea02838" path="/var/lib/kubelet/pods/81717421-6023-4cfb-acff-733a7ea02838/volumes"
	Sep 30 19:52:35 addons-857381 kubelet[1195]: I0930 19:52:35.408393    1195 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1217c30-4e9c-43fa-a3f6-0a640781c5f8" path="/var/lib/kubelet/pods/e1217c30-4e9c-43fa-a3f6-0a640781c5f8/volumes"
	Sep 30 19:52:37 addons-857381 kubelet[1195]: I0930 19:52:37.748881    1195 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4cb4\" (UniqueName: \"kubernetes.io/projected/f16cd6ff-05a8-47e5-963e-ef20ce165eeb-kube-api-access-d4cb4\") pod \"f16cd6ff-05a8-47e5-963e-ef20ce165eeb\" (UID: \"f16cd6ff-05a8-47e5-963e-ef20ce165eeb\") "
	Sep 30 19:52:37 addons-857381 kubelet[1195]: I0930 19:52:37.748949    1195 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f16cd6ff-05a8-47e5-963e-ef20ce165eeb-webhook-cert\") pod \"f16cd6ff-05a8-47e5-963e-ef20ce165eeb\" (UID: \"f16cd6ff-05a8-47e5-963e-ef20ce165eeb\") "
	Sep 30 19:52:37 addons-857381 kubelet[1195]: I0930 19:52:37.750910    1195 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f16cd6ff-05a8-47e5-963e-ef20ce165eeb-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f16cd6ff-05a8-47e5-963e-ef20ce165eeb" (UID: "f16cd6ff-05a8-47e5-963e-ef20ce165eeb"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 30 19:52:37 addons-857381 kubelet[1195]: I0930 19:52:37.751893    1195 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f16cd6ff-05a8-47e5-963e-ef20ce165eeb-kube-api-access-d4cb4" (OuterVolumeSpecName: "kube-api-access-d4cb4") pod "f16cd6ff-05a8-47e5-963e-ef20ce165eeb" (UID: "f16cd6ff-05a8-47e5-963e-ef20ce165eeb"). InnerVolumeSpecName "kube-api-access-d4cb4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 19:52:37 addons-857381 kubelet[1195]: I0930 19:52:37.825385    1195 scope.go:117] "RemoveContainer" containerID="6a2d0f08874d9e73873481108ad4b7c2ace12dbf72ff01f34def4fc1e5cfff5d"
	Sep 30 19:52:37 addons-857381 kubelet[1195]: I0930 19:52:37.843932    1195 scope.go:117] "RemoveContainer" containerID="6a2d0f08874d9e73873481108ad4b7c2ace12dbf72ff01f34def4fc1e5cfff5d"
	Sep 30 19:52:37 addons-857381 kubelet[1195]: E0930 19:52:37.844844    1195 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a2d0f08874d9e73873481108ad4b7c2ace12dbf72ff01f34def4fc1e5cfff5d\": container with ID starting with 6a2d0f08874d9e73873481108ad4b7c2ace12dbf72ff01f34def4fc1e5cfff5d not found: ID does not exist" containerID="6a2d0f08874d9e73873481108ad4b7c2ace12dbf72ff01f34def4fc1e5cfff5d"
	Sep 30 19:52:37 addons-857381 kubelet[1195]: I0930 19:52:37.844895    1195 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a2d0f08874d9e73873481108ad4b7c2ace12dbf72ff01f34def4fc1e5cfff5d"} err="failed to get container status \"6a2d0f08874d9e73873481108ad4b7c2ace12dbf72ff01f34def4fc1e5cfff5d\": rpc error: code = NotFound desc = could not find container \"6a2d0f08874d9e73873481108ad4b7c2ace12dbf72ff01f34def4fc1e5cfff5d\": container with ID starting with 6a2d0f08874d9e73873481108ad4b7c2ace12dbf72ff01f34def4fc1e5cfff5d not found: ID does not exist"
	Sep 30 19:52:37 addons-857381 kubelet[1195]: I0930 19:52:37.849510    1195 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-d4cb4\" (UniqueName: \"kubernetes.io/projected/f16cd6ff-05a8-47e5-963e-ef20ce165eeb-kube-api-access-d4cb4\") on node \"addons-857381\" DevicePath \"\""
	Sep 30 19:52:37 addons-857381 kubelet[1195]: I0930 19:52:37.849545    1195 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f16cd6ff-05a8-47e5-963e-ef20ce165eeb-webhook-cert\") on node \"addons-857381\" DevicePath \"\""
	Sep 30 19:52:39 addons-857381 kubelet[1195]: I0930 19:52:39.405888    1195 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f16cd6ff-05a8-47e5-963e-ef20ce165eeb" path="/var/lib/kubelet/pods/f16cd6ff-05a8-47e5-963e-ef20ce165eeb/volumes"
	Sep 30 19:52:41 addons-857381 kubelet[1195]: E0930 19:52:41.403279    1195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="7bbe2897-cb73-4ed6-a221-bebc8545e1cc"
	Sep 30 19:52:41 addons-857381 kubelet[1195]: E0930 19:52:41.774377    1195 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727725961774072185,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 19:52:41 addons-857381 kubelet[1195]: E0930 19:52:41.774418    1195 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727725961774072185,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [34fdddbc2729cc844420cf24fc3341fed3211c151111cf0f43b8a87ed1b078ab] <==
	I0930 19:39:33.155826       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0930 19:39:33.685414       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0930 19:39:33.685583       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0930 19:39:33.816356       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0930 19:39:33.824546       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-857381_08dcb125-dcae-41ac-b31f-3f836116afa4!
	I0930 19:39:33.844765       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c2244a99-76a6-4c70-8326-d7436fd22acb", APIVersion:"v1", ResourceVersion:"651", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-857381_08dcb125-dcae-41ac-b31f-3f836116afa4 became leader
	I0930 19:39:34.127903       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-857381_08dcb125-dcae-41ac-b31f-3f836116afa4!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-857381 -n addons-857381
helpers_test.go:261: (dbg) Run:  kubectl --context addons-857381 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-857381 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-857381 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-857381/192.168.39.16
	Start Time:       Mon, 30 Sep 2024 19:40:55 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k5fk2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-k5fk2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  11m                 default-scheduler  Successfully assigned default/busybox to addons-857381
	  Normal   Pulling    10m (x4 over 11m)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     10m (x4 over 11m)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     10m (x4 over 11m)   kubelet            Error: ErrImagePull
	  Warning  Failed     10m (x6 over 11m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    93s (x44 over 11m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (151.84s)
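The post-mortem above shows the leftover busybox pod stuck in ImagePullBackOff because the kubelet could not authenticate against gcr.io ("unable to retrieve auth token: invalid username/password"); the pod carries the fake project credentials injected by the gcp-auth addon, which is consistent with that error. Two illustrative follow-up checks, not part of the recorded run and assuming the addons-857381 profile is still up:

	# list the recorded pull attempts and back-off events for the pod
	kubectl --context addons-857381 get events -n default --field-selector involvedObject.name=busybox
	# retry the pull directly through CRI-O on the node, bypassing the pod's injected credentials
	minikube -p addons-857381 ssh "sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc"

The second command helps separate a credential-injection problem from general registry reachability, since it pulls without the pod's environment.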

TestAddons/parallel/MetricsServer (331.37s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 3.33948ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-cdn25" [b344652c-decb-4b68-9eb4-dd034008cf98] Running
I0930 19:48:57.828280   14875 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0930 19:48:57.828302   14875 kapi.go:107] duration metric: took 8.304072ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005026302s
addons_test.go:413: (dbg) Run:  kubectl --context addons-857381 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-857381 top pods -n kube-system: exit status 1 (77.497346ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-v2sl5, age: 9m36.904826975s

                                                
                                                
** /stderr **
I0930 19:49:02.906849   14875 retry.go:31] will retry after 2.66755055s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-857381 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-857381 top pods -n kube-system: exit status 1 (69.549288ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-v2sl5, age: 9m39.642792979s

                                                
                                                
** /stderr **
I0930 19:49:05.644783   14875 retry.go:31] will retry after 3.870435788s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-857381 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-857381 top pods -n kube-system: exit status 1 (79.564962ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-v2sl5, age: 9m43.594265721s

                                                
                                                
** /stderr **
I0930 19:49:09.595993   14875 retry.go:31] will retry after 5.522794822s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-857381 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-857381 top pods -n kube-system: exit status 1 (94.476515ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-v2sl5, age: 9m49.211943269s

                                                
                                                
** /stderr **
I0930 19:49:15.213667   14875 retry.go:31] will retry after 8.787375115s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-857381 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-857381 top pods -n kube-system: exit status 1 (65.282101ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-v2sl5, age: 9m58.065093985s

                                                
                                                
** /stderr **
I0930 19:49:24.066753   14875 retry.go:31] will retry after 20.360424823s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-857381 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-857381 top pods -n kube-system: exit status 1 (67.386556ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-v2sl5, age: 10m18.492961494s

                                                
                                                
** /stderr **
I0930 19:49:44.494871   14875 retry.go:31] will retry after 30.693684148s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-857381 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-857381 top pods -n kube-system: exit status 1 (72.433355ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-v2sl5, age: 10m49.259395039s

                                                
                                                
** /stderr **
I0930 19:50:15.261949   14875 retry.go:31] will retry after 35.487502542s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-857381 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-857381 top pods -n kube-system: exit status 1 (65.427692ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-v2sl5, age: 11m24.81398461s

                                                
                                                
** /stderr **
I0930 19:50:50.815890   14875 retry.go:31] will retry after 46.670930499s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-857381 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-857381 top pods -n kube-system: exit status 1 (67.104999ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-v2sl5, age: 12m11.553906143s

                                                
                                                
** /stderr **
I0930 19:51:37.555984   14875 retry.go:31] will retry after 1m25.348902797s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-857381 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-857381 top pods -n kube-system: exit status 1 (63.806555ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-v2sl5, age: 13m36.969786883s

                                                
                                                
** /stderr **
I0930 19:53:02.971615   14875 retry.go:31] will retry after 1m23.368161267s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-857381 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-857381 top pods -n kube-system: exit status 1 (70.151348ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-v2sl5, age: 15m0.408914576s

                                                
                                                
** /stderr **
addons_test.go:427: failed checking metric server: exit status 1
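The sequence above comes from minikube's retry helper (retry.go in the log prefixes), which reruns `kubectl top pods` with a growing delay until metrics become available or the overall budget runs out. A minimal sketch of that pattern, assuming nothing about the project's actual implementation beyond the behaviour visible in the log (fixed command, growing jittered delays, eventual give-up):

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// runWithBackoff reruns the command until it succeeds or maxWait elapses,
// sleeping a growing, jittered interval between attempts, similar in spirit
// to the "will retry after ..." lines above. Illustrative only.
func runWithBackoff(name string, args []string, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 2 * time.Second
	for {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up after %s: %v\n%s", maxWait, err, out)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}

func main() {
	// Context and namespace taken from the log above.
	if err := runWithBackoff("kubectl",
		[]string{"--context", "addons-857381", "top", "pods", "-n", "kube-system"},
		5*time.Minute); err != nil {
		fmt.Println(err)
	}
}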
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-857381 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-857381 -n addons-857381
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-857381 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-857381 logs -n 25: (1.367686135s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-153563                                                                     | download-only-153563 | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC | 30 Sep 24 19:38 UTC |
	| delete  | -p download-only-816611                                                                     | download-only-816611 | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC | 30 Sep 24 19:38 UTC |
	| delete  | -p download-only-153563                                                                     | download-only-153563 | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC | 30 Sep 24 19:38 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-728092 | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC |                     |
	|         | binary-mirror-728092                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33837                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-728092                                                                     | binary-mirror-728092 | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC | 30 Sep 24 19:38 UTC |
	| addons  | disable dashboard -p                                                                        | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC |                     |
	|         | addons-857381                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC |                     |
	|         | addons-857381                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-857381 --wait=true                                                                | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC | 30 Sep 24 19:40 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:48 UTC | 30 Sep 24 19:48 UTC |
	|         | -p addons-857381                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-857381 addons disable                                                                | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:49 UTC | 30 Sep 24 19:49 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-857381 addons disable                                                                | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:49 UTC | 30 Sep 24 19:49 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:49 UTC | 30 Sep 24 19:49 UTC |
	|         | -p addons-857381                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-857381 ssh cat                                                                       | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:49 UTC | 30 Sep 24 19:49 UTC |
	|         | /opt/local-path-provisioner/pvc-2b406b11-e501-447a-83ed-ef44d83e41ee_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-857381 addons                                                                        | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:49 UTC | 30 Sep 24 19:49 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-857381 addons disable                                                                | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:49 UTC | 30 Sep 24 19:50 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-857381 addons                                                                        | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:49 UTC | 30 Sep 24 19:49 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:50 UTC | 30 Sep 24 19:50 UTC |
	|         | addons-857381                                                                               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:50 UTC | 30 Sep 24 19:50 UTC |
	|         | addons-857381                                                                               |                      |         |         |                     |                     |
	| ip      | addons-857381 ip                                                                            | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:50 UTC | 30 Sep 24 19:50 UTC |
	| addons  | addons-857381 addons disable                                                                | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:50 UTC | 30 Sep 24 19:50 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-857381 ssh curl -s                                                                   | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:50 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-857381 ip                                                                            | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:52 UTC | 30 Sep 24 19:52 UTC |
	| addons  | addons-857381 addons disable                                                                | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:52 UTC | 30 Sep 24 19:52 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-857381 addons disable                                                                | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:52 UTC | 30 Sep 24 19:52 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-857381 addons                                                                        | addons-857381        | jenkins | v1.34.0 | 30 Sep 24 19:54 UTC | 30 Sep 24 19:54 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 19:38:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 19:38:39.043134   15584 out.go:345] Setting OutFile to fd 1 ...
	I0930 19:38:39.043248   15584 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 19:38:39.043257   15584 out.go:358] Setting ErrFile to fd 2...
	I0930 19:38:39.043261   15584 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 19:38:39.043448   15584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 19:38:39.044075   15584 out.go:352] Setting JSON to false
	I0930 19:38:39.044883   15584 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1262,"bootTime":1727723857,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 19:38:39.044972   15584 start.go:139] virtualization: kvm guest
	I0930 19:38:39.046933   15584 out.go:177] * [addons-857381] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 19:38:39.048464   15584 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 19:38:39.048463   15584 notify.go:220] Checking for updates...
	I0930 19:38:39.051048   15584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 19:38:39.052632   15584 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 19:38:39.054188   15584 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:38:39.055634   15584 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 19:38:39.056997   15584 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 19:38:39.058475   15584 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 19:38:39.092364   15584 out.go:177] * Using the kvm2 driver based on user configuration
	I0930 19:38:39.093649   15584 start.go:297] selected driver: kvm2
	I0930 19:38:39.093667   15584 start.go:901] validating driver "kvm2" against <nil>
	I0930 19:38:39.093686   15584 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 19:38:39.094418   15584 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 19:38:39.094502   15584 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 19:38:39.109335   15584 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 19:38:39.109387   15584 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 19:38:39.109649   15584 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 19:38:39.109675   15584 cni.go:84] Creating CNI manager for ""
	I0930 19:38:39.109717   15584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 19:38:39.109725   15584 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 19:38:39.109774   15584 start.go:340] cluster config:
	{Name:addons-857381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-857381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 19:38:39.109868   15584 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 19:38:39.111680   15584 out.go:177] * Starting "addons-857381" primary control-plane node in "addons-857381" cluster
	I0930 19:38:39.113118   15584 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 19:38:39.113163   15584 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 19:38:39.113173   15584 cache.go:56] Caching tarball of preloaded images
	I0930 19:38:39.113256   15584 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 19:38:39.113267   15584 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 19:38:39.113567   15584 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/config.json ...
	I0930 19:38:39.113591   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/config.json: {Name:mk4745e18a242e742e59d464f9dbb1a3421bf546 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:38:39.113723   15584 start.go:360] acquireMachinesLock for addons-857381: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 19:38:39.113764   15584 start.go:364] duration metric: took 29.496µs to acquireMachinesLock for "addons-857381"
	I0930 19:38:39.113781   15584 start.go:93] Provisioning new machine with config: &{Name:addons-857381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-857381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 19:38:39.113835   15584 start.go:125] createHost starting for "" (driver="kvm2")
	I0930 19:38:39.115274   15584 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0930 19:38:39.115408   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:38:39.115446   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:38:39.129988   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44615
	I0930 19:38:39.130433   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:38:39.130969   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:38:39.130987   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:38:39.131382   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:38:39.131591   15584 main.go:141] libmachine: (addons-857381) Calling .GetMachineName
	I0930 19:38:39.131741   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:38:39.131909   15584 start.go:159] libmachine.API.Create for "addons-857381" (driver="kvm2")
	I0930 19:38:39.131936   15584 client.go:168] LocalClient.Create starting
	I0930 19:38:39.131981   15584 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem
	I0930 19:38:39.238349   15584 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem
	I0930 19:38:39.522805   15584 main.go:141] libmachine: Running pre-create checks...
	I0930 19:38:39.522832   15584 main.go:141] libmachine: (addons-857381) Calling .PreCreateCheck
	I0930 19:38:39.523321   15584 main.go:141] libmachine: (addons-857381) Calling .GetConfigRaw
	I0930 19:38:39.523777   15584 main.go:141] libmachine: Creating machine...
	I0930 19:38:39.523791   15584 main.go:141] libmachine: (addons-857381) Calling .Create
	I0930 19:38:39.523944   15584 main.go:141] libmachine: (addons-857381) Creating KVM machine...
	I0930 19:38:39.525343   15584 main.go:141] libmachine: (addons-857381) DBG | found existing default KVM network
	I0930 19:38:39.526113   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:39.525972   15606 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I0930 19:38:39.526140   15584 main.go:141] libmachine: (addons-857381) DBG | created network xml: 
	I0930 19:38:39.526149   15584 main.go:141] libmachine: (addons-857381) DBG | <network>
	I0930 19:38:39.526158   15584 main.go:141] libmachine: (addons-857381) DBG |   <name>mk-addons-857381</name>
	I0930 19:38:39.526174   15584 main.go:141] libmachine: (addons-857381) DBG |   <dns enable='no'/>
	I0930 19:38:39.526186   15584 main.go:141] libmachine: (addons-857381) DBG |   
	I0930 19:38:39.526201   15584 main.go:141] libmachine: (addons-857381) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0930 19:38:39.526214   15584 main.go:141] libmachine: (addons-857381) DBG |     <dhcp>
	I0930 19:38:39.526224   15584 main.go:141] libmachine: (addons-857381) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0930 19:38:39.526232   15584 main.go:141] libmachine: (addons-857381) DBG |     </dhcp>
	I0930 19:38:39.526241   15584 main.go:141] libmachine: (addons-857381) DBG |   </ip>
	I0930 19:38:39.526248   15584 main.go:141] libmachine: (addons-857381) DBG |   
	I0930 19:38:39.526254   15584 main.go:141] libmachine: (addons-857381) DBG | </network>
	I0930 19:38:39.526262   15584 main.go:141] libmachine: (addons-857381) DBG | 
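	The XML above is the private libvirt network the kvm2 driver defines before creating the VM. The following is a small sketch that renders the same shape of network definition from parameters (the template and field names are illustrative, not the driver's code); the output could then be handed to libvirt, e.g. via `virsh net-define` and `virsh net-start`.
	
	package main
	
	import (
		"os"
		"text/template"
	)
	
	// networkTmpl mirrors the shape of the libvirt network XML printed above.
	// Field names are illustrative, not the kvm2 driver's actual types.
	const networkTmpl = `<network>
	  <name>{{.Name}}</name>
	  <dns enable='no'/>
	  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
	    <dhcp>
	      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
	    </dhcp>
	  </ip>
	</network>
	`
	
	func main() {
		params := struct {
			Name, Gateway, Netmask, ClientMin, ClientMax string
		}{"mk-addons-857381", "192.168.39.1", "255.255.255.0", "192.168.39.2", "192.168.39.253"}
	
		// Render to stdout; values above are the ones reported in the log.
		t := template.Must(template.New("net").Parse(networkTmpl))
		if err := t.Execute(os.Stdout, params); err != nil {
			panic(err)
		}
	}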
	I0930 19:38:39.531685   15584 main.go:141] libmachine: (addons-857381) DBG | trying to create private KVM network mk-addons-857381 192.168.39.0/24...
	I0930 19:38:39.600904   15584 main.go:141] libmachine: (addons-857381) DBG | private KVM network mk-addons-857381 192.168.39.0/24 created
	I0930 19:38:39.600935   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:39.600853   15606 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:38:39.601042   15584 main.go:141] libmachine: (addons-857381) Setting up store path in /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381 ...
	I0930 19:38:39.601166   15584 main.go:141] libmachine: (addons-857381) Building disk image from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 19:38:39.601204   15584 main.go:141] libmachine: (addons-857381) Downloading /home/jenkins/minikube-integration/19736-7672/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 19:38:39.863167   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:39.863034   15606 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa...
	I0930 19:38:40.117906   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:40.117761   15606 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/addons-857381.rawdisk...
	I0930 19:38:40.117931   15584 main.go:141] libmachine: (addons-857381) DBG | Writing magic tar header
	I0930 19:38:40.117940   15584 main.go:141] libmachine: (addons-857381) DBG | Writing SSH key tar header
	I0930 19:38:40.117948   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:40.117879   15606 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381 ...
	I0930 19:38:40.117964   15584 main.go:141] libmachine: (addons-857381) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381
	I0930 19:38:40.118020   15584 main.go:141] libmachine: (addons-857381) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines
	I0930 19:38:40.118027   15584 main.go:141] libmachine: (addons-857381) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:38:40.118038   15584 main.go:141] libmachine: (addons-857381) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381 (perms=drwx------)
	I0930 19:38:40.118045   15584 main.go:141] libmachine: (addons-857381) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines (perms=drwxr-xr-x)
	I0930 19:38:40.118053   15584 main.go:141] libmachine: (addons-857381) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube (perms=drwxr-xr-x)
	I0930 19:38:40.118058   15584 main.go:141] libmachine: (addons-857381) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672 (perms=drwxrwxr-x)
	I0930 19:38:40.118064   15584 main.go:141] libmachine: (addons-857381) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672
	I0930 19:38:40.118074   15584 main.go:141] libmachine: (addons-857381) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 19:38:40.118079   15584 main.go:141] libmachine: (addons-857381) DBG | Checking permissions on dir: /home/jenkins
	I0930 19:38:40.118085   15584 main.go:141] libmachine: (addons-857381) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 19:38:40.118093   15584 main.go:141] libmachine: (addons-857381) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 19:38:40.118098   15584 main.go:141] libmachine: (addons-857381) Creating domain...
	I0930 19:38:40.118103   15584 main.go:141] libmachine: (addons-857381) DBG | Checking permissions on dir: /home
	I0930 19:38:40.118110   15584 main.go:141] libmachine: (addons-857381) DBG | Skipping /home - not owner
	I0930 19:38:40.119243   15584 main.go:141] libmachine: (addons-857381) define libvirt domain using xml: 
	I0930 19:38:40.119278   15584 main.go:141] libmachine: (addons-857381) <domain type='kvm'>
	I0930 19:38:40.119287   15584 main.go:141] libmachine: (addons-857381)   <name>addons-857381</name>
	I0930 19:38:40.119298   15584 main.go:141] libmachine: (addons-857381)   <memory unit='MiB'>4000</memory>
	I0930 19:38:40.119306   15584 main.go:141] libmachine: (addons-857381)   <vcpu>2</vcpu>
	I0930 19:38:40.119317   15584 main.go:141] libmachine: (addons-857381)   <features>
	I0930 19:38:40.119329   15584 main.go:141] libmachine: (addons-857381)     <acpi/>
	I0930 19:38:40.119339   15584 main.go:141] libmachine: (addons-857381)     <apic/>
	I0930 19:38:40.119347   15584 main.go:141] libmachine: (addons-857381)     <pae/>
	I0930 19:38:40.119350   15584 main.go:141] libmachine: (addons-857381)     
	I0930 19:38:40.119355   15584 main.go:141] libmachine: (addons-857381)   </features>
	I0930 19:38:40.119360   15584 main.go:141] libmachine: (addons-857381)   <cpu mode='host-passthrough'>
	I0930 19:38:40.119365   15584 main.go:141] libmachine: (addons-857381)   
	I0930 19:38:40.119373   15584 main.go:141] libmachine: (addons-857381)   </cpu>
	I0930 19:38:40.119378   15584 main.go:141] libmachine: (addons-857381)   <os>
	I0930 19:38:40.119383   15584 main.go:141] libmachine: (addons-857381)     <type>hvm</type>
	I0930 19:38:40.119387   15584 main.go:141] libmachine: (addons-857381)     <boot dev='cdrom'/>
	I0930 19:38:40.119394   15584 main.go:141] libmachine: (addons-857381)     <boot dev='hd'/>
	I0930 19:38:40.119399   15584 main.go:141] libmachine: (addons-857381)     <bootmenu enable='no'/>
	I0930 19:38:40.119402   15584 main.go:141] libmachine: (addons-857381)   </os>
	I0930 19:38:40.119407   15584 main.go:141] libmachine: (addons-857381)   <devices>
	I0930 19:38:40.119412   15584 main.go:141] libmachine: (addons-857381)     <disk type='file' device='cdrom'>
	I0930 19:38:40.119420   15584 main.go:141] libmachine: (addons-857381)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/boot2docker.iso'/>
	I0930 19:38:40.119431   15584 main.go:141] libmachine: (addons-857381)       <target dev='hdc' bus='scsi'/>
	I0930 19:38:40.119436   15584 main.go:141] libmachine: (addons-857381)       <readonly/>
	I0930 19:38:40.119440   15584 main.go:141] libmachine: (addons-857381)     </disk>
	I0930 19:38:40.119447   15584 main.go:141] libmachine: (addons-857381)     <disk type='file' device='disk'>
	I0930 19:38:40.119453   15584 main.go:141] libmachine: (addons-857381)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 19:38:40.119460   15584 main.go:141] libmachine: (addons-857381)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/addons-857381.rawdisk'/>
	I0930 19:38:40.119467   15584 main.go:141] libmachine: (addons-857381)       <target dev='hda' bus='virtio'/>
	I0930 19:38:40.119472   15584 main.go:141] libmachine: (addons-857381)     </disk>
	I0930 19:38:40.119476   15584 main.go:141] libmachine: (addons-857381)     <interface type='network'>
	I0930 19:38:40.119482   15584 main.go:141] libmachine: (addons-857381)       <source network='mk-addons-857381'/>
	I0930 19:38:40.119497   15584 main.go:141] libmachine: (addons-857381)       <model type='virtio'/>
	I0930 19:38:40.119547   15584 main.go:141] libmachine: (addons-857381)     </interface>
	I0930 19:38:40.119585   15584 main.go:141] libmachine: (addons-857381)     <interface type='network'>
	I0930 19:38:40.119615   15584 main.go:141] libmachine: (addons-857381)       <source network='default'/>
	I0930 19:38:40.119632   15584 main.go:141] libmachine: (addons-857381)       <model type='virtio'/>
	I0930 19:38:40.119647   15584 main.go:141] libmachine: (addons-857381)     </interface>
	I0930 19:38:40.119657   15584 main.go:141] libmachine: (addons-857381)     <serial type='pty'>
	I0930 19:38:40.119668   15584 main.go:141] libmachine: (addons-857381)       <target port='0'/>
	I0930 19:38:40.119681   15584 main.go:141] libmachine: (addons-857381)     </serial>
	I0930 19:38:40.119692   15584 main.go:141] libmachine: (addons-857381)     <console type='pty'>
	I0930 19:38:40.119705   15584 main.go:141] libmachine: (addons-857381)       <target type='serial' port='0'/>
	I0930 19:38:40.119716   15584 main.go:141] libmachine: (addons-857381)     </console>
	I0930 19:38:40.119728   15584 main.go:141] libmachine: (addons-857381)     <rng model='virtio'>
	I0930 19:38:40.119742   15584 main.go:141] libmachine: (addons-857381)       <backend model='random'>/dev/random</backend>
	I0930 19:38:40.119751   15584 main.go:141] libmachine: (addons-857381)     </rng>
	I0930 19:38:40.119764   15584 main.go:141] libmachine: (addons-857381)     
	I0930 19:38:40.119775   15584 main.go:141] libmachine: (addons-857381)     
	I0930 19:38:40.119787   15584 main.go:141] libmachine: (addons-857381)   </devices>
	I0930 19:38:40.119796   15584 main.go:141] libmachine: (addons-857381) </domain>
	I0930 19:38:40.119808   15584 main.go:141] libmachine: (addons-857381) 
	I0930 19:38:40.152290   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:13:e6:2a in network default
	I0930 19:38:40.152794   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:40.152807   15584 main.go:141] libmachine: (addons-857381) Ensuring networks are active...
	I0930 19:38:40.153769   15584 main.go:141] libmachine: (addons-857381) Ensuring network default is active
	I0930 19:38:40.154084   15584 main.go:141] libmachine: (addons-857381) Ensuring network mk-addons-857381 is active
	I0930 19:38:40.154622   15584 main.go:141] libmachine: (addons-857381) Getting domain xml...
	I0930 19:38:40.155306   15584 main.go:141] libmachine: (addons-857381) Creating domain...
	I0930 19:38:41.750138   15584 main.go:141] libmachine: (addons-857381) Waiting to get IP...
	I0930 19:38:41.750840   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:41.751228   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:41.751257   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:41.751208   15606 retry.go:31] will retry after 219.233908ms: waiting for machine to come up
	I0930 19:38:41.971647   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:41.972164   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:41.972188   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:41.972106   15606 retry.go:31] will retry after 262.030132ms: waiting for machine to come up
	I0930 19:38:42.235394   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:42.235857   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:42.235884   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:42.235807   15606 retry.go:31] will retry after 476.729894ms: waiting for machine to come up
	I0930 19:38:42.714621   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:42.715111   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:42.715165   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:42.715111   15606 retry.go:31] will retry after 585.557ms: waiting for machine to come up
	I0930 19:38:43.301755   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:43.302138   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:43.302170   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:43.302081   15606 retry.go:31] will retry after 660.338313ms: waiting for machine to come up
	I0930 19:38:43.963791   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:43.964219   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:43.964239   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:43.964181   15606 retry.go:31] will retry after 770.621107ms: waiting for machine to come up
	I0930 19:38:44.736897   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:44.737416   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:44.737436   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:44.737400   15606 retry.go:31] will retry after 934.807687ms: waiting for machine to come up
	I0930 19:38:45.673695   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:45.674163   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:45.674192   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:45.674131   15606 retry.go:31] will retry after 1.028873402s: waiting for machine to come up
	I0930 19:38:46.704659   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:46.705228   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:46.705252   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:46.705171   15606 retry.go:31] will retry after 1.355644802s: waiting for machine to come up
	I0930 19:38:48.062629   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:48.063045   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:48.063066   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:48.063003   15606 retry.go:31] will retry after 1.834607389s: waiting for machine to come up
	I0930 19:38:49.899481   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:49.899966   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:49.899993   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:49.899917   15606 retry.go:31] will retry after 2.552900967s: waiting for machine to come up
	I0930 19:38:52.455785   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:52.456329   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:52.456351   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:52.456275   15606 retry.go:31] will retry after 2.738603537s: waiting for machine to come up
	I0930 19:38:55.196845   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:55.197213   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:55.197249   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:55.197206   15606 retry.go:31] will retry after 2.960743363s: waiting for machine to come up
	I0930 19:38:58.161388   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:38:58.161803   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find current IP address of domain addons-857381 in network mk-addons-857381
	I0930 19:38:58.161831   15584 main.go:141] libmachine: (addons-857381) DBG | I0930 19:38:58.161744   15606 retry.go:31] will retry after 3.899735013s: waiting for machine to come up
	I0930 19:39:02.064849   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:02.065350   15584 main.go:141] libmachine: (addons-857381) Found IP for machine: 192.168.39.16
	I0930 19:39:02.065374   15584 main.go:141] libmachine: (addons-857381) Reserving static IP address...
	I0930 19:39:02.065387   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has current primary IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:02.065709   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find host DHCP lease matching {name: "addons-857381", mac: "52:54:00:2f:88:a1", ip: "192.168.39.16"} in network mk-addons-857381
	I0930 19:39:02.140991   15584 main.go:141] libmachine: (addons-857381) DBG | Getting to WaitForSSH function...
	I0930 19:39:02.141024   15584 main.go:141] libmachine: (addons-857381) Reserved static IP address: 192.168.39.16
	I0930 19:39:02.141038   15584 main.go:141] libmachine: (addons-857381) Waiting for SSH to be available...
	I0930 19:39:02.143380   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:02.143712   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381
	I0930 19:39:02.143736   15584 main.go:141] libmachine: (addons-857381) DBG | unable to find defined IP address of network mk-addons-857381 interface with MAC address 52:54:00:2f:88:a1
	I0930 19:39:02.143945   15584 main.go:141] libmachine: (addons-857381) DBG | Using SSH client type: external
	I0930 19:39:02.143968   15584 main.go:141] libmachine: (addons-857381) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa (-rw-------)
	I0930 19:39:02.144015   15584 main.go:141] libmachine: (addons-857381) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 19:39:02.144040   15584 main.go:141] libmachine: (addons-857381) DBG | About to run SSH command:
	I0930 19:39:02.144056   15584 main.go:141] libmachine: (addons-857381) DBG | exit 0
	I0930 19:39:02.155805   15584 main.go:141] libmachine: (addons-857381) DBG | SSH cmd err, output: exit status 255: 
	I0930 19:39:02.155842   15584 main.go:141] libmachine: (addons-857381) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0930 19:39:02.155850   15584 main.go:141] libmachine: (addons-857381) DBG | command : exit 0
	I0930 19:39:02.155855   15584 main.go:141] libmachine: (addons-857381) DBG | err     : exit status 255
	I0930 19:39:02.155862   15584 main.go:141] libmachine: (addons-857381) DBG | output  : 
	I0930 19:39:05.156591   15584 main.go:141] libmachine: (addons-857381) DBG | Getting to WaitForSSH function...
	I0930 19:39:05.159112   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.159471   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.159499   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.159674   15584 main.go:141] libmachine: (addons-857381) DBG | Using SSH client type: external
	I0930 19:39:05.159702   15584 main.go:141] libmachine: (addons-857381) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa (-rw-------)
	I0930 19:39:05.159734   15584 main.go:141] libmachine: (addons-857381) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 19:39:05.159746   15584 main.go:141] libmachine: (addons-857381) DBG | About to run SSH command:
	I0930 19:39:05.159755   15584 main.go:141] libmachine: (addons-857381) DBG | exit 0
	I0930 19:39:05.283731   15584 main.go:141] libmachine: (addons-857381) DBG | SSH cmd err, output: <nil>: 
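[The WaitForSSH probe is simply the external ssh client running `exit 0` with the options logged above; the first attempt exits 255, apparently because the command was assembled before the guest IP was known (note the empty `docker@` target earlier), and the retry a few seconds later succeeds. A stand-alone equivalent using the same key and options from the log would be:

    ssh -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 \
        -p 22 docker@192.168.39.16 'exit 0' && echo "SSH ready"
]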
	I0930 19:39:05.283945   15584 main.go:141] libmachine: (addons-857381) KVM machine creation complete!
	I0930 19:39:05.284267   15584 main.go:141] libmachine: (addons-857381) Calling .GetConfigRaw
	I0930 19:39:05.284805   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:05.285019   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:05.285141   15584 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 19:39:05.285158   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:05.286683   15584 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 19:39:05.286697   15584 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 19:39:05.286701   15584 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 19:39:05.286707   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:05.288834   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.289132   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.289157   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.289280   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:05.289449   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.289572   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.289690   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:05.289873   15584 main.go:141] libmachine: Using SSH client type: native
	I0930 19:39:05.290039   15584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0930 19:39:05.290050   15584 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 19:39:05.386984   15584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 19:39:05.387014   15584 main.go:141] libmachine: Detecting the provisioner...
	I0930 19:39:05.387029   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:05.389409   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.389748   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.389776   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.389917   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:05.390074   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.390198   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.390305   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:05.390448   15584 main.go:141] libmachine: Using SSH client type: native
	I0930 19:39:05.390666   15584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0930 19:39:05.390682   15584 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 19:39:05.492417   15584 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 19:39:05.492481   15584 main.go:141] libmachine: found compatible host: buildroot
	I0930 19:39:05.492489   15584 main.go:141] libmachine: Provisioning with buildroot...
	I0930 19:39:05.492500   15584 main.go:141] libmachine: (addons-857381) Calling .GetMachineName
	I0930 19:39:05.492732   15584 buildroot.go:166] provisioning hostname "addons-857381"
	I0930 19:39:05.492757   15584 main.go:141] libmachine: (addons-857381) Calling .GetMachineName
	I0930 19:39:05.492945   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:05.495929   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.496239   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.496305   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.496439   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:05.496644   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.496802   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.496952   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:05.497104   15584 main.go:141] libmachine: Using SSH client type: native
	I0930 19:39:05.497271   15584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0930 19:39:05.497285   15584 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-857381 && echo "addons-857381" | sudo tee /etc/hostname
	I0930 19:39:05.609891   15584 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-857381
	
	I0930 19:39:05.609922   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:05.612978   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.613698   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.613729   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.613907   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:05.614121   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.614279   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.614423   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:05.614594   15584 main.go:141] libmachine: Using SSH client type: native
	I0930 19:39:05.614753   15584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0930 19:39:05.614769   15584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-857381' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-857381/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-857381' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 19:39:05.725738   15584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
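[After the hostname command and the /etc/hosts guard above, the result can be confirmed on the guest with a quick check (a sketch, not part of the test flow):

    hostname                        # should print addons-857381
    grep addons-857381 /etc/hosts   # 127.0.1.1 mapping added by the block above
]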
	I0930 19:39:05.725765   15584 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 19:39:05.725804   15584 buildroot.go:174] setting up certificates
	I0930 19:39:05.725819   15584 provision.go:84] configureAuth start
	I0930 19:39:05.725827   15584 main.go:141] libmachine: (addons-857381) Calling .GetMachineName
	I0930 19:39:05.726168   15584 main.go:141] libmachine: (addons-857381) Calling .GetIP
	I0930 19:39:05.728742   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.729007   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.729035   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.729182   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:05.731678   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.732051   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.732081   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.732153   15584 provision.go:143] copyHostCerts
	I0930 19:39:05.732229   15584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 19:39:05.732358   15584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 19:39:05.732435   15584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 19:39:05.732484   15584 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.addons-857381 san=[127.0.0.1 192.168.39.16 addons-857381 localhost minikube]
	I0930 19:39:05.797657   15584 provision.go:177] copyRemoteCerts
	I0930 19:39:05.797735   15584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 19:39:05.797762   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:05.800885   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.801217   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.801247   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.801400   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:05.801568   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.801718   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:05.801822   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:05.882191   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 19:39:05.905511   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 19:39:05.929051   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 19:39:05.954162   15584 provision.go:87] duration metric: took 228.330604ms to configureAuth
	I0930 19:39:05.954201   15584 buildroot.go:189] setting minikube options for container-runtime
	I0930 19:39:05.954387   15584 config.go:182] Loaded profile config "addons-857381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 19:39:05.954466   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:05.957503   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.957900   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:05.957927   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:05.958152   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:05.958347   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.958489   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:05.958608   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:05.958729   15584 main.go:141] libmachine: Using SSH client type: native
	I0930 19:39:05.958887   15584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0930 19:39:05.958901   15584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 19:39:06.179208   15584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
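[The drop-in written above passes `--insecure-registry 10.96.0.0/12` to CRI-O, i.e. the whole Kubernetes service CIDR is treated as an insecure registry so in-cluster registries reachable on service IPs can be pulled from without TLS. To confirm the restarted crio picked it up (a sketch, assuming the systemd unit expands CRIO_MINIKUBE_OPTIONS from the sysconfig file):

    cat /etc/sysconfig/crio.minikube
    ps -o args= -C crio    # the --insecure-registry flag should appear here if the unit passes the option through
]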
	I0930 19:39:06.179237   15584 main.go:141] libmachine: Checking connection to Docker...
	I0930 19:39:06.179248   15584 main.go:141] libmachine: (addons-857381) Calling .GetURL
	I0930 19:39:06.180601   15584 main.go:141] libmachine: (addons-857381) DBG | Using libvirt version 6000000
	I0930 19:39:06.182691   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.183033   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:06.183061   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.183191   15584 main.go:141] libmachine: Docker is up and running!
	I0930 19:39:06.183202   15584 main.go:141] libmachine: Reticulating splines...
	I0930 19:39:06.183209   15584 client.go:171] duration metric: took 27.051264777s to LocalClient.Create
	I0930 19:39:06.183231   15584 start.go:167] duration metric: took 27.051324774s to libmachine.API.Create "addons-857381"
	I0930 19:39:06.183242   15584 start.go:293] postStartSetup for "addons-857381" (driver="kvm2")
	I0930 19:39:06.183251   15584 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 19:39:06.183266   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:06.183524   15584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 19:39:06.183571   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:06.185444   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.185797   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:06.185827   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.185919   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:06.186090   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:06.186188   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:06.186312   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:06.266715   15584 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 19:39:06.271185   15584 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 19:39:06.271215   15584 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 19:39:06.271287   15584 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 19:39:06.271309   15584 start.go:296] duration metric: took 88.062379ms for postStartSetup
	I0930 19:39:06.271349   15584 main.go:141] libmachine: (addons-857381) Calling .GetConfigRaw
	I0930 19:39:06.271937   15584 main.go:141] libmachine: (addons-857381) Calling .GetIP
	I0930 19:39:06.274448   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.274725   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:06.274750   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.274965   15584 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/config.json ...
	I0930 19:39:06.275129   15584 start.go:128] duration metric: took 27.161285737s to createHost
	I0930 19:39:06.275152   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:06.277424   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.277710   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:06.277737   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.277888   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:06.278053   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:06.278193   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:06.278321   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:06.278484   15584 main.go:141] libmachine: Using SSH client type: native
	I0930 19:39:06.278724   15584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0930 19:39:06.278743   15584 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 19:39:06.380303   15584 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727725146.359081243
	
	I0930 19:39:06.380326   15584 fix.go:216] guest clock: 1727725146.359081243
	I0930 19:39:06.380335   15584 fix.go:229] Guest: 2024-09-30 19:39:06.359081243 +0000 UTC Remote: 2024-09-30 19:39:06.275140075 +0000 UTC m=+27.266281521 (delta=83.941168ms)
	I0930 19:39:06.380381   15584 fix.go:200] guest clock delta is within tolerance: 83.941168ms
	I0930 19:39:06.380389   15584 start.go:83] releasing machines lock for "addons-857381", held for 27.266614473s
	I0930 19:39:06.380419   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:06.380674   15584 main.go:141] libmachine: (addons-857381) Calling .GetIP
	I0930 19:39:06.383237   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.383611   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:06.383640   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.383823   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:06.384318   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:06.384453   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:06.384548   15584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 19:39:06.384593   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:06.384651   15584 ssh_runner.go:195] Run: cat /version.json
	I0930 19:39:06.384672   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:06.387480   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.387761   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.387940   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:06.387970   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.388102   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:06.388230   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:06.388258   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:06.388321   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:06.388433   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:06.388508   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:06.388576   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:06.388649   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:06.388688   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:06.388794   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:06.460622   15584 ssh_runner.go:195] Run: systemctl --version
	I0930 19:39:06.504333   15584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 19:39:06.659157   15584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 19:39:06.665831   15584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 19:39:06.665921   15584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 19:39:06.682297   15584 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 19:39:06.682332   15584 start.go:495] detecting cgroup driver to use...
	I0930 19:39:06.682422   15584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 19:39:06.698736   15584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 19:39:06.713403   15584 docker.go:217] disabling cri-docker service (if available) ...
	I0930 19:39:06.713463   15584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 19:39:06.727772   15584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 19:39:06.741754   15584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 19:39:06.854558   15584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 19:39:07.016805   15584 docker.go:233] disabling docker service ...
	I0930 19:39:07.016868   15584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 19:39:07.031392   15584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 19:39:07.044268   15584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 19:39:07.174815   15584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 19:39:07.288136   15584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 19:39:07.302494   15584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 19:39:07.320346   15584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 19:39:07.320397   15584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:39:07.330567   15584 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 19:39:07.330642   15584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:39:07.340540   15584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:39:07.351066   15584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:39:07.361313   15584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 19:39:07.372112   15584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:39:07.382428   15584 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:39:07.398996   15584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
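[The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image registry.k8s.io/pause:3.10, cgroupfs as the cgroup manager, conmon in the "pod" cgroup, and net.ipv4.ip_unprivileged_port_start=0 under default_sysctls. A quick sanity check of the resulting drop-in (a sketch):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
]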
	I0930 19:39:07.409216   15584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 19:39:07.418760   15584 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 19:39:07.418816   15584 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 19:39:07.433137   15584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
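[The sysctl probe above fails with status 255 only because br_netfilter has not been loaded yet; the following modprobe and the echo into /proc/sys/net/ipv4/ip_forward bring up the two kernel prerequisites that the bridge CNI and kube-proxy rely on. The same state can be reproduced and checked by hand:

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # typically 1 once the module is loaded
    sysctl net.ipv4.ip_forward                  # set to 1 by the echo above
]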
	I0930 19:39:07.442882   15584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 19:39:07.558112   15584 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 19:39:07.649794   15584 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 19:39:07.649899   15584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 19:39:07.654623   15584 start.go:563] Will wait 60s for crictl version
	I0930 19:39:07.654704   15584 ssh_runner.go:195] Run: which crictl
	I0930 19:39:07.658191   15584 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 19:39:07.700342   15584 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
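[crictl is talking to CRI-O over the socket configured a moment earlier in /etc/crictl.yaml. The same version query can be made with the endpoint spelled out explicitly (a sketch):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
]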
	I0930 19:39:07.700458   15584 ssh_runner.go:195] Run: crio --version
	I0930 19:39:07.727470   15584 ssh_runner.go:195] Run: crio --version
	I0930 19:39:07.754761   15584 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 19:39:07.756216   15584 main.go:141] libmachine: (addons-857381) Calling .GetIP
	I0930 19:39:07.758595   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:07.758998   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:07.759028   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:07.759215   15584 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 19:39:07.763302   15584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 19:39:07.775047   15584 kubeadm.go:883] updating cluster {Name:addons-857381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:addons-857381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 19:39:07.775168   15584 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 19:39:07.775210   15584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 19:39:07.807313   15584 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 19:39:07.807388   15584 ssh_runner.go:195] Run: which lz4
	I0930 19:39:07.811181   15584 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 19:39:07.815355   15584 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 19:39:07.815401   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 19:39:09.011857   15584 crio.go:462] duration metric: took 1.20070674s to copy over tarball
	I0930 19:39:09.011922   15584 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 19:39:11.156167   15584 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.144208659s)
	I0930 19:39:11.156197   15584 crio.go:469] duration metric: took 2.144313315s to extract the tarball
	I0930 19:39:11.156204   15584 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 19:39:11.192433   15584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 19:39:11.233108   15584 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 19:39:11.233132   15584 cache_images.go:84] Images are preloaded, skipping loading
	I0930 19:39:11.233139   15584 kubeadm.go:934] updating node { 192.168.39.16 8443 v1.31.1 crio true true} ...
	I0930 19:39:11.233269   15584 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-857381 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-857381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 19:39:11.233352   15584 ssh_runner.go:195] Run: crio config
	I0930 19:39:11.277191   15584 cni.go:84] Creating CNI manager for ""
	I0930 19:39:11.277215   15584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 19:39:11.277225   15584 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 19:39:11.277248   15584 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.16 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-857381 NodeName:addons-857381 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 19:39:11.277363   15584 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-857381"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 19:39:11.277418   15584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 19:39:11.286642   15584 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 19:39:11.286704   15584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 19:39:11.295548   15584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0930 19:39:11.311549   15584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 19:39:11.331985   15584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
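[The kubeadm config rendered above is shipped to the guest as /var/tmp/minikube/kubeadm.yaml.new here and later copied to /var/tmp/minikube/kubeadm.yaml before kubeadm runs. Outside the test flow, a config like this can be exercised without applying changes via a dry run (a sketch, assuming kubeadm sits alongside kubelet under /var/lib/minikube/binaries/v1.31.1):

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
]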
	I0930 19:39:11.348728   15584 ssh_runner.go:195] Run: grep 192.168.39.16	control-plane.minikube.internal$ /etc/hosts
	I0930 19:39:11.352327   15584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 19:39:11.364401   15584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 19:39:11.481660   15584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 19:39:11.497079   15584 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381 for IP: 192.168.39.16
	I0930 19:39:11.497100   15584 certs.go:194] generating shared ca certs ...
	I0930 19:39:11.497116   15584 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:11.497260   15584 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 19:39:11.648998   15584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt ...
	I0930 19:39:11.649025   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt: {Name:mk6e5f82ec05fd1020277cb50e5cfcc0dabcacae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:11.649213   15584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key ...
	I0930 19:39:11.649229   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key: {Name:mk0ef923818a162097b78148b543208a914b5bb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:11.649322   15584 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 19:39:11.753260   15584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt ...
	I0930 19:39:11.753290   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt: {Name:mke9d528b1a86f83c00d6802b8724e9dc7fcbf2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:11.753464   15584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key ...
	I0930 19:39:11.753479   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key: {Name:mk8d6f919cfde9b2ba252ed4e645dd7abe933692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:11.753574   15584 certs.go:256] generating profile certs ...
	I0930 19:39:11.753638   15584 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.key
	I0930 19:39:11.753663   15584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt with IP's: []
	I0930 19:39:11.993825   15584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt ...
	I0930 19:39:11.993862   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: {Name:mkfdecb09e1eaad0bf5d023250541bd133526bf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:11.994031   15584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.key ...
	I0930 19:39:11.994043   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.key: {Name:mk5b3d09b580d0cb32db7795505ff42b338bebcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:11.994106   15584 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.key.2630616d
	I0930 19:39:11.994123   15584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.crt.2630616d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.16]
	I0930 19:39:12.123421   15584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.crt.2630616d ...
	I0930 19:39:12.123454   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.crt.2630616d: {Name:mk0c51fdbf5c30101d513ddc20b36e402092303f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:12.123638   15584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.key.2630616d ...
	I0930 19:39:12.123655   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.key.2630616d: {Name:mk22e6929637babbf135e841e671bfe79d76bb0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:12.123725   15584 certs.go:381] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.crt.2630616d -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.crt
	I0930 19:39:12.123793   15584 certs.go:385] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.key.2630616d -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.key
	I0930 19:39:12.123839   15584 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/proxy-client.key
	I0930 19:39:12.123854   15584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/proxy-client.crt with IP's: []
	I0930 19:39:12.195319   15584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/proxy-client.crt ...
	I0930 19:39:12.195350   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/proxy-client.crt: {Name:mk713b9e40199aa6c8687b380ad01559be53ec34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:12.195497   15584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/proxy-client.key ...
	I0930 19:39:12.195507   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/proxy-client.key: {Name:mkea90975034f67fe95bb6a85ec32c0ef43e68e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
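[The profile certs generated above include an apiserver serving cert whose SANs cover the service VIP 10.96.0.1, loopback, 10.0.0.1 and the node IP 192.168.39.16. Once written, the SAN list can be inspected with openssl (a sketch):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
]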
	I0930 19:39:12.195696   15584 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 19:39:12.195729   15584 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 19:39:12.195751   15584 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 19:39:12.195774   15584 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 19:39:12.196294   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 19:39:12.223952   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 19:39:12.246370   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 19:39:12.279886   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 19:39:12.303029   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0930 19:39:12.325838   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 19:39:12.349163   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 19:39:12.372806   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 19:39:12.396187   15584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 19:39:12.420192   15584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 19:39:12.436976   15584 ssh_runner.go:195] Run: openssl version
	I0930 19:39:12.442204   15584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 19:39:12.452601   15584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:39:12.456833   15584 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:39:12.456888   15584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:39:12.462315   15584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
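[The two openssl steps above install minikubeCA.pem into the guest's trust store: `openssl x509 -hash` prints the subject hash (b5213941 here, judging by the symlink name), and the hash-named link in /etc/ssl/certs is what lets OpenSSL locate the CA by subject at verification time. The resulting layout can be checked with:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the subject hash
    ls -l /etc/ssl/certs/b5213941.0
]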
	I0930 19:39:12.472654   15584 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 19:39:12.476710   15584 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 19:39:12.476772   15584 kubeadm.go:392] StartCluster: {Name:addons-857381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-857381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 19:39:12.476843   15584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 19:39:12.476890   15584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 19:39:12.509454   15584 cri.go:89] found id: ""
	I0930 19:39:12.509518   15584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 19:39:12.519690   15584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 19:39:12.528634   15584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 19:39:12.537558   15584 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 19:39:12.537580   15584 kubeadm.go:157] found existing configuration files:
	
	I0930 19:39:12.537627   15584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 19:39:12.546562   15584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 19:39:12.546615   15584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 19:39:12.555210   15584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 19:39:12.563709   15584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 19:39:12.563764   15584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 19:39:12.572594   15584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 19:39:12.580936   15584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 19:39:12.580987   15584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 19:39:12.589574   15584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 19:39:12.597837   15584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 19:39:12.597888   15584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 19:39:12.606734   15584 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 19:39:12.656495   15584 kubeadm.go:310] W0930 19:39:12.641183     810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 19:39:12.657151   15584 kubeadm.go:310] W0930 19:39:12.642020     810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 19:39:12.764273   15584 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 19:39:22.111607   15584 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 19:39:22.111685   15584 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 19:39:22.111776   15584 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 19:39:22.111893   15584 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 19:39:22.112027   15584 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 19:39:22.112104   15584 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 19:39:22.113710   15584 out.go:235]   - Generating certificates and keys ...
	I0930 19:39:22.113790   15584 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 19:39:22.113862   15584 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 19:39:22.113958   15584 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0930 19:39:22.114050   15584 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0930 19:39:22.114143   15584 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0930 19:39:22.114222   15584 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0930 19:39:22.114302   15584 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0930 19:39:22.114414   15584 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-857381 localhost] and IPs [192.168.39.16 127.0.0.1 ::1]
	I0930 19:39:22.114460   15584 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0930 19:39:22.114592   15584 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-857381 localhost] and IPs [192.168.39.16 127.0.0.1 ::1]
	I0930 19:39:22.114664   15584 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0930 19:39:22.114748   15584 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0930 19:39:22.114814   15584 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0930 19:39:22.114901   15584 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 19:39:22.114973   15584 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 19:39:22.115058   15584 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 19:39:22.115139   15584 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 19:39:22.115211   15584 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 19:39:22.115281   15584 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 19:39:22.115360   15584 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 19:39:22.115417   15584 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 19:39:22.116907   15584 out.go:235]   - Booting up control plane ...
	I0930 19:39:22.116999   15584 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 19:39:22.117066   15584 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 19:39:22.117129   15584 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 19:39:22.117234   15584 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 19:39:22.117369   15584 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 19:39:22.117427   15584 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 19:39:22.117597   15584 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 19:39:22.117746   15584 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 19:39:22.117827   15584 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.864878ms
	I0930 19:39:22.117935   15584 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 19:39:22.118041   15584 kubeadm.go:310] [api-check] The API server is healthy after 5.00170551s
	I0930 19:39:22.118221   15584 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 19:39:22.118406   15584 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 19:39:22.118481   15584 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 19:39:22.118679   15584 kubeadm.go:310] [mark-control-plane] Marking the node addons-857381 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 19:39:22.118753   15584 kubeadm.go:310] [bootstrap-token] Using token: 2zqthc.qj6bpwsk1i25jfw6
	I0930 19:39:22.120480   15584 out.go:235]   - Configuring RBAC rules ...
	I0930 19:39:22.120608   15584 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 19:39:22.120680   15584 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 19:39:22.120802   15584 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 19:39:22.120917   15584 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 19:39:22.121021   15584 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 19:39:22.121095   15584 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 19:39:22.121200   15584 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 19:39:22.121239   15584 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 19:39:22.121286   15584 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 19:39:22.121292   15584 kubeadm.go:310] 
	I0930 19:39:22.121363   15584 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 19:39:22.121375   15584 kubeadm.go:310] 
	I0930 19:39:22.121489   15584 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 19:39:22.121521   15584 kubeadm.go:310] 
	I0930 19:39:22.121561   15584 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 19:39:22.121648   15584 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 19:39:22.121728   15584 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 19:39:22.121740   15584 kubeadm.go:310] 
	I0930 19:39:22.121818   15584 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 19:39:22.121825   15584 kubeadm.go:310] 
	I0930 19:39:22.121895   15584 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 19:39:22.121904   15584 kubeadm.go:310] 
	I0930 19:39:22.121982   15584 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 19:39:22.122058   15584 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 19:39:22.122127   15584 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 19:39:22.122134   15584 kubeadm.go:310] 
	I0930 19:39:22.122209   15584 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 19:39:22.122279   15584 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 19:39:22.122285   15584 kubeadm.go:310] 
	I0930 19:39:22.122360   15584 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2zqthc.qj6bpwsk1i25jfw6 \
	I0930 19:39:22.122450   15584 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a \
	I0930 19:39:22.122473   15584 kubeadm.go:310] 	--control-plane 
	I0930 19:39:22.122482   15584 kubeadm.go:310] 
	I0930 19:39:22.122556   15584 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 19:39:22.122562   15584 kubeadm.go:310] 
	I0930 19:39:22.122633   15584 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2zqthc.qj6bpwsk1i25jfw6 \
	I0930 19:39:22.122742   15584 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a 
	I0930 19:39:22.122753   15584 cni.go:84] Creating CNI manager for ""
	I0930 19:39:22.122760   15584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 19:39:22.124276   15584 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 19:39:22.125392   15584 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 19:39:22.137298   15584 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 19:39:22.159047   15584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 19:39:22.159160   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:22.159174   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-857381 minikube.k8s.io/updated_at=2024_09_30T19_39_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022 minikube.k8s.io/name=addons-857381 minikube.k8s.io/primary=true
	I0930 19:39:22.178203   15584 ops.go:34] apiserver oom_adj: -16
	I0930 19:39:22.298845   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:22.799840   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:23.299680   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:23.799875   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:24.298916   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:24.799796   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:25.299026   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:25.799660   15584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:39:25.868472   15584 kubeadm.go:1113] duration metric: took 3.709383377s to wait for elevateKubeSystemPrivileges
	I0930 19:39:25.868505   15584 kubeadm.go:394] duration metric: took 13.391737223s to StartCluster
	I0930 19:39:25.868523   15584 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:25.868662   15584 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 19:39:25.869112   15584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:39:25.869296   15584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0930 19:39:25.869324   15584 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 19:39:25.869370   15584 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0930 19:39:25.869469   15584 addons.go:69] Setting gcp-auth=true in profile "addons-857381"
	I0930 19:39:25.869486   15584 addons.go:69] Setting ingress-dns=true in profile "addons-857381"
	I0930 19:39:25.869501   15584 addons.go:234] Setting addon ingress-dns=true in "addons-857381"
	I0930 19:39:25.869494   15584 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-857381"
	I0930 19:39:25.869513   15584 addons.go:69] Setting registry=true in profile "addons-857381"
	I0930 19:39:25.869513   15584 addons.go:69] Setting cloud-spanner=true in profile "addons-857381"
	I0930 19:39:25.869525   15584 addons.go:69] Setting metrics-server=true in profile "addons-857381"
	I0930 19:39:25.869535   15584 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-857381"
	I0930 19:39:25.869536   15584 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-857381"
	I0930 19:39:25.869543   15584 addons.go:234] Setting addon cloud-spanner=true in "addons-857381"
	I0930 19:39:25.869551   15584 addons.go:69] Setting inspektor-gadget=true in profile "addons-857381"
	I0930 19:39:25.869553   15584 addons.go:69] Setting volumesnapshots=true in profile "addons-857381"
	I0930 19:39:25.869554   15584 addons.go:69] Setting storage-provisioner=true in profile "addons-857381"
	I0930 19:39:25.869565   15584 addons.go:234] Setting addon inspektor-gadget=true in "addons-857381"
	I0930 19:39:25.869565   15584 addons.go:234] Setting addon volumesnapshots=true in "addons-857381"
	I0930 19:39:25.869582   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869588   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869601   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869505   15584 mustload.go:65] Loading cluster: addons-857381
	I0930 19:39:25.869549   15584 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-857381"
	I0930 19:39:25.869775   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869847   15584 config.go:182] Loaded profile config "addons-857381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 19:39:25.870033   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.870035   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.870078   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.870100   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.869567   15584 addons.go:234] Setting addon storage-provisioner=true in "addons-857381"
	I0930 19:39:25.870132   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.870145   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869529   15584 addons.go:234] Setting addon registry=true in "addons-857381"
	I0930 19:39:25.870175   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.870197   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.870083   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.870195   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869511   15584 config.go:182] Loaded profile config "addons-857381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 19:39:25.870526   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.869544   15584 addons.go:69] Setting volcano=true in profile "addons-857381"
	I0930 19:39:25.870546   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.870557   15584 addons.go:234] Setting addon volcano=true in "addons-857381"
	I0930 19:39:25.870583   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869482   15584 addons.go:69] Setting ingress=true in profile "addons-857381"
	I0930 19:39:25.870706   15584 addons.go:234] Setting addon ingress=true in "addons-857381"
	I0930 19:39:25.870739   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.870748   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.870773   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.870897   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.870911   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.871085   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.871115   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.869473   15584 addons.go:69] Setting yakd=true in profile "addons-857381"
	I0930 19:39:25.871269   15584 addons.go:234] Setting addon yakd=true in "addons-857381"
	I0930 19:39:25.871297   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869520   15584 addons.go:69] Setting default-storageclass=true in profile "addons-857381"
	I0930 19:39:25.871410   15584 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-857381"
	I0930 19:39:25.871679   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.871704   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.869539   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.869545   15584 addons.go:234] Setting addon metrics-server=true in "addons-857381"
	I0930 19:39:25.871938   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.872087   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.872111   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.872268   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.872297   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.869546   15584 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-857381"
	I0930 19:39:25.869552   15584 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-857381"
	I0930 19:39:25.870118   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.873240   15584 out.go:177] * Verifying Kubernetes components...
	I0930 19:39:25.874824   15584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 19:39:25.875031   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.875068   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.870165   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.875837   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.891609   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36605
	I0930 19:39:25.891622   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36305
	I0930 19:39:25.892198   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.892648   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.892839   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.892856   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.892958   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34113
	I0930 19:39:25.893205   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.893224   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.893339   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.893526   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.893609   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.893925   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.893942   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.893985   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.894012   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.894209   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.894231   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.894604   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.896401   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32887
	I0930 19:39:25.901911   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34897
	I0930 19:39:25.908027   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.908062   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.908658   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.908681   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.910137   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36075
	I0930 19:39:25.910232   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38099
	I0930 19:39:25.910381   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.910420   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.910689   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.910814   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.910889   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.911356   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.911384   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.911518   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.911547   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.911704   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.911720   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.911760   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35065
	I0930 19:39:25.912108   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.912153   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.912245   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.912754   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.912787   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.913013   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.913047   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.913204   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.913221   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.913281   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.913621   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.914224   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.914247   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.919833   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.920758   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.920793   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.928106   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.928373   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.930483   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.930920   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.930971   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.943442   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34069
	I0930 19:39:25.946158   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0930 19:39:25.946301   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42649
	I0930 19:39:25.946399   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.947919   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.947941   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.948022   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.948109   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37203
	I0930 19:39:25.948121   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.948168   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.948220   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37497
	I0930 19:39:25.948395   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45111
	I0930 19:39:25.949364   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.949469   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.949482   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.949486   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.949535   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.950004   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.950017   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.950055   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.950147   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.950154   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.950161   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.950173   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.950552   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.950566   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.950629   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.951116   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.951576   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.951610   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.951746   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.951981   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.952074   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.952099   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.952588   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.953272   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.953294   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.953679   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.953882   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.954158   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.954184   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.954412   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:25.955485   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:25.955737   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:25.955751   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:25.955806   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:25.956180   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:25.956201   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:25.956207   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:25.956216   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:25.957588   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:25.957390   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0930 19:39:25.957452   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41277
	I0930 19:39:25.957946   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:25.957983   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:25.957992   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	W0930 19:39:25.958081   15584 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0930 19:39:25.958401   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.958881   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.958900   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.958987   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42313
	I0930 19:39:25.959289   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.959314   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.959474   15584 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0930 19:39:25.959492   15584 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0930 19:39:25.959513   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:25.959875   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.959897   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.960126   15584 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0930 19:39:25.960524   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.960672   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.961838   15584 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0930 19:39:25.961855   15584 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0930 19:39:25.961885   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:25.962881   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.962921   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.965353   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.967465   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.967720   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:25.967752   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.967998   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:25.968211   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:25.968229   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:25.968253   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.968412   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:25.968456   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:25.968558   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:25.968871   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:25.969023   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:25.969358   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:25.969828   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36923
	I0930 19:39:25.971542   15584 addons.go:234] Setting addon default-storageclass=true in "addons-857381"
	I0930 19:39:25.971578   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.971945   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.971965   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.973722   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41355
	I0930 19:39:25.974115   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45175
	I0930 19:39:25.974519   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.974915   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.975095   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.975108   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.975433   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.975634   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.975824   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.976012   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.976033   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.976430   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.976444   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.976501   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.976683   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.977028   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.977624   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.977661   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.977877   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:25.979689   15584 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-857381"
	I0930 19:39:25.979733   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:25.980117   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:25.980151   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:25.981658   15584 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0930 19:39:25.982583   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43149
	I0930 19:39:25.983098   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.983567   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43789
	I0930 19:39:25.983865   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.983878   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.984274   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.984379   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.984563   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.984759   15584 out.go:177]   - Using image docker.io/registry:2.8.3
	I0930 19:39:25.984836   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.984863   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.985186   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.985334   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.986318   15584 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0930 19:39:25.986335   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0930 19:39:25.986353   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:25.987060   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:25.987776   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39681
	I0930 19:39:25.988280   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:25.988862   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:25.988877   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:25.988935   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:25.989074   15584 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0930 19:39:25.989812   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:25.990023   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.990033   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:25.990473   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:25.990510   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.990574   15584 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 19:39:25.990597   15584 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 19:39:25.990617   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:25.991173   15584 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 19:39:25.991455   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:25.991620   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:25.991751   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:25.991860   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:25.993542   15584 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 19:39:25.993741   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 19:39:25.993761   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:25.993705   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:25.994528   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.995054   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:25.995071   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.995363   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:25.995558   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:25.995716   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:25.995862   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:25.996207   15584 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0930 19:39:25.997530   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.997597   15584 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0930 19:39:25.997617   15584 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0930 19:39:25.997635   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:25.997905   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:25.997931   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:25.998174   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:25.998350   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:25.998496   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:25.998614   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:26.001113   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.001606   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:26.001633   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.001819   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:26.001978   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:26.002102   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:26.002213   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:26.002507   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35615
	I0930 19:39:26.003016   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.003573   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.003590   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.004001   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.004290   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:26.007901   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46353
	I0930 19:39:26.007985   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45975
	I0930 19:39:26.008624   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.009653   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0930 19:39:26.010668   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.010726   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.011079   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.011091   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.011295   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:26.011657   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.011732   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.011763   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.012575   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.012669   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35101
	I0930 19:39:26.012829   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:26.013000   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.013407   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:26.013606   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.013621   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.013968   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.014049   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.014065   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.014119   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:26.014353   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.014494   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:26.014944   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:26.015656   15584 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0930 19:39:26.016134   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:26.016798   15584 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0930 19:39:26.017425   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:26.017622   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34427
	I0930 19:39:26.017897   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.018270   15584 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0930 19:39:26.018286   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0930 19:39:26.018301   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:26.018271   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.018352   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.018646   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.018937   15584 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0930 19:39:26.018974   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0930 19:39:26.019146   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:26.019175   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:26.019458   15584 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0930 19:39:26.019469   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0930 19:39:26.019480   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:26.022308   15584 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 19:39:26.022318   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0930 19:39:26.022462   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.023468   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.023512   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:26.023547   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.023574   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40001
	I0930 19:39:26.023698   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:26.023999   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:26.024081   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.024161   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:26.024178   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.024276   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:26.024400   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:26.024502   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:26.024632   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:26.025111   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0930 19:39:26.025197   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.025201   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:26.025212   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.025377   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:26.025647   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.025709   15584 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 19:39:26.025818   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:26.026733   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38173
	I0930 19:39:26.027178   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.028031   15584 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0930 19:39:26.028049   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0930 19:39:26.028119   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.028131   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.028181   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:26.028202   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:26.028442   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0930 19:39:26.029148   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.029701   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:26.029741   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:26.030064   15584 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0930 19:39:26.031125   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.031427   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0930 19:39:26.031525   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:26.031567   15584 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 19:39:26.031571   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.031579   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0930 19:39:26.031598   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:26.031737   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:26.031852   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:26.032014   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:26.032136   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:26.034693   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0930 19:39:26.035043   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.035464   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:26.035521   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.035730   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:26.035883   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:26.035993   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:26.036170   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:26.037151   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0930 19:39:26.038304   15584 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0930 19:39:26.039572   15584 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0930 19:39:26.039593   15584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0930 19:39:26.039616   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:26.042725   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.043135   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:26.043161   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.043322   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:26.043504   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:26.043649   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:26.043779   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:26.046214   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42533
	I0930 19:39:26.046708   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.047211   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.047230   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.047643   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34181
	I0930 19:39:26.047658   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.047829   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:26.048012   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:26.048450   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:26.048463   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:26.048874   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:26.049079   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:26.049587   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:26.049871   15584 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 19:39:26.049894   15584 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 19:39:26.049910   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:26.050844   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:26.053693   15584 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0930 19:39:26.053892   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.054150   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:26.054175   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.054350   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:26.054606   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:26.054743   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:26.054898   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:26.057159   15584 out.go:177]   - Using image docker.io/busybox:stable
	I0930 19:39:26.058444   15584 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 19:39:26.058456   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0930 19:39:26.058471   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	W0930 19:39:26.058658   15584 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34418->192.168.39.16:22: read: connection reset by peer
	I0930 19:39:26.058676   15584 retry.go:31] will retry after 237.78819ms: ssh: handshake failed: read tcp 192.168.39.1:34418->192.168.39.16:22: read: connection reset by peer
	I0930 19:39:26.061619   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.061962   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:26.062006   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:26.062106   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:26.062224   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:26.062300   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:26.062361   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	W0930 19:39:26.065959   15584 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34426->192.168.39.16:22: read: connection reset by peer
	I0930 19:39:26.065979   15584 retry.go:31] will retry after 167.277624ms: ssh: handshake failed: read tcp 192.168.39.1:34426->192.168.39.16:22: read: connection reset by peer
	I0930 19:39:26.339466   15584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 19:39:26.339517   15584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0930 19:39:26.403846   15584 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0930 19:39:26.403877   15584 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0930 19:39:26.418875   15584 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0930 19:39:26.418902   15584 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0930 19:39:26.444724   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 19:39:26.469397   15584 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0930 19:39:26.469428   15584 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0930 19:39:26.470418   15584 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 19:39:26.470454   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0930 19:39:26.484974   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0930 19:39:26.490665   15584 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0930 19:39:26.490690   15584 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0930 19:39:26.517120   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 19:39:26.544379   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0930 19:39:26.563968   15584 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0930 19:39:26.563993   15584 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0930 19:39:26.604180   15584 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0930 19:39:26.604208   15584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0930 19:39:26.620313   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0930 19:39:26.672698   15584 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0930 19:39:26.672723   15584 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0930 19:39:26.688307   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 19:39:26.714792   15584 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0930 19:39:26.714816   15584 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0930 19:39:26.728893   15584 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 19:39:26.728920   15584 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 19:39:26.744719   15584 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0930 19:39:26.744745   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0930 19:39:26.842193   15584 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0930 19:39:26.842218   15584 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0930 19:39:26.859317   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 19:39:26.899446   15584 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0930 19:39:26.899471   15584 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0930 19:39:26.904707   15584 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0930 19:39:26.904731   15584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0930 19:39:26.961885   15584 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0930 19:39:26.961904   15584 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0930 19:39:26.962165   15584 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0930 19:39:26.962184   15584 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0930 19:39:26.977061   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0930 19:39:27.039064   15584 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 19:39:27.039095   15584 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 19:39:27.067135   15584 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 19:39:27.067165   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0930 19:39:27.144070   15584 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0930 19:39:27.144093   15584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0930 19:39:27.181844   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 19:39:27.204338   15584 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0930 19:39:27.204364   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0930 19:39:27.262301   15584 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0930 19:39:27.262328   15584 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0930 19:39:27.319423   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 19:39:27.366509   15584 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0930 19:39:27.366531   15584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0930 19:39:27.474305   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0930 19:39:27.577560   15584 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0930 19:39:27.577589   15584 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0930 19:39:27.717753   15584 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0930 19:39:27.717785   15584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0930 19:39:27.874602   15584 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I0930 19:39:27.874633   15584 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I0930 19:39:27.969590   15584 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0930 19:39:27.969615   15584 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0930 19:39:28.141702   15584 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0930 19:39:28.141732   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0930 19:39:28.341745   15584 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 19:39:28.341776   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I0930 19:39:28.455162   15584 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0930 19:39:28.455188   15584 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0930 19:39:28.678401   15584 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.338898628s)
	I0930 19:39:28.678417   15584 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.338851725s)
	I0930 19:39:28.678450   15584 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0930 19:39:28.679459   15584 node_ready.go:35] waiting up to 6m0s for node "addons-857381" to be "Ready" ...
	I0930 19:39:28.692964   15584 node_ready.go:49] node "addons-857381" has status "Ready":"True"
	I0930 19:39:28.693006   15584 node_ready.go:38] duration metric: took 13.512917ms for node "addons-857381" to be "Ready" ...
	I0930 19:39:28.693018   15584 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 19:39:28.694835   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 19:39:28.724666   15584 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace to be "Ready" ...
	I0930 19:39:28.817994   15584 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0930 19:39:28.818022   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0930 19:39:29.132262   15584 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0930 19:39:29.132290   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0930 19:39:29.194565   15584 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-857381" context rescaled to 1 replicas
	I0930 19:39:29.322176   15584 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 19:39:29.322196   15584 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0930 19:39:29.581322   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 19:39:30.236110   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.751106656s)
	I0930 19:39:30.236157   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:30.236166   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:30.236216   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.719062545s)
	I0930 19:39:30.236266   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:30.236287   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:30.236293   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.691892299s)
	I0930 19:39:30.236308   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:30.236318   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:30.236701   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:30.236710   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:30.236724   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:30.236732   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:30.236735   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:30.236742   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:30.236746   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:30.236750   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:30.236752   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:30.236754   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:30.236761   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:30.236770   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:30.236772   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:30.236762   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:30.236906   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.792152494s)
	I0930 19:39:30.236927   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:30.236955   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:30.237054   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:30.237074   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:30.237097   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:30.237099   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:30.237107   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:30.237108   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:30.236777   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:30.238459   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:30.238460   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:30.238486   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:30.238495   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:30.238502   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:30.238496   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:30.238513   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:30.238523   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:30.238750   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:30.238766   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:30.238817   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:30.745068   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:32.778531   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:33.027172   15584 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0930 19:39:33.027218   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:33.031039   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:33.031563   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:33.031606   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:33.031748   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:33.031947   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:33.032091   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:33.032216   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:33.310796   15584 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0930 19:39:33.432989   15584 addons.go:234] Setting addon gcp-auth=true in "addons-857381"
	I0930 19:39:33.433075   15584 host.go:66] Checking if "addons-857381" exists ...
	I0930 19:39:33.433505   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:33.433542   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:33.450114   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33213
	I0930 19:39:33.450542   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:33.451073   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:33.451091   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:33.451989   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:33.452643   15584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:39:33.452678   15584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:39:33.467603   15584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I0930 19:39:33.468080   15584 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:39:33.468533   15584 main.go:141] libmachine: Using API Version  1
	I0930 19:39:33.468552   15584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:39:33.468882   15584 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:39:33.469131   15584 main.go:141] libmachine: (addons-857381) Calling .GetState
	I0930 19:39:33.470845   15584 main.go:141] libmachine: (addons-857381) Calling .DriverName
	I0930 19:39:33.471095   15584 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0930 19:39:33.471131   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHHostname
	I0930 19:39:33.473943   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:33.474399   15584 main.go:141] libmachine: (addons-857381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:88:a1", ip: ""} in network mk-addons-857381: {Iface:virbr1 ExpiryTime:2024-09-30 20:38:54 +0000 UTC Type:0 Mac:52:54:00:2f:88:a1 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-857381 Clientid:01:52:54:00:2f:88:a1}
	I0930 19:39:33.474457   15584 main.go:141] libmachine: (addons-857381) DBG | domain addons-857381 has defined IP address 192.168.39.16 and MAC address 52:54:00:2f:88:a1 in network mk-addons-857381
	I0930 19:39:33.474555   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHPort
	I0930 19:39:33.474733   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHKeyPath
	I0930 19:39:33.474879   15584 main.go:141] libmachine: (addons-857381) Calling .GetSSHUsername
	I0930 19:39:33.475055   15584 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/addons-857381/id_rsa Username:docker}
	I0930 19:39:34.292964   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.672612289s)
	I0930 19:39:34.293018   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293031   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293110   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.604771882s)
	I0930 19:39:34.293148   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293160   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.433811665s)
	I0930 19:39:34.293184   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293196   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293161   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293304   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.111420616s)
	W0930 19:39:34.293345   15584 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0930 19:39:34.293376   15584 retry.go:31] will retry after 271.524616ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0930 19:39:34.293201   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.316113203s)
	I0930 19:39:34.293411   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293416   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.293425   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293425   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.293435   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.293443   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293449   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293531   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.293542   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.293553   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293561   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293579   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.293558   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.974102674s)
	I0930 19:39:34.293609   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.819279733s)
	I0930 19:39:34.293623   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293629   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293637   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293640   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293652   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.293625   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.293675   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.293680   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.293684   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.293688   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.293692   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.293697   15584 addons.go:475] Verifying addon ingress=true in "addons-857381"
	I0930 19:39:34.293758   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.598892526s)
	I0930 19:39:34.293777   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.294035   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.294048   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.294075   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.294081   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.294089   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.294095   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.294103   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.294111   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.294121   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.294128   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.294135   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.294152   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.294158   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.294343   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.294367   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.294374   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.294390   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.294397   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.294437   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.294456   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.294462   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.294469   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.294482   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.295624   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.295658   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.295665   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.296494   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.296522   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.296528   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.296878   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.296887   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.296895   15584 addons.go:475] Verifying addon registry=true in "addons-857381"
	I0930 19:39:34.296919   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.296931   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.297440   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.297455   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.296941   15584 addons.go:475] Verifying addon metrics-server=true in "addons-857381"
	I0930 19:39:34.299354   15584 out.go:177] * Verifying ingress addon...
	I0930 19:39:34.299415   15584 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-857381 service yakd-dashboard -n yakd-dashboard
	
	I0930 19:39:34.299358   15584 out.go:177] * Verifying registry addon...
	I0930 19:39:34.301748   15584 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0930 19:39:34.303967   15584 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0930 19:39:34.347114   15584 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0930 19:39:34.347135   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:34.347645   15584 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0930 19:39:34.347667   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:34.379293   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.379322   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.379589   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:34.379665   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.379683   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	W0930 19:39:34.379773   15584 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0930 19:39:34.391480   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:34.391514   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:34.391850   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:34.391871   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:34.565511   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 19:39:34.806600   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:34.810513   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:35.232349   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:35.308666   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:35.309108   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:35.828683   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.247295259s)
	I0930 19:39:35.828738   15584 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.357617005s)
	I0930 19:39:35.828744   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:35.828881   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:35.829247   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:35.829301   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:35.829316   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:35.829324   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:35.829631   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:35.829656   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:35.829663   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:35.829671   15584 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-857381"
	I0930 19:39:35.830414   15584 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0930 19:39:35.831442   15584 out.go:177] * Verifying csi-hostpath-driver addon...
	I0930 19:39:35.833074   15584 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 19:39:35.834046   15584 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0930 19:39:35.834254   15584 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0930 19:39:35.834271   15584 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0930 19:39:35.839940   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:35.840343   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:35.847244   15584 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0930 19:39:35.847276   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:35.938617   15584 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0930 19:39:35.938652   15584 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0930 19:39:36.063928   15584 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 19:39:36.063961   15584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0930 19:39:36.120314   15584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 19:39:36.309391   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:36.314236   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:36.340348   15584 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0930 19:39:36.340371   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:36.804872   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.239314953s)
	I0930 19:39:36.804918   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:36.804933   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:36.805171   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:36.805189   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:36.805199   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:36.805208   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:36.805433   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:36.805454   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:36.967227   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:36.967460   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:36.967876   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:37.247223   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:37.307184   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:37.314533   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:37.345378   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:37.526802   15584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.406437983s)
	I0930 19:39:37.526855   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:37.526879   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:37.527198   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:37.527257   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:37.527271   15584 main.go:141] libmachine: Making call to close driver server
	I0930 19:39:37.527280   15584 main.go:141] libmachine: (addons-857381) Calling .Close
	I0930 19:39:37.527210   15584 main.go:141] libmachine: (addons-857381) DBG | Closing plugin on server side
	I0930 19:39:37.527501   15584 main.go:141] libmachine: Successfully made call to close driver server
	I0930 19:39:37.527522   15584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 19:39:37.529551   15584 addons.go:475] Verifying addon gcp-auth=true in "addons-857381"
	I0930 19:39:37.531033   15584 out.go:177] * Verifying gcp-auth addon...
	I0930 19:39:37.533661   15584 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0930 19:39:37.562401   15584 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0930 19:39:37.562432   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:37.806737   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:37.809253   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:37.839020   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:38.038065   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:38.305905   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:38.309675   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:38.339300   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:38.537175   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:38.807194   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:38.808182   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:38.839444   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:39.038213   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:39.305965   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:39.307430   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:39.339933   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:39.538121   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:39.731775   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:39.806783   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:39.808801   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:39.839365   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:40.037438   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:40.306846   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:40.308993   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:40.338409   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:40.538055   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:40.806222   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:40.808300   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:40.843451   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:41.038963   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:41.227711   15584 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-jn2h5" not found
	I0930 19:39:41.227748   15584 pod_ready.go:82] duration metric: took 12.503044527s for pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace to be "Ready" ...
	E0930 19:39:41.227761   15584 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-jn2h5" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-jn2h5" not found
	I0930 19:39:41.227771   15584 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace to be "Ready" ...
	I0930 19:39:41.308109   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:41.309908   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:41.338978   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:41.537501   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:41.808520   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:41.809542   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:41.840311   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:42.148099   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:42.306741   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:42.308939   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:42.338534   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:42.537098   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:42.805061   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:42.807375   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:42.838837   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:43.037381   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:43.234216   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:43.305308   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:43.308022   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:43.339943   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:43.537233   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:43.805707   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:43.811783   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:43.839510   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:44.037858   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:44.306420   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:44.308934   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:44.338485   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:44.537622   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:44.806844   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:44.808702   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:44.838957   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:45.036848   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:45.234876   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:45.306328   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:45.308712   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:45.343763   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:45.536859   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:45.806211   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:45.808798   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:45.839561   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:46.037708   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:46.308046   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:46.308610   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:46.339634   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:46.537600   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:46.805549   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:46.807820   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:46.838167   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:47.037473   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:47.306050   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:47.308153   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:47.339967   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:47.537051   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:47.734887   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:47.813723   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:47.814301   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:47.840811   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:48.038333   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:48.311855   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:48.312416   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:48.341988   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:48.537651   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:48.806200   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:48.809450   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:48.838999   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:49.037711   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:49.305793   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:49.307907   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:49.339445   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:49.537409   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:49.806209   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:49.808533   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:49.839853   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:50.037854   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:50.234421   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:50.306910   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:50.308611   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:50.339584   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:50.546089   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:50.806461   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:50.808559   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:50.839824   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:51.037595   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:51.305471   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:51.308222   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:51.338416   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:51.537082   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:51.806079   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:51.809149   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:51.838774   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:52.037195   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:52.236908   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:52.307438   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:52.309988   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:52.339786   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:52.539520   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:52.807714   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:52.811031   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:52.839082   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:53.037682   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:53.305629   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:53.307981   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:53.338463   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:53.537098   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:53.806021   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:53.810331   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:53.838769   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:54.091895   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:54.306715   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:54.308449   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:54.338829   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:54.540280   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:54.734396   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:54.805806   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:54.808652   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:54.838947   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:55.037868   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:55.305594   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:55.308020   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:55.338849   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:55.537911   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:55.805987   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:55.808899   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:55.839439   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:56.038492   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:56.316176   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:56.316378   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:56.340370   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:56.538344   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:56.734461   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:56.806516   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:56.809839   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:56.839171   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:57.038430   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:57.305462   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:57.307742   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:57.340252   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:57.537058   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:57.806338   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:57.808421   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:57.839125   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:58.037542   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:58.306156   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:58.307603   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:58.339349   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:58.538543   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:58.734586   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:39:58.807381   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:58.809120   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:58.908109   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:59.037847   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:59.306124   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:59.307264   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:59.338804   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:39:59.537010   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:39:59.806260   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:39:59.808807   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:39:59.839439   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:00.036904   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:00.306219   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:00.308277   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:00.339116   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:00.538595   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:00.735277   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:40:00.808141   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:00.808374   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:00.838895   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:01.037765   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:01.306325   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:01.309240   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:01.338334   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:01.540483   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:01.805905   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:01.808599   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:01.856980   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:02.038458   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:02.306037   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:02.308480   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:02.338925   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:02.537489   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:02.806720   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:02.809311   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:02.839215   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:03.038706   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:03.235095   15584 pod_ready.go:103] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"False"
	I0930 19:40:03.305605   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:03.308118   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:03.339088   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:03.537176   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:03.806049   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:03.808024   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:03.840285   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:04.047284   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:04.234184   15584 pod_ready.go:93] pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace has status "Ready":"True"
	I0930 19:40:04.234214   15584 pod_ready.go:82] duration metric: took 23.006434066s for pod "coredns-7c65d6cfc9-v2sl5" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.234227   15584 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-857381" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.238876   15584 pod_ready.go:93] pod "etcd-addons-857381" in "kube-system" namespace has status "Ready":"True"
	I0930 19:40:04.238896   15584 pod_ready.go:82] duration metric: took 4.661667ms for pod "etcd-addons-857381" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.238905   15584 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-857381" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.243161   15584 pod_ready.go:93] pod "kube-apiserver-addons-857381" in "kube-system" namespace has status "Ready":"True"
	I0930 19:40:04.243185   15584 pod_ready.go:82] duration metric: took 4.272909ms for pod "kube-apiserver-addons-857381" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.243204   15584 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-857381" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.247507   15584 pod_ready.go:93] pod "kube-controller-manager-addons-857381" in "kube-system" namespace has status "Ready":"True"
	I0930 19:40:04.247544   15584 pod_ready.go:82] duration metric: took 4.329628ms for pod "kube-controller-manager-addons-857381" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.247558   15584 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wgjdg" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.253066   15584 pod_ready.go:93] pod "kube-proxy-wgjdg" in "kube-system" namespace has status "Ready":"True"
	I0930 19:40:04.253097   15584 pod_ready.go:82] duration metric: took 5.523ms for pod "kube-proxy-wgjdg" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.253108   15584 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-857381" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.305855   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:04.308368   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:04.338826   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:04.537032   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:04.632342   15584 pod_ready.go:93] pod "kube-scheduler-addons-857381" in "kube-system" namespace has status "Ready":"True"
	I0930 19:40:04.632365   15584 pod_ready.go:82] duration metric: took 379.250879ms for pod "kube-scheduler-addons-857381" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.632374   15584 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-9vf5l" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:04.805742   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:04.808493   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:04.838704   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:05.032445   15584 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-9vf5l" in "kube-system" namespace has status "Ready":"True"
	I0930 19:40:05.032469   15584 pod_ready.go:82] duration metric: took 400.088015ms for pod "nvidia-device-plugin-daemonset-9vf5l" in "kube-system" namespace to be "Ready" ...
	I0930 19:40:05.032476   15584 pod_ready.go:39] duration metric: took 36.339446224s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 19:40:05.032494   15584 api_server.go:52] waiting for apiserver process to appear ...
	I0930 19:40:05.032544   15584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 19:40:05.037739   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:05.077269   15584 api_server.go:72] duration metric: took 39.20789395s to wait for apiserver process to appear ...
	I0930 19:40:05.077297   15584 api_server.go:88] waiting for apiserver healthz status ...
	I0930 19:40:05.077318   15584 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0930 19:40:05.081429   15584 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I0930 19:40:05.082415   15584 api_server.go:141] control plane version: v1.31.1
	I0930 19:40:05.082441   15584 api_server.go:131] duration metric: took 5.135906ms to wait for apiserver health ...
	I0930 19:40:05.082450   15584 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 19:40:05.248118   15584 system_pods.go:59] 17 kube-system pods found
	I0930 19:40:05.248151   15584 system_pods.go:61] "coredns-7c65d6cfc9-v2sl5" [7ef3332d-3ee7-4d76-bbef-2dfc99673515] Running
	I0930 19:40:05.248159   15584 system_pods.go:61] "csi-hostpath-attacher-0" [e77d98c4-0779-493d-b89f-2fbd4a41b6ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0930 19:40:05.248165   15584 system_pods.go:61] "csi-hostpath-resizer-0" [e32a8d15-973d-404b-9619-491fa27decc4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0930 19:40:05.248173   15584 system_pods.go:61] "csi-hostpathplugin-mlgws" [2f7276d7-5e87-4d2e-bd1a-6e104f3fd164] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0930 19:40:05.248178   15584 system_pods.go:61] "etcd-addons-857381" [74fe1626-8e74-435e-a2dd-f088265d04ac] Running
	I0930 19:40:05.248182   15584 system_pods.go:61] "kube-apiserver-addons-857381" [74358463-31fa-4b2f-ba36-4d0c4f5b03db] Running
	I0930 19:40:05.248185   15584 system_pods.go:61] "kube-controller-manager-addons-857381" [155182cf-78af-450c-923a-dfeb7b2a5358] Running
	I0930 19:40:05.248191   15584 system_pods.go:61] "kube-ingress-dns-minikube" [e1217c30-4e9c-43fa-a3f6-0a640781c5f8] Running
	I0930 19:40:05.248194   15584 system_pods.go:61] "kube-proxy-wgjdg" [b2646cb6-ecf8-4e44-9d48-b49eead7d727] Running
	I0930 19:40:05.248197   15584 system_pods.go:61] "kube-scheduler-addons-857381" [952cc18b-d292-4baa-8a03-dce05fdabe5c] Running
	I0930 19:40:05.248204   15584 system_pods.go:61] "metrics-server-84c5f94fbc-cdn25" [b344652c-decb-4b68-9eb4-dd034008cf98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 19:40:05.248207   15584 system_pods.go:61] "nvidia-device-plugin-daemonset-9vf5l" [f2848172-eec4-47cc-9e9d-36026e22b55c] Running
	I0930 19:40:05.248211   15584 system_pods.go:61] "registry-66c9cd494c-frqrv" [e66e6fb9-7274-4a0b-b787-c64abc8ffe04] Running
	I0930 19:40:05.248216   15584 system_pods.go:61] "registry-proxy-m2j7k" [cf0e9fcc-d5e3-4dd8-8337-406b07ab9495] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0930 19:40:05.248223   15584 system_pods.go:61] "snapshot-controller-56fcc65765-g26cx" [0a7563fa-d127-473c-b9a1-ece459d51ec0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 19:40:05.248256   15584 system_pods.go:61] "snapshot-controller-56fcc65765-vqjbn" [68d33976-a421-4696-83a7-303c2bf65ba3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 19:40:05.248264   15584 system_pods.go:61] "storage-provisioner" [cf253e6d-52dd-4bbf-a505-61269b1bb4d1] Running
	I0930 19:40:05.248271   15584 system_pods.go:74] duration metric: took 165.811366ms to wait for pod list to return data ...
	I0930 19:40:05.248282   15584 default_sa.go:34] waiting for default service account to be created ...
	I0930 19:40:05.319334   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:05.321630   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:05.349289   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:05.432684   15584 default_sa.go:45] found service account: "default"
	I0930 19:40:05.432711   15584 default_sa.go:55] duration metric: took 184.42325ms for default service account to be created ...
	I0930 19:40:05.432720   15584 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 19:40:05.537876   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:05.637325   15584 system_pods.go:86] 17 kube-system pods found
	I0930 19:40:05.637354   15584 system_pods.go:89] "coredns-7c65d6cfc9-v2sl5" [7ef3332d-3ee7-4d76-bbef-2dfc99673515] Running
	I0930 19:40:05.637363   15584 system_pods.go:89] "csi-hostpath-attacher-0" [e77d98c4-0779-493d-b89f-2fbd4a41b6ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0930 19:40:05.637368   15584 system_pods.go:89] "csi-hostpath-resizer-0" [e32a8d15-973d-404b-9619-491fa27decc4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0930 19:40:05.637376   15584 system_pods.go:89] "csi-hostpathplugin-mlgws" [2f7276d7-5e87-4d2e-bd1a-6e104f3fd164] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0930 19:40:05.637380   15584 system_pods.go:89] "etcd-addons-857381" [74fe1626-8e74-435e-a2dd-f088265d04ac] Running
	I0930 19:40:05.637384   15584 system_pods.go:89] "kube-apiserver-addons-857381" [74358463-31fa-4b2f-ba36-4d0c4f5b03db] Running
	I0930 19:40:05.637387   15584 system_pods.go:89] "kube-controller-manager-addons-857381" [155182cf-78af-450c-923a-dfeb7b2a5358] Running
	I0930 19:40:05.637392   15584 system_pods.go:89] "kube-ingress-dns-minikube" [e1217c30-4e9c-43fa-a3f6-0a640781c5f8] Running
	I0930 19:40:05.637395   15584 system_pods.go:89] "kube-proxy-wgjdg" [b2646cb6-ecf8-4e44-9d48-b49eead7d727] Running
	I0930 19:40:05.637399   15584 system_pods.go:89] "kube-scheduler-addons-857381" [952cc18b-d292-4baa-8a03-dce05fdabe5c] Running
	I0930 19:40:05.637405   15584 system_pods.go:89] "metrics-server-84c5f94fbc-cdn25" [b344652c-decb-4b68-9eb4-dd034008cf98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 19:40:05.637410   15584 system_pods.go:89] "nvidia-device-plugin-daemonset-9vf5l" [f2848172-eec4-47cc-9e9d-36026e22b55c] Running
	I0930 19:40:05.637416   15584 system_pods.go:89] "registry-66c9cd494c-frqrv" [e66e6fb9-7274-4a0b-b787-c64abc8ffe04] Running
	I0930 19:40:05.637423   15584 system_pods.go:89] "registry-proxy-m2j7k" [cf0e9fcc-d5e3-4dd8-8337-406b07ab9495] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0930 19:40:05.637433   15584 system_pods.go:89] "snapshot-controller-56fcc65765-g26cx" [0a7563fa-d127-473c-b9a1-ece459d51ec0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 19:40:05.637446   15584 system_pods.go:89] "snapshot-controller-56fcc65765-vqjbn" [68d33976-a421-4696-83a7-303c2bf65ba3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 19:40:05.637453   15584 system_pods.go:89] "storage-provisioner" [cf253e6d-52dd-4bbf-a505-61269b1bb4d1] Running
	I0930 19:40:05.637460   15584 system_pods.go:126] duration metric: took 204.735253ms to wait for k8s-apps to be running ...
	I0930 19:40:05.637471   15584 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 19:40:05.637512   15584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 19:40:05.664635   15584 system_svc.go:56] duration metric: took 27.157381ms WaitForService to wait for kubelet
	I0930 19:40:05.664667   15584 kubeadm.go:582] duration metric: took 39.795308561s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 19:40:05.664684   15584 node_conditions.go:102] verifying NodePressure condition ...
	I0930 19:40:05.806621   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:05.809736   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:05.833501   15584 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 19:40:05.833531   15584 node_conditions.go:123] node cpu capacity is 2
	I0930 19:40:05.833544   15584 node_conditions.go:105] duration metric: took 168.855642ms to run NodePressure ...
	I0930 19:40:05.833558   15584 start.go:241] waiting for startup goroutines ...
	I0930 19:40:05.838853   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:06.201378   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:06.305678   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:06.309215   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:06.338426   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:06.537088   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:06.805556   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:06.807670   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:06.837888   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:07.037594   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:07.306997   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:07.308373   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:07.339605   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:07.537323   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:07.806225   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:07.808962   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:07.840424   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:08.038714   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:08.315435   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:08.316984   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:08.338567   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:08.539077   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:08.806404   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:08.807794   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:08.838111   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:09.039411   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:09.306781   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:09.308706   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:09.338817   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:09.541907   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:09.806151   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:09.808679   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:09.839864   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:10.037757   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:10.306476   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:10.309294   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:10.338729   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:10.537365   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:10.806186   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:10.808553   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:10.838954   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:11.038197   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:11.305362   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:11.307868   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:11.338450   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:11.537023   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:11.805980   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:11.807997   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:11.838687   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:12.038101   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:12.305891   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:12.308058   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:12.338527   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:12.537006   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:12.805026   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:12.807440   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:12.838745   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:13.036973   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:13.316029   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:13.316819   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:13.339318   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:13.537656   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:13.806393   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:13.809221   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:13.838943   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:14.036710   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:14.305575   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:14.307510   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:14.339024   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:14.746118   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:14.805546   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:14.808182   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:14.839255   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:15.038456   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:15.306259   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:15.308763   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:15.338218   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:15.537663   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:15.806502   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:15.809322   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:15.838920   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:16.038201   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:16.305842   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:16.308119   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:16.338442   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:16.536865   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:16.806565   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:16.809083   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:16.839057   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:17.037476   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:17.306218   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:17.308220   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:17.338656   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:17.538612   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:17.806377   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:17.808904   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:17.838105   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:18.037920   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:18.306007   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:18.308381   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:18.338711   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:18.537393   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:18.806335   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:18.809582   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:18.840209   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:19.036945   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:19.306469   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:19.308307   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:19.338954   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:19.537674   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:19.806934   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:19.808546   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:19.839444   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:20.037215   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:20.305907   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:20.308689   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:20.339344   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:20.538374   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:20.808450   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:20.808767   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:20.839145   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:21.037658   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:21.306332   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:21.310114   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:21.341224   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:21.537216   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:21.806169   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:21.808637   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:21.842275   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:22.038267   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:22.305922   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:22.308301   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:22.342967   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:22.537729   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:22.810668   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 19:40:22.811005   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:22.839120   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:23.037454   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:23.306993   15584 kapi.go:107] duration metric: took 49.005242803s to wait for kubernetes.io/minikube-addons=registry ...
	I0930 19:40:23.308292   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:23.340880   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:23.537538   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:23.808649   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:23.838719   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:24.037027   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:24.311020   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:24.339930   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:24.537448   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:24.808165   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:24.840330   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:25.038012   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:25.310485   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:25.338594   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:25.537562   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:25.808768   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:25.840491   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:26.337884   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:26.339802   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:26.342878   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:26.538146   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:26.810441   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:26.911692   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:27.037138   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:27.307981   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:27.338514   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:27.537541   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:27.808034   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:27.838767   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:28.037949   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:28.315914   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:28.346567   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:28.539119   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:28.808853   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:28.838437   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:29.036989   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:29.308729   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:29.339702   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:29.537814   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:29.808942   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:29.841777   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:30.038084   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:30.307636   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:30.339110   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:30.538667   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:30.808685   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:30.838911   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:31.037786   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:31.309187   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:31.338193   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:31.538062   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:31.810154   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:31.844570   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:32.036891   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:32.309059   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:32.338920   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:32.538629   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:32.811819   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:32.840003   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:33.298376   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:33.314136   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:33.405537   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:33.536782   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:33.810211   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:33.838557   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:34.038758   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:34.308572   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:34.338993   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:34.538664   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:34.809265   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:34.838824   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:35.038820   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:35.309811   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:35.338667   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:35.538473   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:35.809185   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:35.840427   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:36.037848   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:36.309172   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:36.344741   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:36.537522   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:36.815421   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:36.846933   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:37.038118   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:37.307913   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:37.339870   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:37.545907   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:37.809630   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:37.838804   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:38.036948   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:38.319878   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:38.342775   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:38.537998   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:38.809824   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:38.915083   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:39.041765   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:39.309331   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:39.342044   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:39.537640   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:39.808078   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:39.838346   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:40.036732   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:40.309104   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:40.338364   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 19:40:40.544312   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:40.808442   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:40.909737   15584 kapi.go:107] duration metric: took 1m5.075684221s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0930 19:40:41.037117   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:41.307717   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:41.538444   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:41.808544   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:42.037764   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:42.308953   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:42.538432   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:42.808497   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:43.038173   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:43.309165   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:43.537280   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:43.808012   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:44.037523   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:44.308211   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:45.043029   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:45.043273   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:45.047140   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:45.308014   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:45.537537   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:45.808735   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:46.037888   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:46.309235   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:46.537513   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:46.808314   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:47.038548   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:47.308644   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:47.538083   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:47.807931   15584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 19:40:48.038183   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:48.308144   15584 kapi.go:107] duration metric: took 1m14.004175846s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0930 19:40:48.538107   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:49.038498   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:49.537789   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:50.038155   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:50.613944   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:51.038032   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:51.537506   15584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 19:40:52.040616   15584 kapi.go:107] duration metric: took 1m14.506956805s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0930 19:40:52.041976   15584 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-857381 cluster.
	I0930 19:40:52.043243   15584 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0930 19:40:52.044410   15584 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0930 19:40:52.045758   15584 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, cloud-spanner, storage-provisioner, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0930 19:40:52.046831   15584 addons.go:510] duration metric: took 1m26.177460547s for enable addons: enabled=[ingress-dns nvidia-device-plugin cloud-spanner storage-provisioner inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0930 19:40:52.046869   15584 start.go:246] waiting for cluster config update ...
	I0930 19:40:52.046883   15584 start.go:255] writing updated cluster config ...
	I0930 19:40:52.047117   15584 ssh_runner.go:195] Run: rm -f paused
	I0930 19:40:52.098683   15584 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 19:40:52.100271   15584 out.go:177] * Done! kubectl is now configured to use "addons-857381" cluster and "default" namespace by default
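As a supplement to the gcp-auth notes in the log above, here is a minimal sketch of how a pod can opt out of credential mounting via the `gcp-auth-skip-secret` label key that the log mentions. The pod name `skip-gcp-demo`, the busybox sleep command, and the label value `true` are illustrative assumptions, not taken from this run; only the label key and the --refresh suggestion come from the log itself.

    # Hypothetical pod that should NOT get GCP credentials mounted:
    # per the log message above, the gcp-auth webhook skips pods that
    # carry the gcp-auth-skip-secret label key.
    kubectl --context addons-857381 run skip-gcp-demo \
      --image=gcr.io/k8s-minikube/busybox \
      --labels=gcp-auth-skip-secret=true \
      --restart=Never -- sleep 300

    # For pods that already existed before the addon was enabled, the log
    # suggests rerunning addons enable with --refresh (or recreating them):
    minikube -p addons-857381 addons enable gcp-auth --refresh

This is only a sketch of the behaviour described by the out.go messages above; the authoritative reference is the minikube gcp-auth addon documentation.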
	
	
	==> CRI-O <==
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.748704134Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726067748672464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1ce228e-1fca-4bad-80c8-cd66ce036b5e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.749431284Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82b794b9-8e40-4fca-aad9-94965264fba2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.749645924Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82b794b9-8e40-4fca-aad9-94965264fba2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.749907464Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d877c2fce2507c1237bb109f59dd388cf7efbb0f76a1402526c779fe7140764,PodSandboxId:5f918ee4dd435117ab962a7aba5a72be46d9c77da93ecebd3656ecafc581b67e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727725955386657028,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-g2hjs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ba8083f-a0ac-459b-8296-63da132aaac1,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:379432eba48bb0fbab152d3f1013d9b37a95e87e739158fb313fa0b78ff8e264,PodSandboxId:7186dc43443428dc9dd097d0de0b6842c2db7d0aa646939da7dfdcaa6c1fd4a9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727725814733655838,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b659e53f-9c5e-499b-b386-a5be26a79083,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a550a25e9f7b3586687046f535b548383c78708b97eaeed7576b35b5dcee1ef,PodSandboxId:2927b71f84ff3f76f3a52a1aecbd72a68cfa19e0cdca879f3210c117c839294f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727725251528262837,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-scvnm,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 5e438281-5451-4290-8c50-14fb79a66185,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6b2eb356f364b36c053fa5a0a1c21d994a9edc83b54fdd58a38023aea0e8013,PodSandboxId:5d866c50845926549f01df87a9908307213fc5caa20603d75bdd4c898c23d1c3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727725209633050557,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-cdn25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b344652c-decb-4b68-9eb4-dd034008cf98,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34fdddbc2729cc844420cf24fc3341fed3211c151111cf0f43b8a87ed1b078ab,PodSandboxId:44e738ed93b01a10a8ff2fe7b585def59079d101143e4555486329cd7fcc73b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1727725171524308003,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf253e6d-52dd-4bbf-a505-61269b1bb4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2f669f59ff8429d81fb4f5162e27ce06e17473d4605e0d1412e6b895b9ffec,PodSandboxId:7264dffbc56c756580b1699b46a98d026060043f7ded85528176c4468f3e54d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727725169
673865152,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2sl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ef3332d-3ee7-4d76-bbef-2dfc99673515,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4a5712da231889676b696f91670decbc5f5f8c36b118a9dc265d962f5d249a,PodSandboxId:cbd8bbc0b830527874fdbef734642c050e7e6a62986ee8cdf383f82424b3b1c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727725167873622399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wgjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2646cb6-ecf8-4e44-9d48-b49eead7d727,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611b55895a7c3a5335fbb46b041625f86ca6d6031352bcde4b032dab9de47e67,PodSandboxId:472730560a69cb865a7de097b81e5d7c46896bf3dfef03d491afa5c9add05b76,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727725156408359954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 509234ffc60223733ef52b2009dbce73,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f613c2d90480ee1ae214e03080c452973dd772a7c6f008a8764350f7e1943eb,PodSandboxId:45990caa9ec749761565324cc3ffda13e0181f617a83701013fa0c2c91467ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727725156391153567,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 462c1efc125130690ce0abe7c0d6a433,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f054c208a5bd0eb1494d0e174024a758694fd0eca27fb153e9b6b1ba005ff377,PodSandboxId:f599de907322667aeed83b2705fea682b338d49da5ee13de1790e02e7e4e8a99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727725156395714900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c22ddcce59702bad76d277171c4f1a8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6ba6b23751a363846407405c025305c70dc80dbf68869142a0ee6929093b01e,PodSandboxId:329303fea433cc4c43cb1ec6a4a7d52fafbb483b77613fefca8466b49fcac7b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727725156374738044,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aaf74d96d0249f06846b94c74ecc9cd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82b794b9-8e40-4fca-aad9-94965264fba2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.790144463Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc34fd22-5508-4649-9950-bcba86889359 name=/runtime.v1.RuntimeService/Version
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.790318250Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc34fd22-5508-4649-9950-bcba86889359 name=/runtime.v1.RuntimeService/Version
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.792005982Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8a03b5a-2f05-4020-a039-e45237a751ce name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.793585880Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726067793557259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8a03b5a-2f05-4020-a039-e45237a751ce name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.794790948Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1cd11ec-7278-48fc-9f0e-a4fca3c931c4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.794852971Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1cd11ec-7278-48fc-9f0e-a4fca3c931c4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.795149065Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d877c2fce2507c1237bb109f59dd388cf7efbb0f76a1402526c779fe7140764,PodSandboxId:5f918ee4dd435117ab962a7aba5a72be46d9c77da93ecebd3656ecafc581b67e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727725955386657028,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-g2hjs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ba8083f-a0ac-459b-8296-63da132aaac1,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:379432eba48bb0fbab152d3f1013d9b37a95e87e739158fb313fa0b78ff8e264,PodSandboxId:7186dc43443428dc9dd097d0de0b6842c2db7d0aa646939da7dfdcaa6c1fd4a9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727725814733655838,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b659e53f-9c5e-499b-b386-a5be26a79083,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a550a25e9f7b3586687046f535b548383c78708b97eaeed7576b35b5dcee1ef,PodSandboxId:2927b71f84ff3f76f3a52a1aecbd72a68cfa19e0cdca879f3210c117c839294f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727725251528262837,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-scvnm,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 5e438281-5451-4290-8c50-14fb79a66185,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6b2eb356f364b36c053fa5a0a1c21d994a9edc83b54fdd58a38023aea0e8013,PodSandboxId:5d866c50845926549f01df87a9908307213fc5caa20603d75bdd4c898c23d1c3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727725209633050557,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-cdn25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b344652c-decb-4b68-9eb4-dd034008cf98,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34fdddbc2729cc844420cf24fc3341fed3211c151111cf0f43b8a87ed1b078ab,PodSandboxId:44e738ed93b01a10a8ff2fe7b585def59079d101143e4555486329cd7fcc73b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1727725171524308003,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf253e6d-52dd-4bbf-a505-61269b1bb4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2f669f59ff8429d81fb4f5162e27ce06e17473d4605e0d1412e6b895b9ffec,PodSandboxId:7264dffbc56c756580b1699b46a98d026060043f7ded85528176c4468f3e54d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727725169
673865152,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2sl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ef3332d-3ee7-4d76-bbef-2dfc99673515,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4a5712da231889676b696f91670decbc5f5f8c36b118a9dc265d962f5d249a,PodSandboxId:cbd8bbc0b830527874fdbef734642c050e7e6a62986ee8cdf383f82424b3b1c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727725167873622399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wgjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2646cb6-ecf8-4e44-9d48-b49eead7d727,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611b55895a7c3a5335fbb46b041625f86ca6d6031352bcde4b032dab9de47e67,PodSandboxId:472730560a69cb865a7de097b81e5d7c46896bf3dfef03d491afa5c9add05b76,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727725156408359954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 509234ffc60223733ef52b2009dbce73,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f613c2d90480ee1ae214e03080c452973dd772a7c6f008a8764350f7e1943eb,PodSandboxId:45990caa9ec749761565324cc3ffda13e0181f617a83701013fa0c2c91467ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727725156391153567,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 462c1efc125130690ce0abe7c0d6a433,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f054c208a5bd0eb1494d0e174024a758694fd0eca27fb153e9b6b1ba005ff377,PodSandboxId:f599de907322667aeed83b2705fea682b338d49da5ee13de1790e02e7e4e8a99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727725156395714900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c22ddcce59702bad76d277171c4f1a8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6ba6b23751a363846407405c025305c70dc80dbf68869142a0ee6929093b01e,PodSandboxId:329303fea433cc4c43cb1ec6a4a7d52fafbb483b77613fefca8466b49fcac7b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727725156374738044,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aaf74d96d0249f06846b94c74ecc9cd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1cd11ec-7278-48fc-9f0e-a4fca3c931c4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.831102200Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a002f095-670e-42cb-acc6-c8e95a7b522e name=/runtime.v1.RuntimeService/Version
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.831191623Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a002f095-670e-42cb-acc6-c8e95a7b522e name=/runtime.v1.RuntimeService/Version
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.832930529Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0d6c3b02-bc5a-4abd-ac31-3826e7a266c0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.834075608Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726067834039703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d6c3b02-bc5a-4abd-ac31-3826e7a266c0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.834770993Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d14a9f8d-a0c8-4714-82ad-a90f4fe0697c name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.834832151Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d14a9f8d-a0c8-4714-82ad-a90f4fe0697c name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.835083687Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d877c2fce2507c1237bb109f59dd388cf7efbb0f76a1402526c779fe7140764,PodSandboxId:5f918ee4dd435117ab962a7aba5a72be46d9c77da93ecebd3656ecafc581b67e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727725955386657028,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-g2hjs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ba8083f-a0ac-459b-8296-63da132aaac1,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:379432eba48bb0fbab152d3f1013d9b37a95e87e739158fb313fa0b78ff8e264,PodSandboxId:7186dc43443428dc9dd097d0de0b6842c2db7d0aa646939da7dfdcaa6c1fd4a9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727725814733655838,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b659e53f-9c5e-499b-b386-a5be26a79083,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a550a25e9f7b3586687046f535b548383c78708b97eaeed7576b35b5dcee1ef,PodSandboxId:2927b71f84ff3f76f3a52a1aecbd72a68cfa19e0cdca879f3210c117c839294f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727725251528262837,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-scvnm,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 5e438281-5451-4290-8c50-14fb79a66185,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6b2eb356f364b36c053fa5a0a1c21d994a9edc83b54fdd58a38023aea0e8013,PodSandboxId:5d866c50845926549f01df87a9908307213fc5caa20603d75bdd4c898c23d1c3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727725209633050557,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-cdn25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b344652c-decb-4b68-9eb4-dd034008cf98,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34fdddbc2729cc844420cf24fc3341fed3211c151111cf0f43b8a87ed1b078ab,PodSandboxId:44e738ed93b01a10a8ff2fe7b585def59079d101143e4555486329cd7fcc73b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1727725171524308003,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf253e6d-52dd-4bbf-a505-61269b1bb4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2f669f59ff8429d81fb4f5162e27ce06e17473d4605e0d1412e6b895b9ffec,PodSandboxId:7264dffbc56c756580b1699b46a98d026060043f7ded85528176c4468f3e54d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727725169
673865152,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2sl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ef3332d-3ee7-4d76-bbef-2dfc99673515,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4a5712da231889676b696f91670decbc5f5f8c36b118a9dc265d962f5d249a,PodSandboxId:cbd8bbc0b830527874fdbef734642c050e7e6a62986ee8cdf383f82424b3b1c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727725167873622399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wgjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2646cb6-ecf8-4e44-9d48-b49eead7d727,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611b55895a7c3a5335fbb46b041625f86ca6d6031352bcde4b032dab9de47e67,PodSandboxId:472730560a69cb865a7de097b81e5d7c46896bf3dfef03d491afa5c9add05b76,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727725156408359954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 509234ffc60223733ef52b2009dbce73,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f613c2d90480ee1ae214e03080c452973dd772a7c6f008a8764350f7e1943eb,PodSandboxId:45990caa9ec749761565324cc3ffda13e0181f617a83701013fa0c2c91467ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727725156391153567,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 462c1efc125130690ce0abe7c0d6a433,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f054c208a5bd0eb1494d0e174024a758694fd0eca27fb153e9b6b1ba005ff377,PodSandboxId:f599de907322667aeed83b2705fea682b338d49da5ee13de1790e02e7e4e8a99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727725156395714900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c22ddcce59702bad76d277171c4f1a8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6ba6b23751a363846407405c025305c70dc80dbf68869142a0ee6929093b01e,PodSandboxId:329303fea433cc4c43cb1ec6a4a7d52fafbb483b77613fefca8466b49fcac7b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727725156374738044,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aaf74d96d0249f06846b94c74ecc9cd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d14a9f8d-a0c8-4714-82ad-a90f4fe0697c name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.874384446Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2e1840ea-e6c7-49fa-afec-7eb85661be91 name=/runtime.v1.RuntimeService/Version
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.874514297Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2e1840ea-e6c7-49fa-afec-7eb85661be91 name=/runtime.v1.RuntimeService/Version
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.875633477Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0dc59403-8daa-436d-b589-0713d75d9aaf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.876774971Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726067876749267,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0dc59403-8daa-436d-b589-0713d75d9aaf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.877552629Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b02445be-d845-4f6e-a53d-e33e583b9621 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.877635417Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b02445be-d845-4f6e-a53d-e33e583b9621 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 19:54:27 addons-857381 crio[658]: time="2024-09-30 19:54:27.877885820Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d877c2fce2507c1237bb109f59dd388cf7efbb0f76a1402526c779fe7140764,PodSandboxId:5f918ee4dd435117ab962a7aba5a72be46d9c77da93ecebd3656ecafc581b67e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727725955386657028,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-g2hjs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ba8083f-a0ac-459b-8296-63da132aaac1,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:379432eba48bb0fbab152d3f1013d9b37a95e87e739158fb313fa0b78ff8e264,PodSandboxId:7186dc43443428dc9dd097d0de0b6842c2db7d0aa646939da7dfdcaa6c1fd4a9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727725814733655838,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b659e53f-9c5e-499b-b386-a5be26a79083,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a550a25e9f7b3586687046f535b548383c78708b97eaeed7576b35b5dcee1ef,PodSandboxId:2927b71f84ff3f76f3a52a1aecbd72a68cfa19e0cdca879f3210c117c839294f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727725251528262837,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-scvnm,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 5e438281-5451-4290-8c50-14fb79a66185,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6b2eb356f364b36c053fa5a0a1c21d994a9edc83b54fdd58a38023aea0e8013,PodSandboxId:5d866c50845926549f01df87a9908307213fc5caa20603d75bdd4c898c23d1c3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727725209633050557,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-cdn25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b344652c-decb-4b68-9eb4-dd034008cf98,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34fdddbc2729cc844420cf24fc3341fed3211c151111cf0f43b8a87ed1b078ab,PodSandboxId:44e738ed93b01a10a8ff2fe7b585def59079d101143e4555486329cd7fcc73b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1727725171524308003,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf253e6d-52dd-4bbf-a505-61269b1bb4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2f669f59ff8429d81fb4f5162e27ce06e17473d4605e0d1412e6b895b9ffec,PodSandboxId:7264dffbc56c756580b1699b46a98d026060043f7ded85528176c4468f3e54d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727725169
673865152,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2sl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ef3332d-3ee7-4d76-bbef-2dfc99673515,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4a5712da231889676b696f91670decbc5f5f8c36b118a9dc265d962f5d249a,PodSandboxId:cbd8bbc0b830527874fdbef734642c050e7e6a62986ee8cdf383f82424b3b1c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727725167873622399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wgjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2646cb6-ecf8-4e44-9d48-b49eead7d727,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611b55895a7c3a5335fbb46b041625f86ca6d6031352bcde4b032dab9de47e67,PodSandboxId:472730560a69cb865a7de097b81e5d7c46896bf3dfef03d491afa5c9add05b76,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727725156408359954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 509234ffc60223733ef52b2009dbce73,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f613c2d90480ee1ae214e03080c452973dd772a7c6f008a8764350f7e1943eb,PodSandboxId:45990caa9ec749761565324cc3ffda13e0181f617a83701013fa0c2c91467ca5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727725156391153567,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 462c1efc125130690ce0abe7c0d6a433,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f054c208a5bd0eb1494d0e174024a758694fd0eca27fb153e9b6b1ba005ff377,PodSandboxId:f599de907322667aeed83b2705fea682b338d49da5ee13de1790e02e7e4e8a99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727725156395714900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c22ddcce59702bad76d277171c4f1a8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6ba6b23751a363846407405c025305c70dc80dbf68869142a0ee6929093b01e,PodSandboxId:329303fea433cc4c43cb1ec6a4a7d52fafbb483b77613fefca8466b49fcac7b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727725156374738044,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-857381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aaf74d96d0249f06846b94c74ecc9cd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b02445be-d845-4f6e-a53d-e33e583b9621 name=/runtime.v1.RuntimeService/ListContainers
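The repeated Version / ImageFsInfo / ListContainers round-trips above are routine CRI polling against the crio socket; the reported container list is unchanged between requests. For reference, the same three RPCs can be issued by hand with crictl. This is an illustrative sketch only: it assumes crictl is available inside the node (e.g. via minikube ssh) and uses the cri-socket path reported for this node further below.

    # illustrative only: issue the same CRI calls seen in the debug log above
    minikube ssh -p addons-857381
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version      # RuntimeService/Version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # ImageService/ImageFsInfo
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # RuntimeService/ListContainers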
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	3d877c2fce250       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   About a minute ago   Running             hello-world-app           0                   5f918ee4dd435       hello-world-app-55bf9c44b4-g2hjs
	379432eba48bb       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                         4 minutes ago        Running             nginx                     0                   7186dc4344342       nginx
	0a550a25e9f7b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            13 minutes ago       Running             gcp-auth                  0                   2927b71f84ff3       gcp-auth-89d5ffd79-scvnm
	c6b2eb356f364       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   14 minutes ago       Running             metrics-server            0                   5d866c5084592       metrics-server-84c5f94fbc-cdn25
	34fdddbc2729c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        14 minutes ago       Running             storage-provisioner       0                   44e738ed93b01       storage-provisioner
	8a2f669f59ff8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        14 minutes ago       Running             coredns                   0                   7264dffbc56c7       coredns-7c65d6cfc9-v2sl5
	cd4a5712da231       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        15 minutes ago       Running             kube-proxy                0                   cbd8bbc0b8305       kube-proxy-wgjdg
	611b55895a7c3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        15 minutes ago       Running             etcd                      0                   472730560a69c       etcd-addons-857381
	f054c208a5bd0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        15 minutes ago       Running             kube-controller-manager   0                   f599de9073226       kube-controller-manager-addons-857381
	0f613c2d90480       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        15 minutes ago       Running             kube-scheduler            0                   45990caa9ec74       kube-scheduler-addons-857381
	e6ba6b23751a3       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        15 minutes ago       Running             kube-apiserver            0                   329303fea433c       kube-apiserver-addons-857381
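The container IDs and pod names in this table can be cross-checked from the Kubernetes side. A hedged example, assuming the addons-857381 kubectl context used by this run is still available:

    # illustrative only: map a pod back to a container ID shown above
    kubectl --context addons-857381 get pods -A -o wide
    kubectl --context addons-857381 -n kube-system get pod metrics-server-84c5f94fbc-cdn25 \
      -o jsonpath='{.status.containerStatuses[0].containerID}'   # expected to print cri-o://c6b2eb356f364...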
	
	
	==> coredns [8a2f669f59ff8429d81fb4f5162e27ce06e17473d4605e0d1412e6b895b9ffec] <==
	[INFO] 127.0.0.1:57266 - 46113 "HINFO IN 4563711597832070733.7464152516972830378. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012863189s
	[INFO] 10.244.0.7:41266 - 20553 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.000327187s
	[INFO] 10.244.0.7:41266 - 47123 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.0007627s
	[INFO] 10.244.0.7:41266 - 44256 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000120493s
	[INFO] 10.244.0.7:41266 - 8839 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000082266s
	[INFO] 10.244.0.7:41266 - 45651 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000085479s
	[INFO] 10.244.0.7:41266 - 55882 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000231828s
	[INFO] 10.244.0.7:41266 - 16528 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000127235s
	[INFO] 10.244.0.7:41266 - 22884 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000079062s
	[INFO] 10.244.0.7:58608 - 46632 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000093178s
	[INFO] 10.244.0.7:58608 - 46894 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000048081s
	[INFO] 10.244.0.7:53470 - 3911 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066274s
	[INFO] 10.244.0.7:53470 - 3656 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000054504s
	[INFO] 10.244.0.7:34130 - 26559 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059796s
	[INFO] 10.244.0.7:34130 - 26354 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043427s
	[INFO] 10.244.0.7:40637 - 48484 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000044485s
	[INFO] 10.244.0.7:40637 - 48313 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000050997s
	[INFO] 10.244.0.21:43040 - 43581 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00046625s
	[INFO] 10.244.0.21:55023 - 19308 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000074371s
	[INFO] 10.244.0.21:45685 - 26448 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000122686s
	[INFO] 10.244.0.21:43520 - 19830 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000076449s
	[INFO] 10.244.0.21:37619 - 36517 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000132562s
	[INFO] 10.244.0.21:43029 - 472 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000279272s
	[INFO] 10.244.0.21:58516 - 17196 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002205188s
	[INFO] 10.244.0.21:42990 - 49732 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002642341s
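The NXDOMAIN bursts above are not CoreDNS errors: with the default ClusterFirst DNS policy and ndots:5, a name such as registry.kube-system.svc.cluster.local (four dots) is first expanded through the pod's search domains, and only the final absolute query returns NOERROR, exactly as logged. A representative resolv.conf for a pod in the kube-system namespace is sketched below; the nameserver address is the conventional kube-dns ClusterIP and was not captured in this log.

    # illustrative pod /etc/resolv.conf under ClusterFirst DNS policy (assumed, not captured here)
    search kube-system.svc.cluster.local svc.cluster.local cluster.local
    nameserver 10.96.0.10
    options ndots:5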
	
	
	==> describe nodes <==
	Name:               addons-857381
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-857381
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=addons-857381
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T19_39_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-857381
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 19:39:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-857381
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 19:54:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 19:52:58 +0000   Mon, 30 Sep 2024 19:39:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 19:52:58 +0000   Mon, 30 Sep 2024 19:39:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 19:52:58 +0000   Mon, 30 Sep 2024 19:39:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 19:52:58 +0000   Mon, 30 Sep 2024 19:39:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.16
	  Hostname:    addons-857381
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 25d9982bd002458384094f49961bbdf8
	  System UUID:                25d9982b-d002-4583-8409-4f49961bbdf8
	  Boot ID:                    b5f01af6-3227-4822-ba41-5ad95d8a7eaf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-g2hjs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  gcp-auth                    gcp-auth-89d5ffd79-scvnm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-7c65d6cfc9-v2sl5                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 etcd-addons-857381                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-857381             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-857381    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-wgjdg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-857381             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node addons-857381 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node addons-857381 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node addons-857381 status is now: NodeHasSufficientPID
	  Normal  NodeReady                15m   kubelet          Node addons-857381 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node addons-857381 event: Registered Node addons-857381 in Controller
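This section corresponds to kubectl's node description; the request percentages are computed against the allocatable capacity shown above (for example, 750m CPU requests out of 2000m allocatable is roughly 37%). Assuming the test context is still present, the same view can be regenerated with:

    # illustrative only: regenerate the node description for this profile
    kubectl --context addons-857381 describe node addons-857381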
	
	
	==> dmesg <==
	[  +0.986942] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.027017] kauditd_printk_skb: 123 callbacks suppressed
	[  +5.125991] kauditd_printk_skb: 110 callbacks suppressed
	[ +10.689942] kauditd_printk_skb: 62 callbacks suppressed
	[Sep30 19:40] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.674801] kauditd_printk_skb: 24 callbacks suppressed
	[ +12.773296] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.640929] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.302122] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.224814] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.475445] kauditd_printk_skb: 25 callbacks suppressed
	[  +8.472390] kauditd_printk_skb: 6 callbacks suppressed
	[Sep30 19:41] kauditd_printk_skb: 6 callbacks suppressed
	[Sep30 19:49] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.016133] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.573920] kauditd_printk_skb: 13 callbacks suppressed
	[ +17.576553] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.137626] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.049333] kauditd_printk_skb: 15 callbacks suppressed
	[  +9.481275] kauditd_printk_skb: 64 callbacks suppressed
	[Sep30 19:50] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.792200] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.011966] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.478236] kauditd_printk_skb: 3 callbacks suppressed
	[Sep30 19:52] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [611b55895a7c3a5335fbb46b041625f86ca6d6031352bcde4b032dab9de47e67] <==
	{"level":"info","ts":"2024-09-30T19:40:45.024367Z","caller":"traceutil/trace.go:171","msg":"trace[665645630] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1080; }","duration":"268.162297ms","start":"2024-09-30T19:40:44.756192Z","end":"2024-09-30T19:40:45.024355Z","steps":["trace[665645630] 'range keys from in-memory index tree'  (duration: 267.637437ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:40:45.024639Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"228.464576ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T19:40:45.024814Z","caller":"traceutil/trace.go:171","msg":"trace[1197247651] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1080; }","duration":"228.698131ms","start":"2024-09-30T19:40:44.795971Z","end":"2024-09-30T19:40:45.024669Z","steps":["trace[1197247651] 'range keys from in-memory index tree'  (duration: 228.42242ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:40:45.024764Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"509.83424ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T19:40:45.024932Z","caller":"traceutil/trace.go:171","msg":"trace[1982350029] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:1080; }","duration":"510.003594ms","start":"2024-09-30T19:40:44.514921Z","end":"2024-09-30T19:40:45.024925Z","steps":["trace[1982350029] 'count revisions from in-memory index tree'  (duration: 509.784533ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:40:45.024960Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T19:40:44.514884Z","time spent":"510.067329ms","remote":"127.0.0.1:40802","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":28,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true "}
	{"level":"warn","ts":"2024-09-30T19:40:45.025722Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.967655ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T19:40:45.025980Z","caller":"traceutil/trace.go:171","msg":"trace[1921436978] range","detail":"{range_begin:/registry/validatingadmissionpolicybindings/; range_end:/registry/validatingadmissionpolicybindings0; response_count:0; response_revision:1080; }","duration":"103.205459ms","start":"2024-09-30T19:40:44.922740Z","end":"2024-09-30T19:40:45.025946Z","steps":["trace[1921436978] 'count revisions from in-memory index tree'  (duration: 102.824591ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:40:45.027664Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"503.881417ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T19:40:45.027743Z","caller":"traceutil/trace.go:171","msg":"trace[1637850638] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1080; }","duration":"503.963128ms","start":"2024-09-30T19:40:44.523772Z","end":"2024-09-30T19:40:45.027735Z","steps":["trace[1637850638] 'range keys from in-memory index tree'  (duration: 503.748159ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:40:45.027813Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T19:40:44.523734Z","time spent":"504.023771ms","remote":"127.0.0.1:40756","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-09-30T19:49:17.549046Z","caller":"traceutil/trace.go:171","msg":"trace[477247537] linearizableReadLoop","detail":"{readStateIndex:2110; appliedIndex:2109; }","duration":"332.343416ms","start":"2024-09-30T19:49:17.216678Z","end":"2024-09-30T19:49:17.549021Z","steps":["trace[477247537] 'read index received'  (duration: 332.162445ms)","trace[477247537] 'applied index is now lower than readState.Index'  (duration: 180.324µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-30T19:49:17.549230Z","caller":"traceutil/trace.go:171","msg":"trace[487588167] transaction","detail":"{read_only:false; response_revision:1964; number_of_response:1; }","duration":"416.883354ms","start":"2024-09-30T19:49:17.132337Z","end":"2024-09-30T19:49:17.549220Z","steps":["trace[487588167] 'process raft request'  (duration: 416.547999ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:49:17.549391Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T19:49:17.132321Z","time spent":"416.931927ms","remote":"127.0.0.1:40530","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":25,"response count":0,"response size":39,"request content":"compare:<key:\"compact_rev_key\" version:1 > success:<request_put:<key:\"compact_rev_key\" value_size:4 >> failure:<request_range:<key:\"compact_rev_key\" > >"}
	{"level":"warn","ts":"2024-09-30T19:49:17.549718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.534401ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T19:49:17.550030Z","caller":"traceutil/trace.go:171","msg":"trace[1640880640] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1964; }","duration":"189.856846ms","start":"2024-09-30T19:49:17.360156Z","end":"2024-09-30T19:49:17.550013Z","steps":["trace[1640880640] 'agreement among raft nodes before linearized reading'  (duration: 189.426494ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:49:17.549806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"333.130902ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/headlamp\" ","response":"range_response_count:1 size:596"}
	{"level":"info","ts":"2024-09-30T19:49:17.550375Z","caller":"traceutil/trace.go:171","msg":"trace[30748080] range","detail":"{range_begin:/registry/namespaces/headlamp; range_end:; response_count:1; response_revision:1964; }","duration":"333.699236ms","start":"2024-09-30T19:49:17.216666Z","end":"2024-09-30T19:49:17.550366Z","steps":["trace[30748080] 'agreement among raft nodes before linearized reading'  (duration: 333.066743ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T19:49:17.550527Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T19:49:17.216625Z","time spent":"333.888235ms","remote":"127.0.0.1:40674","response type":"/etcdserverpb.KV/Range","request count":0,"request size":31,"response count":1,"response size":619,"request content":"key:\"/registry/namespaces/headlamp\" "}
	{"level":"info","ts":"2024-09-30T19:49:17.560569Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1467}
	{"level":"info","ts":"2024-09-30T19:49:17.674282Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1467,"took":"113.151678ms","hash":2336021825,"current-db-size-bytes":6635520,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":3395584,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2024-09-30T19:49:17.674799Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2336021825,"revision":1467,"compact-revision":-1}
	{"level":"info","ts":"2024-09-30T19:54:17.571064Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1964}
	{"level":"info","ts":"2024-09-30T19:54:17.593577Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1964,"took":"21.851065ms","hash":789509696,"current-db-size-bytes":6635520,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":4939776,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-30T19:54:17.593684Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":789509696,"revision":1964,"compact-revision":1467}
	
	
	==> gcp-auth [0a550a25e9f7b3586687046f535b548383c78708b97eaeed7576b35b5dcee1ef] <==
	2024/09/30 19:40:55 Ready to write response ...
	2024/09/30 19:40:55 Ready to marshal response ...
	2024/09/30 19:40:55 Ready to write response ...
	2024/09/30 19:48:58 Ready to marshal response ...
	2024/09/30 19:48:58 Ready to write response ...
	2024/09/30 19:48:58 Ready to marshal response ...
	2024/09/30 19:48:58 Ready to write response ...
	2024/09/30 19:48:58 Ready to marshal response ...
	2024/09/30 19:48:58 Ready to write response ...
	2024/09/30 19:49:08 Ready to marshal response ...
	2024/09/30 19:49:08 Ready to write response ...
	2024/09/30 19:49:10 Ready to marshal response ...
	2024/09/30 19:49:10 Ready to write response ...
	2024/09/30 19:49:35 Ready to marshal response ...
	2024/09/30 19:49:35 Ready to write response ...
	2024/09/30 19:49:35 Ready to marshal response ...
	2024/09/30 19:49:35 Ready to write response ...
	2024/09/30 19:49:38 Ready to marshal response ...
	2024/09/30 19:49:38 Ready to write response ...
	2024/09/30 19:49:48 Ready to marshal response ...
	2024/09/30 19:49:48 Ready to write response ...
	2024/09/30 19:50:12 Ready to marshal response ...
	2024/09/30 19:50:12 Ready to write response ...
	2024/09/30 19:52:32 Ready to marshal response ...
	2024/09/30 19:52:32 Ready to write response ...
	
	
	==> kernel <==
	 19:54:28 up 15 min,  0 users,  load average: 0.08, 0.41, 0.42
	Linux addons-857381 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e6ba6b23751a363846407405c025305c70dc80dbf68869142a0ee6929093b01e] <==
	E0930 19:50:00.209878       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:01.218394       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:02.236227       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:03.254865       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:04.268899       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:04.476888       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:05.276046       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:06.287284       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:07.295293       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:08.316036       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0930 19:50:08.612418       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	E0930 19:50:09.323889       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	W0930 19:50:09.653652       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0930 19:50:10.334217       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:11.343087       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0930 19:50:12.081198       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0930 19:50:12.262496       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.189.44"}
	E0930 19:50:12.352591       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:13.361386       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:14.369899       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:15.377579       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:16.384628       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:17.392881       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 19:50:18.400366       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0930 19:52:32.708585       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.184.74"}
	
	
	==> kube-controller-manager [f054c208a5bd0eb1494d0e174024a758694fd0eca27fb153e9b6b1ba005ff377] <==
	I0930 19:52:34.471094       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="8.574µs"
	I0930 19:52:34.479024       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0930 19:52:35.834624       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="10.755927ms"
	I0930 19:52:35.834865       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="69.612µs"
	I0930 19:52:44.606016       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0930 19:52:46.788798       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:52:46.788839       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 19:52:50.312021       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:52:50.312071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0930 19:52:58.459112       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-857381"
	W0930 19:53:02.294637       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:53:02.294809       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 19:53:18.867071       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:53:18.867334       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 19:53:37.577094       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:53:37.577267       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 19:53:41.538597       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:53:41.538648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 19:53:47.696323       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:53:47.696383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 19:53:56.145746       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:53:56.145800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 19:54:24.069433       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 19:54:24.069660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0930 19:54:26.830051       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="10.132µs"
	
	
	==> kube-proxy [cd4a5712da231889676b696f91670decbc5f5f8c36b118a9dc265d962f5d249a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 19:39:29.990587       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 19:39:30.058676       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.16"]
	E0930 19:39:30.058750       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 19:39:30.362730       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 19:39:30.362795       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 19:39:30.362820       1 server_linux.go:169] "Using iptables Proxier"
	I0930 19:39:30.416095       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 19:39:30.416411       1 server.go:483] "Version info" version="v1.31.1"
	I0930 19:39:30.416479       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 19:39:30.470892       1 config.go:199] "Starting service config controller"
	I0930 19:39:30.470932       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 19:39:30.470961       1 config.go:105] "Starting endpoint slice config controller"
	I0930 19:39:30.470965       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 19:39:30.471620       1 config.go:328] "Starting node config controller"
	I0930 19:39:30.471641       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 19:39:30.571571       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 19:39:30.571587       1 shared_informer.go:320] Caches are synced for service config
	I0930 19:39:30.573064       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0f613c2d90480ee1ae214e03080c452973dd772a7c6f008a8764350f7e1943eb] <==
	E0930 19:39:18.783718       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0930 19:39:18.783738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:18.783806       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0930 19:39:18.783818       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:19.639835       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 19:39:19.639943       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0930 19:39:19.654740       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0930 19:39:19.654792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:19.667324       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0930 19:39:19.667422       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:19.774980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0930 19:39:19.775022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:19.818960       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0930 19:39:19.819059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:19.876197       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0930 19:39:19.876273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:19.888046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0930 19:39:19.888095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:19.898349       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0930 19:39:19.898413       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:19.915746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0930 19:39:19.915953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0930 19:39:20.008659       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0930 19:39:20.008707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0930 19:39:21.870985       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 19:53:58 addons-857381 kubelet[1195]: E0930 19:53:58.405544    1195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="7bbe2897-cb73-4ed6-a221-bebc8545e1cc"
	Sep 30 19:54:01 addons-857381 kubelet[1195]: E0930 19:54:01.801731    1195 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726041801313406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 19:54:01 addons-857381 kubelet[1195]: E0930 19:54:01.801764    1195 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726041801313406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 19:54:09 addons-857381 kubelet[1195]: E0930 19:54:09.404359    1195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="7bbe2897-cb73-4ed6-a221-bebc8545e1cc"
	Sep 30 19:54:11 addons-857381 kubelet[1195]: E0930 19:54:11.803967    1195 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726051803587222,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 19:54:11 addons-857381 kubelet[1195]: E0930 19:54:11.804022    1195 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726051803587222,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 19:54:21 addons-857381 kubelet[1195]: E0930 19:54:21.424328    1195 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 19:54:21 addons-857381 kubelet[1195]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 19:54:21 addons-857381 kubelet[1195]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 19:54:21 addons-857381 kubelet[1195]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 19:54:21 addons-857381 kubelet[1195]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 19:54:21 addons-857381 kubelet[1195]: E0930 19:54:21.806594    1195 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726061806199303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 19:54:21 addons-857381 kubelet[1195]: E0930 19:54:21.806638    1195 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726061806199303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 19:54:22 addons-857381 kubelet[1195]: E0930 19:54:22.404744    1195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="7bbe2897-cb73-4ed6-a221-bebc8545e1cc"
	Sep 30 19:54:26 addons-857381 kubelet[1195]: I0930 19:54:26.862721    1195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-g2hjs" podStartSLOduration=112.581284564 podStartE2EDuration="1m54.862686906s" podCreationTimestamp="2024-09-30 19:52:32 +0000 UTC" firstStartedPulling="2024-09-30 19:52:33.093334849 +0000 UTC m=+791.805421507" lastFinishedPulling="2024-09-30 19:52:35.37473719 +0000 UTC m=+794.086823849" observedRunningTime="2024-09-30 19:52:35.82728572 +0000 UTC m=+794.539372394" watchObservedRunningTime="2024-09-30 19:54:26.862686906 +0000 UTC m=+905.574773575"
	Sep 30 19:54:28 addons-857381 kubelet[1195]: I0930 19:54:28.177851    1195 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b344652c-decb-4b68-9eb4-dd034008cf98-tmp-dir\") pod \"b344652c-decb-4b68-9eb4-dd034008cf98\" (UID: \"b344652c-decb-4b68-9eb4-dd034008cf98\") "
	Sep 30 19:54:28 addons-857381 kubelet[1195]: I0930 19:54:28.177897    1195 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66xt6\" (UniqueName: \"kubernetes.io/projected/b344652c-decb-4b68-9eb4-dd034008cf98-kube-api-access-66xt6\") pod \"b344652c-decb-4b68-9eb4-dd034008cf98\" (UID: \"b344652c-decb-4b68-9eb4-dd034008cf98\") "
	Sep 30 19:54:28 addons-857381 kubelet[1195]: I0930 19:54:28.178398    1195 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b344652c-decb-4b68-9eb4-dd034008cf98-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "b344652c-decb-4b68-9eb4-dd034008cf98" (UID: "b344652c-decb-4b68-9eb4-dd034008cf98"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 30 19:54:28 addons-857381 kubelet[1195]: I0930 19:54:28.189350    1195 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b344652c-decb-4b68-9eb4-dd034008cf98-kube-api-access-66xt6" (OuterVolumeSpecName: "kube-api-access-66xt6") pod "b344652c-decb-4b68-9eb4-dd034008cf98" (UID: "b344652c-decb-4b68-9eb4-dd034008cf98"). InnerVolumeSpecName "kube-api-access-66xt6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 19:54:28 addons-857381 kubelet[1195]: I0930 19:54:28.278881    1195 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b344652c-decb-4b68-9eb4-dd034008cf98-tmp-dir\") on node \"addons-857381\" DevicePath \"\""
	Sep 30 19:54:28 addons-857381 kubelet[1195]: I0930 19:54:28.278922    1195 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-66xt6\" (UniqueName: \"kubernetes.io/projected/b344652c-decb-4b68-9eb4-dd034008cf98-kube-api-access-66xt6\") on node \"addons-857381\" DevicePath \"\""
	Sep 30 19:54:28 addons-857381 kubelet[1195]: I0930 19:54:28.282344    1195 scope.go:117] "RemoveContainer" containerID="c6b2eb356f364b36c053fa5a0a1c21d994a9edc83b54fdd58a38023aea0e8013"
	Sep 30 19:54:28 addons-857381 kubelet[1195]: I0930 19:54:28.321392    1195 scope.go:117] "RemoveContainer" containerID="c6b2eb356f364b36c053fa5a0a1c21d994a9edc83b54fdd58a38023aea0e8013"
	Sep 30 19:54:28 addons-857381 kubelet[1195]: E0930 19:54:28.321937    1195 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6b2eb356f364b36c053fa5a0a1c21d994a9edc83b54fdd58a38023aea0e8013\": container with ID starting with c6b2eb356f364b36c053fa5a0a1c21d994a9edc83b54fdd58a38023aea0e8013 not found: ID does not exist" containerID="c6b2eb356f364b36c053fa5a0a1c21d994a9edc83b54fdd58a38023aea0e8013"
	Sep 30 19:54:28 addons-857381 kubelet[1195]: I0930 19:54:28.321995    1195 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6b2eb356f364b36c053fa5a0a1c21d994a9edc83b54fdd58a38023aea0e8013"} err="failed to get container status \"c6b2eb356f364b36c053fa5a0a1c21d994a9edc83b54fdd58a38023aea0e8013\": rpc error: code = NotFound desc = could not find container \"c6b2eb356f364b36c053fa5a0a1c21d994a9edc83b54fdd58a38023aea0e8013\": container with ID starting with c6b2eb356f364b36c053fa5a0a1c21d994a9edc83b54fdd58a38023aea0e8013 not found: ID does not exist"
	
	
	==> storage-provisioner [34fdddbc2729cc844420cf24fc3341fed3211c151111cf0f43b8a87ed1b078ab] <==
	I0930 19:39:33.155826       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0930 19:39:33.685414       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0930 19:39:33.685583       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0930 19:39:33.816356       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0930 19:39:33.824546       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-857381_08dcb125-dcae-41ac-b31f-3f836116afa4!
	I0930 19:39:33.844765       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c2244a99-76a6-4c70-8326-d7436fd22acb", APIVersion:"v1", ResourceVersion:"651", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-857381_08dcb125-dcae-41ac-b31f-3f836116afa4 became leader
	I0930 19:39:34.127903       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-857381_08dcb125-dcae-41ac-b31f-3f836116afa4!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-857381 -n addons-857381
helpers_test.go:261: (dbg) Run:  kubectl --context addons-857381 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-857381 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-857381 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-857381/192.168.39.16
	Start Time:       Mon, 30 Sep 2024 19:40:55 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k5fk2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-k5fk2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  13m                   default-scheduler  Successfully assigned default/busybox to addons-857381
	  Normal   Pulling    12m (x4 over 13m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     12m (x4 over 13m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     12m (x4 over 13m)     kubelet            Error: ErrImagePull
	  Warning  Failed     11m (x6 over 13m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m19s (x44 over 13m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (331.37s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 node stop m02 -v=7 --alsologtostderr
E0930 20:03:49.430672   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:04:09.912400   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:04:50.873745   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-805293 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.471901872s)

                                                
                                                
-- stdout --
	* Stopping node "ha-805293-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 20:03:48.697622   30316 out.go:345] Setting OutFile to fd 1 ...
	I0930 20:03:48.697761   30316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:03:48.697769   30316 out.go:358] Setting ErrFile to fd 2...
	I0930 20:03:48.697774   30316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:03:48.697944   30316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 20:03:48.698242   30316 mustload.go:65] Loading cluster: ha-805293
	I0930 20:03:48.698629   30316 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:03:48.698644   30316 stop.go:39] StopHost: ha-805293-m02
	I0930 20:03:48.698997   30316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:03:48.699038   30316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:03:48.714675   30316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46613
	I0930 20:03:48.715182   30316 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:03:48.715798   30316 main.go:141] libmachine: Using API Version  1
	I0930 20:03:48.715819   30316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:03:48.716212   30316 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:03:48.718676   30316 out.go:177] * Stopping node "ha-805293-m02"  ...
	I0930 20:03:48.719915   30316 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0930 20:03:48.719968   30316 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:03:48.720186   30316 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0930 20:03:48.720216   30316 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:03:48.722991   30316 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:03:48.723378   30316 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:03:48.723416   30316 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:03:48.723544   30316 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:03:48.723724   30316 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:03:48.723888   30316 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:03:48.724008   30316 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa Username:docker}
	I0930 20:03:48.811504   30316 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0930 20:03:48.865512   30316 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0930 20:03:48.919631   30316 main.go:141] libmachine: Stopping "ha-805293-m02"...
	I0930 20:03:48.919661   30316 main.go:141] libmachine: (ha-805293-m02) Calling .GetState
	I0930 20:03:48.921117   30316 main.go:141] libmachine: (ha-805293-m02) Calling .Stop
	I0930 20:03:48.924502   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 0/120
	I0930 20:03:49.926128   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 1/120
	I0930 20:03:50.927508   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 2/120
	I0930 20:03:51.929798   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 3/120
	I0930 20:03:52.931365   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 4/120
	I0930 20:03:53.933505   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 5/120
	I0930 20:03:54.934993   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 6/120
	I0930 20:03:55.936664   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 7/120
	I0930 20:03:56.938127   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 8/120
	I0930 20:03:57.939758   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 9/120
	I0930 20:03:58.941065   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 10/120
	I0930 20:03:59.942469   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 11/120
	I0930 20:04:00.944122   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 12/120
	I0930 20:04:01.946117   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 13/120
	I0930 20:04:02.947670   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 14/120
	I0930 20:04:03.949889   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 15/120
	I0930 20:04:04.951731   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 16/120
	I0930 20:04:05.954051   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 17/120
	I0930 20:04:06.955550   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 18/120
	I0930 20:04:07.956897   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 19/120
	I0930 20:04:08.959034   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 20/120
	I0930 20:04:09.960366   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 21/120
	I0930 20:04:10.962347   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 22/120
	I0930 20:04:11.963854   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 23/120
	I0930 20:04:12.966042   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 24/120
	I0930 20:04:13.968352   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 25/120
	I0930 20:04:14.969882   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 26/120
	I0930 20:04:15.971193   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 27/120
	I0930 20:04:16.972672   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 28/120
	I0930 20:04:17.974018   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 29/120
	I0930 20:04:18.976175   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 30/120
	I0930 20:04:19.978214   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 31/120
	I0930 20:04:20.979790   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 32/120
	I0930 20:04:21.981941   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 33/120
	I0930 20:04:22.983587   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 34/120
	I0930 20:04:23.985499   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 35/120
	I0930 20:04:24.987180   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 36/120
	I0930 20:04:25.988646   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 37/120
	I0930 20:04:26.990177   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 38/120
	I0930 20:04:27.991415   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 39/120
	I0930 20:04:28.993510   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 40/120
	I0930 20:04:29.994845   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 41/120
	I0930 20:04:30.996578   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 42/120
	I0930 20:04:31.998202   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 43/120
	I0930 20:04:32.999852   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 44/120
	I0930 20:04:34.001935   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 45/120
	I0930 20:04:35.003447   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 46/120
	I0930 20:04:36.005015   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 47/120
	I0930 20:04:37.006448   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 48/120
	I0930 20:04:38.008211   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 49/120
	I0930 20:04:39.010419   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 50/120
	I0930 20:04:40.011880   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 51/120
	I0930 20:04:41.013832   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 52/120
	I0930 20:04:42.015693   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 53/120
	I0930 20:04:43.017963   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 54/120
	I0930 20:04:44.019582   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 55/120
	I0930 20:04:45.021765   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 56/120
	I0930 20:04:46.022918   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 57/120
	I0930 20:04:47.024531   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 58/120
	I0930 20:04:48.025795   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 59/120
	I0930 20:04:49.027386   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 60/120
	I0930 20:04:50.028656   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 61/120
	I0930 20:04:51.029880   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 62/120
	I0930 20:04:52.031554   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 63/120
	I0930 20:04:53.032835   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 64/120
	I0930 20:04:54.035005   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 65/120
	I0930 20:04:55.036322   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 66/120
	I0930 20:04:56.037916   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 67/120
	I0930 20:04:57.039241   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 68/120
	I0930 20:04:58.040750   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 69/120
	I0930 20:04:59.042335   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 70/120
	I0930 20:05:00.044064   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 71/120
	I0930 20:05:01.045481   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 72/120
	I0930 20:05:02.046872   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 73/120
	I0930 20:05:03.048221   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 74/120
	I0930 20:05:04.050196   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 75/120
	I0930 20:05:05.051873   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 76/120
	I0930 20:05:06.053961   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 77/120
	I0930 20:05:07.055416   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 78/120
	I0930 20:05:08.056701   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 79/120
	I0930 20:05:09.058456   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 80/120
	I0930 20:05:10.060000   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 81/120
	I0930 20:05:11.061523   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 82/120
	I0930 20:05:12.063596   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 83/120
	I0930 20:05:13.065180   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 84/120
	I0930 20:05:14.067168   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 85/120
	I0930 20:05:15.068778   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 86/120
	I0930 20:05:16.070333   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 87/120
	I0930 20:05:17.071820   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 88/120
	I0930 20:05:18.074298   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 89/120
	I0930 20:05:19.076332   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 90/120
	I0930 20:05:20.077909   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 91/120
	I0930 20:05:21.079493   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 92/120
	I0930 20:05:22.080657   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 93/120
	I0930 20:05:23.083115   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 94/120
	I0930 20:05:24.085021   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 95/120
	I0930 20:05:25.086552   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 96/120
	I0930 20:05:26.087778   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 97/120
	I0930 20:05:27.090111   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 98/120
	I0930 20:05:28.091547   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 99/120
	I0930 20:05:29.093549   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 100/120
	I0930 20:05:30.094929   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 101/120
	I0930 20:05:31.096327   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 102/120
	I0930 20:05:32.097671   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 103/120
	I0930 20:05:33.099069   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 104/120
	I0930 20:05:34.101063   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 105/120
	I0930 20:05:35.103136   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 106/120
	I0930 20:05:36.104538   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 107/120
	I0930 20:05:37.105898   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 108/120
	I0930 20:05:38.107241   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 109/120
	I0930 20:05:39.109460   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 110/120
	I0930 20:05:40.111157   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 111/120
	I0930 20:05:41.112624   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 112/120
	I0930 20:05:42.114579   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 113/120
	I0930 20:05:43.116298   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 114/120
	I0930 20:05:44.118588   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 115/120
	I0930 20:05:45.120012   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 116/120
	I0930 20:05:46.122004   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 117/120
	I0930 20:05:47.123620   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 118/120
	I0930 20:05:48.124939   30316 main.go:141] libmachine: (ha-805293-m02) Waiting for machine to stop 119/120
	I0930 20:05:49.125850   30316 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0930 20:05:49.125990   30316 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-805293 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 status -v=7 --alsologtostderr
E0930 20:05:55.315194   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-805293 status -v=7 --alsologtostderr: (18.853457921s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-805293 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-805293 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-805293 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-805293 status -v=7 --alsologtostderr": 
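
For context on the failure above: the stderr shows the kvm2 driver polling the m02 VM once per second for 120 attempts and then giving up with `unable to stop vm, current state "Running"`. A minimal sketch of that bounded poll-for-shutoff pattern, assuming the libvirt.org/go/libvirt bindings (illustrative only, not the driver's actual implementation):

    package kvm

    import (
        "fmt"
        "time"

        libvirt "libvirt.org/go/libvirt"
    )

    // waitForShutoff asks the domain to shut down, then polls its state once per
    // second for up to maxTries attempts, mirroring the
    // "Waiting for machine to stop N/120" loop in the log above.
    func waitForShutoff(dom *libvirt.Domain, maxTries int) error {
        if err := dom.Shutdown(); err != nil {
            return fmt.Errorf("shutdown request: %w", err)
        }
        for i := 0; i < maxTries; i++ {
            state, _, err := dom.GetState()
            if err != nil {
                return err
            }
            if state == libvirt.DOMAIN_SHUTOFF {
                return nil
            }
            fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxTries)
            time.Sleep(time.Second)
        }
        return fmt.Errorf(`unable to stop vm, current state "Running"`)
    }

In this run the guest never reached DOMAIN_SHUTOFF within the 120 attempts, which is why the stop command exited with status 30 and the post-mortem below was collected.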
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-805293 -n ha-805293
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-805293 logs -n 25: (1.353510498s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-805293 cp ha-805293-m03:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3144947660/001/cp-test_ha-805293-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m03:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293:/home/docker/cp-test_ha-805293-m03_ha-805293.txt                       |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293 sudo cat                                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m03_ha-805293.txt                                 |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m03:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m02:/home/docker/cp-test_ha-805293-m03_ha-805293-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293-m02 sudo cat                                          | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m03_ha-805293-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m03:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04:/home/docker/cp-test_ha-805293-m03_ha-805293-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293-m04 sudo cat                                          | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m03_ha-805293-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-805293 cp testdata/cp-test.txt                                                | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3144947660/001/cp-test_ha-805293-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293:/home/docker/cp-test_ha-805293-m04_ha-805293.txt                       |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293 sudo cat                                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m04_ha-805293.txt                                 |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m02:/home/docker/cp-test_ha-805293-m04_ha-805293-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293-m02 sudo cat                                          | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m04_ha-805293-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03:/home/docker/cp-test_ha-805293-m04_ha-805293-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293-m03 sudo cat                                          | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m04_ha-805293-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-805293 node stop m02 -v=7                                                     | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 19:59:16
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 19:59:16.465113   26315 out.go:345] Setting OutFile to fd 1 ...
	I0930 19:59:16.465408   26315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 19:59:16.465418   26315 out.go:358] Setting ErrFile to fd 2...
	I0930 19:59:16.465423   26315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 19:59:16.465672   26315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 19:59:16.466270   26315 out.go:352] Setting JSON to false
	I0930 19:59:16.467246   26315 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2499,"bootTime":1727723857,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 19:59:16.467349   26315 start.go:139] virtualization: kvm guest
	I0930 19:59:16.469778   26315 out.go:177] * [ha-805293] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 19:59:16.471083   26315 notify.go:220] Checking for updates...
	I0930 19:59:16.471129   26315 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 19:59:16.472574   26315 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 19:59:16.474040   26315 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 19:59:16.475378   26315 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:59:16.476781   26315 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 19:59:16.478196   26315 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 19:59:16.479555   26315 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 19:59:16.514287   26315 out.go:177] * Using the kvm2 driver based on user configuration
	I0930 19:59:16.515592   26315 start.go:297] selected driver: kvm2
	I0930 19:59:16.515604   26315 start.go:901] validating driver "kvm2" against <nil>
	I0930 19:59:16.515615   26315 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 19:59:16.516299   26315 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 19:59:16.516372   26315 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 19:59:16.531012   26315 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 19:59:16.531063   26315 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 19:59:16.531292   26315 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 19:59:16.531318   26315 cni.go:84] Creating CNI manager for ""
	I0930 19:59:16.531357   26315 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0930 19:59:16.531370   26315 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0930 19:59:16.531430   26315 start.go:340] cluster config:
	{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 19:59:16.531545   26315 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 19:59:16.533673   26315 out.go:177] * Starting "ha-805293" primary control-plane node in "ha-805293" cluster
	I0930 19:59:16.534957   26315 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 19:59:16.535009   26315 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 19:59:16.535023   26315 cache.go:56] Caching tarball of preloaded images
	I0930 19:59:16.535111   26315 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 19:59:16.535121   26315 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 19:59:16.535489   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 19:59:16.535515   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json: {Name:mk695bb0575a50d6b6d53e3d2c18bb8666421806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
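
The two lines above persist the profile config as JSON under the profile directory (.../profiles/ha-805293/config.json) while holding a write lock. A small, hypothetical sketch of such a save path in Go; the struct below lists only a few illustrative fields, not minikube's full cluster config:

    package profile

    import (
        "encoding/json"
        "os"
        "path/filepath"
    )

    // clusterConfig is a hypothetical subset of the fields visible in the config dump above.
    type clusterConfig struct {
        Name   string `json:"Name"`
        Driver string `json:"Driver"`
        Memory int    `json:"Memory"`
        CPUs   int    `json:"CPUs"`
    }

    // saveConfig writes the config atomically: marshal, write to a temp file in
    // the same directory, then rename over config.json.
    func saveConfig(profileDir string, cfg clusterConfig) error {
        data, err := json.MarshalIndent(cfg, "", "    ")
        if err != nil {
            return err
        }
        tmp, err := os.CreateTemp(profileDir, "config-*.json")
        if err != nil {
            return err
        }
        defer os.Remove(tmp.Name())
        if _, err := tmp.Write(data); err != nil {
            tmp.Close()
            return err
        }
        if err := tmp.Close(); err != nil {
            return err
        }
        return os.Rename(tmp.Name(), filepath.Join(profileDir, "config.json"))
    }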
	I0930 19:59:16.535704   26315 start.go:360] acquireMachinesLock for ha-805293: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 19:59:16.535734   26315 start.go:364] duration metric: took 15.84µs to acquireMachinesLock for "ha-805293"
	I0930 19:59:16.535751   26315 start.go:93] Provisioning new machine with config: &{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 19:59:16.535821   26315 start.go:125] createHost starting for "" (driver="kvm2")
	I0930 19:59:16.537498   26315 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 19:59:16.537633   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:59:16.537678   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:59:16.552377   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44379
	I0930 19:59:16.552824   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:59:16.553523   26315 main.go:141] libmachine: Using API Version  1
	I0930 19:59:16.553548   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:59:16.553949   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:59:16.554153   26315 main.go:141] libmachine: (ha-805293) Calling .GetMachineName
	I0930 19:59:16.554354   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:16.554484   26315 start.go:159] libmachine.API.Create for "ha-805293" (driver="kvm2")
	I0930 19:59:16.554517   26315 client.go:168] LocalClient.Create starting
	I0930 19:59:16.554565   26315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem
	I0930 19:59:16.554602   26315 main.go:141] libmachine: Decoding PEM data...
	I0930 19:59:16.554620   26315 main.go:141] libmachine: Parsing certificate...
	I0930 19:59:16.554688   26315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem
	I0930 19:59:16.554716   26315 main.go:141] libmachine: Decoding PEM data...
	I0930 19:59:16.554736   26315 main.go:141] libmachine: Parsing certificate...
	I0930 19:59:16.554758   26315 main.go:141] libmachine: Running pre-create checks...
	I0930 19:59:16.554770   26315 main.go:141] libmachine: (ha-805293) Calling .PreCreateCheck
	I0930 19:59:16.555128   26315 main.go:141] libmachine: (ha-805293) Calling .GetConfigRaw
	I0930 19:59:16.555744   26315 main.go:141] libmachine: Creating machine...
	I0930 19:59:16.555765   26315 main.go:141] libmachine: (ha-805293) Calling .Create
	I0930 19:59:16.555931   26315 main.go:141] libmachine: (ha-805293) Creating KVM machine...
	I0930 19:59:16.557277   26315 main.go:141] libmachine: (ha-805293) DBG | found existing default KVM network
	I0930 19:59:16.557963   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:16.557842   26338 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231e0}
	I0930 19:59:16.558012   26315 main.go:141] libmachine: (ha-805293) DBG | created network xml: 
	I0930 19:59:16.558024   26315 main.go:141] libmachine: (ha-805293) DBG | <network>
	I0930 19:59:16.558032   26315 main.go:141] libmachine: (ha-805293) DBG |   <name>mk-ha-805293</name>
	I0930 19:59:16.558037   26315 main.go:141] libmachine: (ha-805293) DBG |   <dns enable='no'/>
	I0930 19:59:16.558041   26315 main.go:141] libmachine: (ha-805293) DBG |   
	I0930 19:59:16.558052   26315 main.go:141] libmachine: (ha-805293) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0930 19:59:16.558057   26315 main.go:141] libmachine: (ha-805293) DBG |     <dhcp>
	I0930 19:59:16.558063   26315 main.go:141] libmachine: (ha-805293) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0930 19:59:16.558073   26315 main.go:141] libmachine: (ha-805293) DBG |     </dhcp>
	I0930 19:59:16.558087   26315 main.go:141] libmachine: (ha-805293) DBG |   </ip>
	I0930 19:59:16.558111   26315 main.go:141] libmachine: (ha-805293) DBG |   
	I0930 19:59:16.558145   26315 main.go:141] libmachine: (ha-805293) DBG | </network>
	I0930 19:59:16.558156   26315 main.go:141] libmachine: (ha-805293) DBG | 
	I0930 19:59:16.563671   26315 main.go:141] libmachine: (ha-805293) DBG | trying to create private KVM network mk-ha-805293 192.168.39.0/24...
	I0930 19:59:16.628841   26315 main.go:141] libmachine: (ha-805293) DBG | private KVM network mk-ha-805293 192.168.39.0/24 created
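
The DBG lines above show the generated <network> XML for mk-ha-805293 and its creation. A minimal sketch of defining and starting such a network with the libvirt Go bindings (libvirt.org/go/libvirt; illustrative only, not the kvm2 driver's actual code):

    package kvm

    import libvirt "libvirt.org/go/libvirt"

    // createPrivateNetwork defines a persistent network from XML like the one
    // printed above, starts it, and marks it for autostart.
    func createPrivateNetwork(uri, networkXML string) error {
        conn, err := libvirt.NewConnect(uri) // e.g. "qemu:///system", as in KVMQemuURI above
        if err != nil {
            return err
        }
        defer conn.Close()

        net, err := conn.NetworkDefineXML(networkXML)
        if err != nil {
            return err
        }
        defer net.Free()

        if err := net.Create(); err != nil { // roughly `virsh net-start`
            return err
        }
        return net.SetAutostart(true)
    }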
	I0930 19:59:16.628870   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:16.628827   26338 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:59:16.628892   26315 main.go:141] libmachine: (ha-805293) Setting up store path in /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293 ...
	I0930 19:59:16.628909   26315 main.go:141] libmachine: (ha-805293) Building disk image from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 19:59:16.629064   26315 main.go:141] libmachine: (ha-805293) Downloading /home/jenkins/minikube-integration/19736-7672/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 19:59:16.879937   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:16.879799   26338 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa...
	I0930 19:59:17.039302   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:17.039101   26338 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/ha-805293.rawdisk...
	I0930 19:59:17.039341   26315 main.go:141] libmachine: (ha-805293) DBG | Writing magic tar header
	I0930 19:59:17.039359   26315 main.go:141] libmachine: (ha-805293) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293 (perms=drwx------)
	I0930 19:59:17.039382   26315 main.go:141] libmachine: (ha-805293) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines (perms=drwxr-xr-x)
	I0930 19:59:17.039389   26315 main.go:141] libmachine: (ha-805293) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube (perms=drwxr-xr-x)
	I0930 19:59:17.039398   26315 main.go:141] libmachine: (ha-805293) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672 (perms=drwxrwxr-x)
	I0930 19:59:17.039404   26315 main.go:141] libmachine: (ha-805293) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 19:59:17.039415   26315 main.go:141] libmachine: (ha-805293) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 19:59:17.039420   26315 main.go:141] libmachine: (ha-805293) Creating domain...
	I0930 19:59:17.039450   26315 main.go:141] libmachine: (ha-805293) DBG | Writing SSH key tar header
	I0930 19:59:17.039468   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:17.039218   26338 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293 ...
	I0930 19:59:17.039478   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293
	I0930 19:59:17.039485   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines
	I0930 19:59:17.039546   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:59:17.039570   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672
	I0930 19:59:17.039613   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 19:59:17.039667   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home/jenkins
	I0930 19:59:17.039707   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home
	I0930 19:59:17.039720   26315 main.go:141] libmachine: (ha-805293) DBG | Skipping /home - not owner
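
Before defining the domain, the driver creates an SSH key pair (id_rsa) and a raw disk image for the machine, then fixes directory permissions. As a rough illustration of the key-generation step only (generic Go using golang.org/x/crypto/ssh; the id_rsa.pub file name is an assumption, not shown in the log):

    package kvm

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"
        "path/filepath"

        "golang.org/x/crypto/ssh"
    )

    // generateSSHKey writes an RSA private key to <machineDir>/id_rsa and the
    // matching authorized_keys-format public key next to it.
    func generateSSHKey(machineDir string) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile(filepath.Join(machineDir, "id_rsa"), privPEM, 0600); err != nil {
            return err
        }
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            return err
        }
        return os.WriteFile(filepath.Join(machineDir, "id_rsa.pub"), ssh.MarshalAuthorizedKey(pub), 0644)
    }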
	I0930 19:59:17.040595   26315 main.go:141] libmachine: (ha-805293) define libvirt domain using xml: 
	I0930 19:59:17.040607   26315 main.go:141] libmachine: (ha-805293) <domain type='kvm'>
	I0930 19:59:17.040612   26315 main.go:141] libmachine: (ha-805293)   <name>ha-805293</name>
	I0930 19:59:17.040617   26315 main.go:141] libmachine: (ha-805293)   <memory unit='MiB'>2200</memory>
	I0930 19:59:17.040621   26315 main.go:141] libmachine: (ha-805293)   <vcpu>2</vcpu>
	I0930 19:59:17.040625   26315 main.go:141] libmachine: (ha-805293)   <features>
	I0930 19:59:17.040630   26315 main.go:141] libmachine: (ha-805293)     <acpi/>
	I0930 19:59:17.040633   26315 main.go:141] libmachine: (ha-805293)     <apic/>
	I0930 19:59:17.040638   26315 main.go:141] libmachine: (ha-805293)     <pae/>
	I0930 19:59:17.040642   26315 main.go:141] libmachine: (ha-805293)     
	I0930 19:59:17.040649   26315 main.go:141] libmachine: (ha-805293)   </features>
	I0930 19:59:17.040654   26315 main.go:141] libmachine: (ha-805293)   <cpu mode='host-passthrough'>
	I0930 19:59:17.040661   26315 main.go:141] libmachine: (ha-805293)   
	I0930 19:59:17.040664   26315 main.go:141] libmachine: (ha-805293)   </cpu>
	I0930 19:59:17.040671   26315 main.go:141] libmachine: (ha-805293)   <os>
	I0930 19:59:17.040675   26315 main.go:141] libmachine: (ha-805293)     <type>hvm</type>
	I0930 19:59:17.040680   26315 main.go:141] libmachine: (ha-805293)     <boot dev='cdrom'/>
	I0930 19:59:17.040692   26315 main.go:141] libmachine: (ha-805293)     <boot dev='hd'/>
	I0930 19:59:17.040703   26315 main.go:141] libmachine: (ha-805293)     <bootmenu enable='no'/>
	I0930 19:59:17.040714   26315 main.go:141] libmachine: (ha-805293)   </os>
	I0930 19:59:17.040724   26315 main.go:141] libmachine: (ha-805293)   <devices>
	I0930 19:59:17.040732   26315 main.go:141] libmachine: (ha-805293)     <disk type='file' device='cdrom'>
	I0930 19:59:17.040739   26315 main.go:141] libmachine: (ha-805293)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/boot2docker.iso'/>
	I0930 19:59:17.040757   26315 main.go:141] libmachine: (ha-805293)       <target dev='hdc' bus='scsi'/>
	I0930 19:59:17.040766   26315 main.go:141] libmachine: (ha-805293)       <readonly/>
	I0930 19:59:17.040770   26315 main.go:141] libmachine: (ha-805293)     </disk>
	I0930 19:59:17.040776   26315 main.go:141] libmachine: (ha-805293)     <disk type='file' device='disk'>
	I0930 19:59:17.040783   26315 main.go:141] libmachine: (ha-805293)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 19:59:17.040791   26315 main.go:141] libmachine: (ha-805293)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/ha-805293.rawdisk'/>
	I0930 19:59:17.040797   26315 main.go:141] libmachine: (ha-805293)       <target dev='hda' bus='virtio'/>
	I0930 19:59:17.040802   26315 main.go:141] libmachine: (ha-805293)     </disk>
	I0930 19:59:17.040808   26315 main.go:141] libmachine: (ha-805293)     <interface type='network'>
	I0930 19:59:17.040814   26315 main.go:141] libmachine: (ha-805293)       <source network='mk-ha-805293'/>
	I0930 19:59:17.040822   26315 main.go:141] libmachine: (ha-805293)       <model type='virtio'/>
	I0930 19:59:17.040829   26315 main.go:141] libmachine: (ha-805293)     </interface>
	I0930 19:59:17.040833   26315 main.go:141] libmachine: (ha-805293)     <interface type='network'>
	I0930 19:59:17.040840   26315 main.go:141] libmachine: (ha-805293)       <source network='default'/>
	I0930 19:59:17.040844   26315 main.go:141] libmachine: (ha-805293)       <model type='virtio'/>
	I0930 19:59:17.040850   26315 main.go:141] libmachine: (ha-805293)     </interface>
	I0930 19:59:17.040855   26315 main.go:141] libmachine: (ha-805293)     <serial type='pty'>
	I0930 19:59:17.040860   26315 main.go:141] libmachine: (ha-805293)       <target port='0'/>
	I0930 19:59:17.040865   26315 main.go:141] libmachine: (ha-805293)     </serial>
	I0930 19:59:17.040871   26315 main.go:141] libmachine: (ha-805293)     <console type='pty'>
	I0930 19:59:17.040877   26315 main.go:141] libmachine: (ha-805293)       <target type='serial' port='0'/>
	I0930 19:59:17.040882   26315 main.go:141] libmachine: (ha-805293)     </console>
	I0930 19:59:17.040888   26315 main.go:141] libmachine: (ha-805293)     <rng model='virtio'>
	I0930 19:59:17.040894   26315 main.go:141] libmachine: (ha-805293)       <backend model='random'>/dev/random</backend>
	I0930 19:59:17.040901   26315 main.go:141] libmachine: (ha-805293)     </rng>
	I0930 19:59:17.040907   26315 main.go:141] libmachine: (ha-805293)     
	I0930 19:59:17.040917   26315 main.go:141] libmachine: (ha-805293)     
	I0930 19:59:17.040925   26315 main.go:141] libmachine: (ha-805293)   </devices>
	I0930 19:59:17.040928   26315 main.go:141] libmachine: (ha-805293) </domain>
	I0930 19:59:17.040937   26315 main.go:141] libmachine: (ha-805293) 
	I0930 19:59:17.045576   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:16:26:46 in network default
	I0930 19:59:17.046091   26315 main.go:141] libmachine: (ha-805293) Ensuring networks are active...
	I0930 19:59:17.046110   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:17.046918   26315 main.go:141] libmachine: (ha-805293) Ensuring network default is active
	I0930 19:59:17.047170   26315 main.go:141] libmachine: (ha-805293) Ensuring network mk-ha-805293 is active
	I0930 19:59:17.048069   26315 main.go:141] libmachine: (ha-805293) Getting domain xml...
	I0930 19:59:17.048925   26315 main.go:141] libmachine: (ha-805293) Creating domain...
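
The <domain> XML dumped above is then registered with libvirt and booted. A compact sketch of those two steps, again assuming the libvirt.org/go/libvirt bindings rather than the driver's own code:

    package kvm

    import libvirt "libvirt.org/go/libvirt"

    // defineAndStartDomain registers the domain XML with libvirt and boots it,
    // returning the live *libvirt.Domain handle ("Creating domain..." above).
    func defineAndStartDomain(conn *libvirt.Connect, domainXML string) (*libvirt.Domain, error) {
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return nil, err
        }
        if err := dom.Create(); err != nil {
            dom.Free()
            return nil, err
        }
        return dom, nil
    }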
	I0930 19:59:18.262935   26315 main.go:141] libmachine: (ha-805293) Waiting to get IP...
	I0930 19:59:18.263713   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:18.264097   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:18.264150   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:18.264077   26338 retry.go:31] will retry after 272.130038ms: waiting for machine to come up
	I0930 19:59:18.537624   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:18.538207   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:18.538236   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:18.538152   26338 retry.go:31] will retry after 384.976128ms: waiting for machine to come up
	I0930 19:59:18.924813   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:18.925224   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:18.925244   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:18.925193   26338 retry.go:31] will retry after 439.036671ms: waiting for machine to come up
	I0930 19:59:19.365792   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:19.366237   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:19.366268   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:19.366201   26338 retry.go:31] will retry after 523.251996ms: waiting for machine to come up
	I0930 19:59:19.890884   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:19.891377   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:19.891399   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:19.891276   26338 retry.go:31] will retry after 505.591634ms: waiting for machine to come up
	I0930 19:59:20.398064   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:20.398495   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:20.398518   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:20.398434   26338 retry.go:31] will retry after 840.243199ms: waiting for machine to come up
	I0930 19:59:21.240528   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:21.240974   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:21.241011   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:21.240928   26338 retry.go:31] will retry after 727.422374ms: waiting for machine to come up
	I0930 19:59:21.970399   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:21.970994   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:21.971027   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:21.970937   26338 retry.go:31] will retry after 1.250553906s: waiting for machine to come up
	I0930 19:59:23.223257   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:23.223588   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:23.223617   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:23.223524   26338 retry.go:31] will retry after 1.498180761s: waiting for machine to come up
	I0930 19:59:24.724089   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:24.724526   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:24.724547   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:24.724490   26338 retry.go:31] will retry after 1.710980244s: waiting for machine to come up
	I0930 19:59:26.437365   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:26.437733   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:26.437791   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:26.437707   26338 retry.go:31] will retry after 1.996131833s: waiting for machine to come up
	I0930 19:59:28.435394   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:28.435899   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:28.435920   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:28.435854   26338 retry.go:31] will retry after 2.313700889s: waiting for machine to come up
	I0930 19:59:30.752853   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:30.753113   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:30.753140   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:30.753096   26338 retry.go:31] will retry after 2.892875975s: waiting for machine to come up
	I0930 19:59:33.648697   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:33.649006   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:33.649067   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:33.648958   26338 retry.go:31] will retry after 4.162794884s: waiting for machine to come up
	I0930 19:59:37.813324   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:37.813940   26315 main.go:141] libmachine: (ha-805293) Found IP for machine: 192.168.39.3
	I0930 19:59:37.813967   26315 main.go:141] libmachine: (ha-805293) Reserving static IP address...
	I0930 19:59:37.813980   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has current primary IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:37.814363   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find host DHCP lease matching {name: "ha-805293", mac: "52:54:00:a8:b8:c7", ip: "192.168.39.3"} in network mk-ha-805293
	I0930 19:59:37.894677   26315 main.go:141] libmachine: (ha-805293) DBG | Getting to WaitForSSH function...
	I0930 19:59:37.894706   26315 main.go:141] libmachine: (ha-805293) Reserved static IP address: 192.168.39.3
	I0930 19:59:37.894719   26315 main.go:141] libmachine: (ha-805293) Waiting for SSH to be available...
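
The machine-creation phase above waits for a DHCP lease with a growing, jittered retry interval ("will retry after 272ms ... 4.16s: waiting for machine to come up") and then waits for SSH the same way. A generic sketch of that wait-until-ready pattern in plain Go; the lookupIP parameter stands in for whatever actually queries the lease and is an assumption of this illustration:

    package kvm

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP calls lookupIP until it returns a non-empty address or the
    // deadline passes, sleeping a randomized, growing interval between attempts,
    // mirroring the "will retry after ..." lines above.
    func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for attempt := 1; time.Now().Before(deadline); attempt++ {
            ip, err := lookupIP()
            if err == nil && ip != "" {
                return ip, nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("retry %d: will retry after %s: waiting for machine to come up\n", attempt, wait)
            time.Sleep(wait)
            delay = delay * 3 / 2 // grow the base delay roughly 1.5x per attempt
        }
        return "", fmt.Errorf("timed out waiting for machine IP")
    }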
	I0930 19:59:37.897595   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:37.897922   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:37.897956   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:37.898087   26315 main.go:141] libmachine: (ha-805293) DBG | Using SSH client type: external
	I0930 19:59:37.898106   26315 main.go:141] libmachine: (ha-805293) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa (-rw-------)
	I0930 19:59:37.898139   26315 main.go:141] libmachine: (ha-805293) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 19:59:37.898155   26315 main.go:141] libmachine: (ha-805293) DBG | About to run SSH command:
	I0930 19:59:37.898169   26315 main.go:141] libmachine: (ha-805293) DBG | exit 0
	I0930 19:59:38.031893   26315 main.go:141] libmachine: (ha-805293) DBG | SSH cmd err, output: <nil>: 
	I0930 19:59:38.032180   26315 main.go:141] libmachine: (ha-805293) KVM machine creation complete!
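
The WaitForSSH step above shells out to the system ssh binary with the options printed at 19:59:37.898 and runs `exit 0` until it succeeds. A sketch of that probe using os/exec; the option list mirrors the log, but the helper itself is illustrative, not minikube's exact code:

    package kvm

    import "os/exec"

    // probeSSH runs `ssh ... user@host exit 0` with the non-interactive options
    // seen in the log; a nil error means the guest's sshd accepted our key.
    func probeSSH(user, host, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            user + "@" + host,
            "exit", "0",
        }
        return exec.Command("ssh", args...).Run()
    }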
	I0930 19:59:38.032650   26315 main.go:141] libmachine: (ha-805293) Calling .GetConfigRaw
	I0930 19:59:38.033332   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:38.033535   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:38.033703   26315 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 19:59:38.033722   26315 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 19:59:38.035148   26315 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 19:59:38.035166   26315 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 19:59:38.035171   26315 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 19:59:38.035176   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.037430   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.037779   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.037807   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.037886   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:38.038058   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.038172   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.038292   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:38.038466   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 19:59:38.038732   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 19:59:38.038742   26315 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 19:59:38.150707   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 19:59:38.150736   26315 main.go:141] libmachine: Detecting the provisioner...
	I0930 19:59:38.150744   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.153577   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.153985   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.154015   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.154165   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:38.154420   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.154616   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.154796   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:38.154961   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 19:59:38.155144   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 19:59:38.155155   26315 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 19:59:38.268071   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 19:59:38.268223   26315 main.go:141] libmachine: found compatible host: buildroot
	I0930 19:59:38.268235   26315 main.go:141] libmachine: Provisioning with buildroot...
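
After the external probe, provisioning switches to the "native" SSH client and runs commands such as `cat /etc/os-release` to detect the provisioner (Buildroot here). A minimal sketch with golang.org/x/crypto/ssh (an assumed library choice; key-based auth with host-key checking disabled, matching the StrictHostKeyChecking=no option above):

    package kvm

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runCommand opens one SSH session to addr (e.g. "192.168.39.3:22") using the
    // machine's private key and returns the combined output of cmd, e.g.
    // "cat /etc/os-release" or the hostname script shown further below.
    func runCommand(addr, user, keyPath, cmd string) (string, error) {
        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // no known_hosts, as in the probe options
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer session.Close()

        out, err := session.CombinedOutput(cmd)
        return string(out), err
    }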
	I0930 19:59:38.268248   26315 main.go:141] libmachine: (ha-805293) Calling .GetMachineName
	I0930 19:59:38.268485   26315 buildroot.go:166] provisioning hostname "ha-805293"
	I0930 19:59:38.268519   26315 main.go:141] libmachine: (ha-805293) Calling .GetMachineName
	I0930 19:59:38.268699   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.271029   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.271351   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.271376   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.271551   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:38.271727   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.271905   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.272048   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:38.272215   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 19:59:38.272420   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 19:59:38.272431   26315 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-805293 && echo "ha-805293" | sudo tee /etc/hostname
	I0930 19:59:38.397989   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-805293
	
	I0930 19:59:38.398019   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.401388   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.401792   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.401818   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.402043   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:38.402262   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.402446   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.402640   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:38.402835   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 19:59:38.403014   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 19:59:38.403030   26315 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-805293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-805293/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-805293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 19:59:38.523981   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
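For reference, the two SSH commands above boil down to the following hostname-provisioning sequence. This is a consolidated sketch of exactly what the log shows (profile name "ha-805293" hard-coded as in this run), not an excerpt of minikube source:

    # Set the transient hostname and persist it for the next boot.
    sudo hostname ha-805293 && echo "ha-805293" | sudo tee /etc/hostname

    # Make sure /etc/hosts resolves the new hostname: rewrite an existing
    # 127.0.1.1 entry if present, otherwise append one.
    if ! grep -xq '.*\sha-805293' /etc/hosts; then
        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
            sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-805293/g' /etc/hosts
        else
            echo '127.0.1.1 ha-805293' | sudo tee -a /etc/hosts
        fi
    fi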
	I0930 19:59:38.524025   26315 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 19:59:38.524082   26315 buildroot.go:174] setting up certificates
	I0930 19:59:38.524097   26315 provision.go:84] configureAuth start
	I0930 19:59:38.524111   26315 main.go:141] libmachine: (ha-805293) Calling .GetMachineName
	I0930 19:59:38.524383   26315 main.go:141] libmachine: (ha-805293) Calling .GetIP
	I0930 19:59:38.527277   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.527630   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.527658   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.527836   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.530619   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.530940   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.530964   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.531100   26315 provision.go:143] copyHostCerts
	I0930 19:59:38.531123   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 19:59:38.531167   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 19:59:38.531177   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 19:59:38.531239   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 19:59:38.531347   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 19:59:38.531367   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 19:59:38.531371   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 19:59:38.531397   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 19:59:38.531451   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 19:59:38.531467   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 19:59:38.531473   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 19:59:38.531511   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 19:59:38.531604   26315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.ha-805293 san=[127.0.0.1 192.168.39.3 ha-805293 localhost minikube]
	I0930 19:59:38.676763   26315 provision.go:177] copyRemoteCerts
	I0930 19:59:38.676824   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 19:59:38.676847   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.679571   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.680006   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.680032   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.680205   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:38.680392   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.680556   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:38.680720   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 19:59:38.765532   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 19:59:38.765609   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 19:59:38.789748   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 19:59:38.789818   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0930 19:59:38.811783   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 19:59:38.811868   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 19:59:38.834125   26315 provision.go:87] duration metric: took 310.01212ms to configureAuth
	I0930 19:59:38.834160   26315 buildroot.go:189] setting minikube options for container-runtime
	I0930 19:59:38.834431   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 19:59:38.834524   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.837303   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.837631   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.837775   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.838052   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:38.838232   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.838399   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.838530   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:38.838676   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 19:59:38.838897   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 19:59:38.838918   26315 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 19:59:39.069352   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
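The step above writes minikube's CRI-O options to a sysconfig drop-in and restarts the runtime; 10.96.0.0/12 is this cluster's service CIDR (see the ServiceCIDR/serviceSubnet fields in the config dumps further down), so registries exposed on in-cluster service IPs can be pulled from without TLS. The same step, restated as a plain shell sketch:

    # Persist minikube's CRI-O options and restart the runtime.
    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
        | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio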
	I0930 19:59:39.069381   26315 main.go:141] libmachine: Checking connection to Docker...
	I0930 19:59:39.069395   26315 main.go:141] libmachine: (ha-805293) Calling .GetURL
	I0930 19:59:39.070641   26315 main.go:141] libmachine: (ha-805293) DBG | Using libvirt version 6000000
	I0930 19:59:39.073164   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.073482   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.073521   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.073664   26315 main.go:141] libmachine: Docker is up and running!
	I0930 19:59:39.073675   26315 main.go:141] libmachine: Reticulating splines...
	I0930 19:59:39.073688   26315 client.go:171] duration metric: took 22.519163927s to LocalClient.Create
	I0930 19:59:39.073710   26315 start.go:167] duration metric: took 22.519226404s to libmachine.API.Create "ha-805293"
	I0930 19:59:39.073725   26315 start.go:293] postStartSetup for "ha-805293" (driver="kvm2")
	I0930 19:59:39.073739   26315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 19:59:39.073759   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:39.073979   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 19:59:39.074068   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:39.076481   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.076820   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.076872   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.076969   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:39.077131   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:39.077256   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:39.077345   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 19:59:39.162144   26315 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 19:59:39.166524   26315 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 19:59:39.166551   26315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 19:59:39.166625   26315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 19:59:39.166691   26315 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 19:59:39.166701   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /etc/ssl/certs/148752.pem
	I0930 19:59:39.166826   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 19:59:39.175862   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 19:59:39.198495   26315 start.go:296] duration metric: took 124.748363ms for postStartSetup
	I0930 19:59:39.198552   26315 main.go:141] libmachine: (ha-805293) Calling .GetConfigRaw
	I0930 19:59:39.199175   26315 main.go:141] libmachine: (ha-805293) Calling .GetIP
	I0930 19:59:39.202045   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.202447   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.202472   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.202702   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 19:59:39.202915   26315 start.go:128] duration metric: took 22.667085053s to createHost
	I0930 19:59:39.202950   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:39.205157   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.205495   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.205516   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.205668   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:39.205846   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:39.205981   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:39.206111   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:39.206270   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 19:59:39.206542   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 19:59:39.206565   26315 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 19:59:39.320050   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727726379.295271539
	
	I0930 19:59:39.320076   26315 fix.go:216] guest clock: 1727726379.295271539
	I0930 19:59:39.320086   26315 fix.go:229] Guest: 2024-09-30 19:59:39.295271539 +0000 UTC Remote: 2024-09-30 19:59:39.202937168 +0000 UTC m=+22.774027114 (delta=92.334371ms)
	I0930 19:59:39.320118   26315 fix.go:200] guest clock delta is within tolerance: 92.334371ms
	I0930 19:59:39.320128   26315 start.go:83] releasing machines lock for "ha-805293", held for 22.784384982s
	I0930 19:59:39.320156   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:39.320464   26315 main.go:141] libmachine: (ha-805293) Calling .GetIP
	I0930 19:59:39.323340   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.323749   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.323763   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.323980   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:39.324511   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:39.324710   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:39.324873   26315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 19:59:39.324922   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:39.324933   26315 ssh_runner.go:195] Run: cat /version.json
	I0930 19:59:39.324953   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:39.327479   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.327790   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.327833   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.327954   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.327975   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:39.328205   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:39.328371   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.328394   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.328435   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:39.328560   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:39.328620   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 19:59:39.328752   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:39.328910   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:39.329053   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 19:59:39.449869   26315 ssh_runner.go:195] Run: systemctl --version
	I0930 19:59:39.457140   26315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 19:59:39.620534   26315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 19:59:39.626812   26315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 19:59:39.626884   26315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 19:59:39.643150   26315 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 19:59:39.643182   26315 start.go:495] detecting cgroup driver to use...
	I0930 19:59:39.643259   26315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 19:59:39.659582   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 19:59:39.673481   26315 docker.go:217] disabling cri-docker service (if available) ...
	I0930 19:59:39.673546   26315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 19:59:39.687166   26315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 19:59:39.700766   26315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 19:59:39.817845   26315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 19:59:39.989160   26315 docker.go:233] disabling docker service ...
	I0930 19:59:39.989251   26315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 19:59:40.003138   26315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 19:59:40.016004   26315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 19:59:40.149065   26315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 19:59:40.264254   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
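Before settling on CRI-O, the provisioner stops containerd and then stops, disables, and masks both cri-dockerd and Docker so that only CRI-O serves as the container runtime on this guest. Consolidated from the Run: lines above (same commands, same order, minus the is-active probes):

    # Keep cri-dockerd and Docker out of the way; CRI-O is the selected runtime.
    sudo systemctl stop -f cri-docker.socket
    sudo systemctl stop -f cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket
    sudo systemctl stop -f docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service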
	I0930 19:59:40.278167   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 19:59:40.296364   26315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 19:59:40.296421   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.306661   26315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 19:59:40.306731   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.317138   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.327466   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.337951   26315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 19:59:40.348585   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.358684   26315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.375315   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
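The runtime-configuration edits above can be read as one small script: point crictl at the CRI-O socket, pin the pause image, align the cgroup driver with the kubelet config generated later, and open unprivileged low ports for pods. A consolidated sketch of the main edits (the cleanup steps also shown above are omitted; the config path is factored into a variable here for brevity):

    CONF=/etc/crio/crio.conf.d/02-crio.conf

    # Tell crictl where the CRI-O socket lives.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    # Pin the pause image and use the cgroupfs driver expected by the kubelet.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"

    # Let pods bind ports below 1024 without extra capabilities.
    sudo grep -q '^ *default_sysctls' "$CONF" || \
        sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"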
	I0930 19:59:40.385587   26315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 19:59:40.394996   26315 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 19:59:40.395092   26315 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 19:59:40.408121   26315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
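The netfilter check above fails on a fresh guest because br_netfilter is not loaded yet; loading the module and enabling IPv4 forwarding are the standard kubeadm networking prerequisites. The same logic as a sketch:

    # Bridged traffic must be visible to iptables, and IPv4 forwarding must be on.
    if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
        sudo modprobe br_netfilter   # provides the sysctl key checked above
    fi
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"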
	I0930 19:59:40.417783   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 19:59:40.532464   26315 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 19:59:40.627203   26315 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 19:59:40.627277   26315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 19:59:40.632142   26315 start.go:563] Will wait 60s for crictl version
	I0930 19:59:40.632198   26315 ssh_runner.go:195] Run: which crictl
	I0930 19:59:40.635892   26315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 19:59:40.673372   26315 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 19:59:40.673453   26315 ssh_runner.go:195] Run: crio --version
	I0930 19:59:40.701810   26315 ssh_runner.go:195] Run: crio --version
	I0930 19:59:40.733603   26315 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 19:59:40.734810   26315 main.go:141] libmachine: (ha-805293) Calling .GetIP
	I0930 19:59:40.737789   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:40.738162   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:40.738188   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:40.738414   26315 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 19:59:40.742812   26315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 19:59:40.755762   26315 kubeadm.go:883] updating cluster {Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 19:59:40.755880   26315 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 19:59:40.755941   26315 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 19:59:40.795843   26315 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 19:59:40.795919   26315 ssh_runner.go:195] Run: which lz4
	I0930 19:59:40.799847   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0930 19:59:40.799948   26315 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 19:59:40.803954   26315 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 19:59:40.803978   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 19:59:42.086885   26315 crio.go:462] duration metric: took 1.286971524s to copy over tarball
	I0930 19:59:42.086956   26315 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 19:59:44.140911   26315 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.053919148s)
	I0930 19:59:44.140946   26315 crio.go:469] duration metric: took 2.054033393s to extract the tarball
	I0930 19:59:44.140956   26315 ssh_runner.go:146] rm: /preloaded.tar.lz4
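The ~389 MB preload tarball is copied into the guest and unpacked under /var so CRI-O starts with all Kubernetes v1.31.1 images already present (confirmed by the second crictl images run just below). A rough manual equivalent, with plain scp standing in for minikube's built-in SSH file copier and the long jenkins paths shortened to ~/.minikube:

    # Copy the cached preload into the guest (scp to the "docker" user is a stand-in
    # for minikube's internal copier shown in the log).
    scp -i ~/.minikube/machines/ha-805293/id_rsa \
        ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 \
        docker@192.168.39.3:/tmp/preloaded.tar.lz4

    # On the guest: unpack image layers and metadata into /var, then clean up.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4
    sudo rm -f /tmp/preloaded.tar.lz4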
	I0930 19:59:44.176934   26315 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 19:59:44.223432   26315 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 19:59:44.223453   26315 cache_images.go:84] Images are preloaded, skipping loading
	I0930 19:59:44.223463   26315 kubeadm.go:934] updating node { 192.168.39.3 8443 v1.31.1 crio true true} ...
	I0930 19:59:44.223618   26315 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-805293 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 19:59:44.223687   26315 ssh_runner.go:195] Run: crio config
	I0930 19:59:44.267892   26315 cni.go:84] Creating CNI manager for ""
	I0930 19:59:44.267913   26315 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0930 19:59:44.267927   26315 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 19:59:44.267969   26315 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-805293 NodeName:ha-805293 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 19:59:44.268143   26315 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-805293"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
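The rendered kubeadm config above is later written to /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml before kubeadm init consumes it (both steps appear further down). As a purely hypothetical aside, not something this run performs, a config of this shape can be exercised without mutating the node via kubeadm's dry-run mode:

    # Hypothetical offline exercise of the generated config (not part of minikube's flow).
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run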
	I0930 19:59:44.268174   26315 kube-vip.go:115] generating kube-vip config ...
	I0930 19:59:44.268226   26315 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 19:59:44.290057   26315 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 19:59:44.290186   26315 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
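This static pod manifest is what gets scp'd to /etc/kubernetes/manifests/kube-vip.yaml a few lines below; it carries the control-plane VIP 192.168.39.254 (APIServerHAVIP in the cluster config) and load-balances API traffic on port 8443. A hypothetical post-start check, not run in this log, would look like:

    # On the control-plane node currently holding leadership, the VIP sits on eth0 ...
    ip addr show dev eth0 | grep 192.168.39.254
    # ... and leadership is tracked through the lease named in the manifest.
    kubectl -n kube-system get lease plndr-cp-lock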
	I0930 19:59:44.290252   26315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 19:59:44.300619   26315 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 19:59:44.300694   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0930 19:59:44.312702   26315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0930 19:59:44.329980   26315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 19:59:44.347106   26315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0930 19:59:44.363429   26315 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0930 19:59:44.379706   26315 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 19:59:44.383786   26315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 19:59:44.396392   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 19:59:44.511834   26315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 19:59:44.528890   26315 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293 for IP: 192.168.39.3
	I0930 19:59:44.528918   26315 certs.go:194] generating shared ca certs ...
	I0930 19:59:44.528990   26315 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:44.529203   26315 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 19:59:44.529261   26315 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 19:59:44.529273   26315 certs.go:256] generating profile certs ...
	I0930 19:59:44.529338   26315 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key
	I0930 19:59:44.529377   26315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.crt with IP's: []
	I0930 19:59:44.693203   26315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.crt ...
	I0930 19:59:44.693232   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.crt: {Name:mk4ee04dd06bd91d73f7f1298e33968b422b097c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:44.693403   26315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key ...
	I0930 19:59:44.693413   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key: {Name:mk2b8ad6c09983ddb0203e6dca1df4008d2fe717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:44.693487   26315 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.1b433d78
	I0930 19:59:44.693501   26315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.1b433d78 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.3 192.168.39.254]
	I0930 19:59:44.767682   26315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.1b433d78 ...
	I0930 19:59:44.767709   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.1b433d78: {Name:mkf1b16d36ab45268d051f89cfe928869656e760 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:44.767864   26315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.1b433d78 ...
	I0930 19:59:44.767875   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.1b433d78: {Name:mk53eca62135b4c1b261b7c937012d89f293e976 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:44.767944   26315 certs.go:381] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.1b433d78 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt
	I0930 19:59:44.768026   26315 certs.go:385] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.1b433d78 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key
	I0930 19:59:44.768082   26315 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key
	I0930 19:59:44.768096   26315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt with IP's: []
	I0930 19:59:45.223535   26315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt ...
	I0930 19:59:45.223567   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt: {Name:mke738cc3ccc573243158c6f5e5f022828f32c28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:45.223723   26315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key ...
	I0930 19:59:45.223733   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key: {Name:mkbfe8ac8fc7a409b1152c27d19ceb3cdc436834 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:45.223814   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 19:59:45.223831   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 19:59:45.223844   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 19:59:45.223854   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 19:59:45.223865   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 19:59:45.223889   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 19:59:45.223908   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 19:59:45.223920   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 19:59:45.223964   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 19:59:45.224006   26315 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 19:59:45.224013   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 19:59:45.224036   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 19:59:45.224057   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 19:59:45.224083   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 19:59:45.224119   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 19:59:45.224143   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem -> /usr/share/ca-certificates/14875.pem
	I0930 19:59:45.224156   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /usr/share/ca-certificates/148752.pem
	I0930 19:59:45.224168   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:59:45.224809   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 19:59:45.251773   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 19:59:45.283221   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 19:59:45.307169   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 19:59:45.340795   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0930 19:59:45.364921   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 19:59:45.388786   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 19:59:45.412412   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 19:59:45.437530   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 19:59:45.462538   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 19:59:45.486247   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 19:59:45.510070   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 19:59:45.527040   26315 ssh_runner.go:195] Run: openssl version
	I0930 19:59:45.532953   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 19:59:45.544314   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 19:59:45.548732   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 19:59:45.548808   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 19:59:45.554737   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 19:59:45.565237   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 19:59:45.576275   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 19:59:45.580833   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 19:59:45.580899   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 19:59:45.586723   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 19:59:45.597151   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 19:59:45.607829   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:59:45.612479   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:59:45.612538   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:59:45.618560   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
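Each CA bundle copied above is linked into /etc/ssl/certs under its OpenSSL subject-hash name so TLS clients on the guest trust it; the link names seen in the log (51391683.0, 3ec20f2e.0, b5213941.0) are the outputs of the corresponding openssl x509 -hash invocations. The general pattern, as a sketch:

    CERT=/usr/share/ca-certificates/minikubeCA.pem   # any of the bundles copied above
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # e.g. b5213941 for minikubeCA in this run
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"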
	I0930 19:59:45.629886   26315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 19:59:45.634469   26315 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 19:59:45.634548   26315 kubeadm.go:392] StartCluster: {Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 19:59:45.634646   26315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 19:59:45.634717   26315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 19:59:45.672608   26315 cri.go:89] found id: ""
	I0930 19:59:45.672680   26315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 19:59:45.682253   26315 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 19:59:45.695746   26315 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 19:59:45.707747   26315 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 19:59:45.707771   26315 kubeadm.go:157] found existing configuration files:
	
	I0930 19:59:45.707824   26315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 19:59:45.717218   26315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 19:59:45.717271   26315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 19:59:45.727134   26315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 19:59:45.736453   26315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 19:59:45.736514   26315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 19:59:45.746137   26315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 19:59:45.755226   26315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 19:59:45.755300   26315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 19:59:45.765188   26315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 19:59:45.774772   26315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 19:59:45.774830   26315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
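
The config-check sequence above follows a simple pattern: for each of the four kubeconfig files under /etc/kubernetes, keep it only if it already references https://control-plane.minikube.internal:8443, otherwise delete it so kubeadm can regenerate it. A minimal Go sketch of that pattern (not minikube's actual implementation, which shells these checks out over SSH rather than reading files locally):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// cleanupStaleConfigs keeps each kubeconfig file only if it already points at
// the expected control-plane endpoint; otherwise it is removed so that
// "kubeadm init" can regenerate it.
func cleanupStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			// Missing file: nothing to clean up (the "No such file" case above).
			continue
		}
		if !bytes.Contains(data, []byte(endpoint)) {
			fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
			os.Remove(f)
		}
	}
}

func main() {
	cleanupStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
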
	I0930 19:59:45.784513   26315 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 19:59:45.891942   26315 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 19:59:45.891997   26315 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 19:59:45.998241   26315 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 19:59:45.998404   26315 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 19:59:45.998552   26315 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 19:59:46.014075   26315 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 19:59:46.112806   26315 out.go:235]   - Generating certificates and keys ...
	I0930 19:59:46.112955   26315 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 19:59:46.113026   26315 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 19:59:46.210951   26315 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0930 19:59:46.354582   26315 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0930 19:59:46.555785   26315 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0930 19:59:46.646311   26315 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0930 19:59:46.770735   26315 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0930 19:59:46.770873   26315 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-805293 localhost] and IPs [192.168.39.3 127.0.0.1 ::1]
	I0930 19:59:47.044600   26315 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0930 19:59:47.044796   26315 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-805293 localhost] and IPs [192.168.39.3 127.0.0.1 ::1]
	I0930 19:59:47.135575   26315 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0930 19:59:47.309550   26315 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0930 19:59:47.407346   26315 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0930 19:59:47.407491   26315 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 19:59:47.782301   26315 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 19:59:47.938840   26315 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 19:59:48.153368   26315 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 19:59:48.373848   26315 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 19:59:48.924719   26315 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 19:59:48.925435   26315 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 19:59:48.929527   26315 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 19:59:48.931731   26315 out.go:235]   - Booting up control plane ...
	I0930 19:59:48.931901   26315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 19:59:48.931984   26315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 19:59:48.932610   26315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 19:59:48.952672   26315 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 19:59:48.959981   26315 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 19:59:48.960193   26315 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 19:59:49.095726   26315 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 19:59:49.095850   26315 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 19:59:49.596721   26315 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.116798ms
	I0930 19:59:49.596826   26315 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 19:59:55.702855   26315 kubeadm.go:310] [api-check] The API server is healthy after 6.110016436s
	I0930 19:59:55.715163   26315 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 19:59:55.739975   26315 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 19:59:56.278812   26315 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 19:59:56.279051   26315 kubeadm.go:310] [mark-control-plane] Marking the node ha-805293 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 19:59:56.293005   26315 kubeadm.go:310] [bootstrap-token] Using token: p0s0d4.yc45k5nzuh1mipkz
	I0930 19:59:56.294535   26315 out.go:235]   - Configuring RBAC rules ...
	I0930 19:59:56.294681   26315 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 19:59:56.299474   26315 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 19:59:56.308838   26315 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 19:59:56.312908   26315 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 19:59:56.320143   26315 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 19:59:56.328834   26315 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 19:59:56.351618   26315 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 19:59:56.617778   26315 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 19:59:57.116458   26315 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 19:59:57.116486   26315 kubeadm.go:310] 
	I0930 19:59:57.116560   26315 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 19:59:57.116570   26315 kubeadm.go:310] 
	I0930 19:59:57.116674   26315 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 19:59:57.116685   26315 kubeadm.go:310] 
	I0930 19:59:57.116719   26315 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 19:59:57.116823   26315 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 19:59:57.116882   26315 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 19:59:57.116886   26315 kubeadm.go:310] 
	I0930 19:59:57.116955   26315 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 19:59:57.116980   26315 kubeadm.go:310] 
	I0930 19:59:57.117053   26315 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 19:59:57.117064   26315 kubeadm.go:310] 
	I0930 19:59:57.117137   26315 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 19:59:57.117202   26315 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 19:59:57.117263   26315 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 19:59:57.117268   26315 kubeadm.go:310] 
	I0930 19:59:57.117377   26315 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 19:59:57.117490   26315 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 19:59:57.117501   26315 kubeadm.go:310] 
	I0930 19:59:57.117607   26315 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token p0s0d4.yc45k5nzuh1mipkz \
	I0930 19:59:57.117749   26315 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a \
	I0930 19:59:57.117783   26315 kubeadm.go:310] 	--control-plane 
	I0930 19:59:57.117789   26315 kubeadm.go:310] 
	I0930 19:59:57.117912   26315 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 19:59:57.117922   26315 kubeadm.go:310] 
	I0930 19:59:57.117993   26315 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token p0s0d4.yc45k5nzuh1mipkz \
	I0930 19:59:57.118080   26315 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a 
	I0930 19:59:57.119219   26315 kubeadm.go:310] W0930 19:59:45.871969     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 19:59:57.119559   26315 kubeadm.go:310] W0930 19:59:45.872918     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 19:59:57.119653   26315 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
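
The join commands printed above pin the cluster CA with --discovery-token-ca-cert-hash. That value is a SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info; a self-contained Go sketch of recomputing such a hash from a CA certificate (the path follows the kubeadm default layout and is an assumption here):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Kubeadm's default CA location; adjust for your cluster.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The hash is taken over the DER-encoded SubjectPublicKeyInfo of the CA key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
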
	I0930 19:59:57.119676   26315 cni.go:84] Creating CNI manager for ""
	I0930 19:59:57.119684   26315 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0930 19:59:57.121508   26315 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0930 19:59:57.122778   26315 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0930 19:59:57.129018   26315 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0930 19:59:57.129033   26315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0930 19:59:57.148058   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0930 19:59:57.490355   26315 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 19:59:57.490415   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:59:57.490422   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-805293 minikube.k8s.io/updated_at=2024_09_30T19_59_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022 minikube.k8s.io/name=ha-805293 minikube.k8s.io/primary=true
	I0930 19:59:57.530433   26315 ops.go:34] apiserver oom_adj: -16
	I0930 19:59:57.632942   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:59:58.133232   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:59:58.633968   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:59:59.133876   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:59:59.633715   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 20:00:00.134062   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 20:00:00.633798   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 20:00:01.133378   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 20:00:01.219465   26315 kubeadm.go:1113] duration metric: took 3.729111543s to wait for elevateKubeSystemPrivileges
	I0930 20:00:01.219521   26315 kubeadm.go:394] duration metric: took 15.584976844s to StartCluster
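
The repeated "kubectl get sa default" invocations above are a readiness poll: the command is retried roughly every 500ms until the default ServiceAccount exists before kube-system privileges are elevated. A minimal sketch of that polling loop (the kubectl path and timeout are assumptions, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount re-runs "kubectl get sa default" until it
// succeeds or the timeout expires, mirroring the ~500ms poll in the log.
func waitForDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("default service account is ready")
}
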
	I0930 20:00:01.219559   26315 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:00:01.219656   26315 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:00:01.220437   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:00:01.220719   26315 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:00:01.220739   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0930 20:00:01.220750   26315 start.go:241] waiting for startup goroutines ...
	I0930 20:00:01.220771   26315 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 20:00:01.220861   26315 addons.go:69] Setting storage-provisioner=true in profile "ha-805293"
	I0930 20:00:01.220890   26315 addons.go:234] Setting addon storage-provisioner=true in "ha-805293"
	I0930 20:00:01.220907   26315 addons.go:69] Setting default-storageclass=true in profile "ha-805293"
	I0930 20:00:01.220929   26315 host.go:66] Checking if "ha-805293" exists ...
	I0930 20:00:01.220943   26315 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-805293"
	I0930 20:00:01.220958   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:00:01.221373   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:01.221421   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:01.221455   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:01.221495   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:01.237192   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38991
	I0930 20:00:01.237232   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44093
	I0930 20:00:01.237724   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:01.237776   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:01.238255   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:01.238280   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:01.238371   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:01.238394   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:01.238662   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:01.238738   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:01.238902   26315 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 20:00:01.239184   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:01.239227   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:01.241145   26315 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:00:01.241484   26315 kapi.go:59] client config for ha-805293: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key", CAFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0930 20:00:01.242040   26315 cert_rotation.go:140] Starting client certificate rotation controller
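
The rest.Config dump above is the client configuration built from the freshly written kubeconfig. A minimal client-go sketch of loading the same kubeconfig into a rest.Config (assuming a recent k8s.io/client-go is available; the path is taken from the log):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log above.
	config, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/19736-7672/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Println("API server:", config.Host, "client ready:", clientset != nil)
}
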
	I0930 20:00:01.242321   26315 addons.go:234] Setting addon default-storageclass=true in "ha-805293"
	I0930 20:00:01.242364   26315 host.go:66] Checking if "ha-805293" exists ...
	I0930 20:00:01.242753   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:01.242800   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:01.255454   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34783
	I0930 20:00:01.255998   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:01.256626   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:01.256655   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:01.257008   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:01.257244   26315 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 20:00:01.258602   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38221
	I0930 20:00:01.259101   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:01.259492   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:00:01.259705   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:01.259732   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:01.260119   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:01.260656   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:01.260698   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:01.261796   26315 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 20:00:01.263230   26315 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 20:00:01.263251   26315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 20:00:01.263275   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:00:01.266511   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:01.266953   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:00:01.266979   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:01.267159   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:00:01.267342   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:00:01.267495   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:00:01.267640   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
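
The "scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml" step above copies an in-memory manifest to the node over the SSH client created here. A rough Go sketch of that kind of transfer using ssh plus sudo tee (not minikube's actual transport; the host and key path are taken from the log, the flags are assumptions):

package main

import (
	"bytes"
	"os/exec"
)

// copyBytesOverSSH writes in-memory data to a remote path by piping it through
// ssh into "sudo tee"; a rough stand-in for the "scp memory" step above.
func copyBytesOverSSH(data []byte, host, keyPath, remotePath string) error {
	cmd := exec.Command("ssh",
		"-i", keyPath,
		"-o", "StrictHostKeyChecking=no",
		"docker@"+host,
		"sudo tee "+remotePath+" >/dev/null")
	cmd.Stdin = bytes.NewReader(data)
	return cmd.Run()
}

func main() {
	manifest := []byte("# storage-provisioner manifest bytes would go here\n")
	_ = copyBytesOverSSH(manifest, "192.168.39.3",
		"/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa",
		"/etc/kubernetes/addons/storage-provisioner.yaml")
}
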
	I0930 20:00:01.276774   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42613
	I0930 20:00:01.277256   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:01.277779   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:01.277808   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:01.278167   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:01.278348   26315 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 20:00:01.279998   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:00:01.280191   26315 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 20:00:01.280204   26315 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 20:00:01.280218   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:00:01.282743   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:01.283181   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:00:01.283205   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:01.283377   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:00:01.283566   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:00:01.283719   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:00:01.283866   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:00:01.308679   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0930 20:00:01.431260   26315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 20:00:01.433924   26315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 20:00:01.558490   26315 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
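
The sed pipeline above rewrites the coredns ConfigMap so that the Corefile gains a log directive and a hosts block resolving host.minikube.internal to the host gateway. The injected stanza, as encoded in that command, is:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
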
	I0930 20:00:01.621587   26315 main.go:141] libmachine: Making call to close driver server
	I0930 20:00:01.621614   26315 main.go:141] libmachine: (ha-805293) Calling .Close
	I0930 20:00:01.621883   26315 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:00:01.621900   26315 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:00:01.621908   26315 main.go:141] libmachine: Making call to close driver server
	I0930 20:00:01.621931   26315 main.go:141] libmachine: (ha-805293) DBG | Closing plugin on server side
	I0930 20:00:01.621995   26315 main.go:141] libmachine: (ha-805293) Calling .Close
	I0930 20:00:01.622217   26315 main.go:141] libmachine: (ha-805293) DBG | Closing plugin on server side
	I0930 20:00:01.622234   26315 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:00:01.622247   26315 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:00:01.622328   26315 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0930 20:00:01.622377   26315 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0930 20:00:01.622485   26315 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0930 20:00:01.622496   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:01.622504   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:01.622508   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:01.630544   26315 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0930 20:00:01.631089   26315 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0930 20:00:01.631103   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:01.631110   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:01.631115   26315 round_trippers.go:473]     Content-Type: application/json
	I0930 20:00:01.631119   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:01.636731   26315 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 20:00:01.636889   26315 main.go:141] libmachine: Making call to close driver server
	I0930 20:00:01.636905   26315 main.go:141] libmachine: (ha-805293) Calling .Close
	I0930 20:00:01.637222   26315 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:00:01.637249   26315 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:00:01.637227   26315 main.go:141] libmachine: (ha-805293) DBG | Closing plugin on server side
	I0930 20:00:01.910454   26315 main.go:141] libmachine: Making call to close driver server
	I0930 20:00:01.910493   26315 main.go:141] libmachine: (ha-805293) Calling .Close
	I0930 20:00:01.910790   26315 main.go:141] libmachine: (ha-805293) DBG | Closing plugin on server side
	I0930 20:00:01.910900   26315 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:00:01.910916   26315 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:00:01.910928   26315 main.go:141] libmachine: Making call to close driver server
	I0930 20:00:01.910933   26315 main.go:141] libmachine: (ha-805293) Calling .Close
	I0930 20:00:01.911215   26315 main.go:141] libmachine: (ha-805293) DBG | Closing plugin on server side
	I0930 20:00:01.911245   26315 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:00:01.911255   26315 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:00:01.913341   26315 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0930 20:00:01.914640   26315 addons.go:510] duration metric: took 693.870653ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0930 20:00:01.914685   26315 start.go:246] waiting for cluster config update ...
	I0930 20:00:01.914700   26315 start.go:255] writing updated cluster config ...
	I0930 20:00:01.917528   26315 out.go:201] 
	I0930 20:00:01.919324   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:00:01.919441   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:00:01.921983   26315 out.go:177] * Starting "ha-805293-m02" control-plane node in "ha-805293" cluster
	I0930 20:00:01.923837   26315 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 20:00:01.923877   26315 cache.go:56] Caching tarball of preloaded images
	I0930 20:00:01.924007   26315 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 20:00:01.924027   26315 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 20:00:01.924140   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:00:01.924406   26315 start.go:360] acquireMachinesLock for ha-805293-m02: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 20:00:01.924476   26315 start.go:364] duration metric: took 42.723µs to acquireMachinesLock for "ha-805293-m02"
	I0930 20:00:01.924503   26315 start.go:93] Provisioning new machine with config: &{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:00:01.924602   26315 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0930 20:00:01.926254   26315 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 20:00:01.926373   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:01.926422   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:01.942099   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43055
	I0930 20:00:01.942642   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:01.943165   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:01.943189   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:01.943522   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:01.943810   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetMachineName
	I0930 20:00:01.943943   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:01.944136   26315 start.go:159] libmachine.API.Create for "ha-805293" (driver="kvm2")
	I0930 20:00:01.944171   26315 client.go:168] LocalClient.Create starting
	I0930 20:00:01.944215   26315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem
	I0930 20:00:01.944259   26315 main.go:141] libmachine: Decoding PEM data...
	I0930 20:00:01.944280   26315 main.go:141] libmachine: Parsing certificate...
	I0930 20:00:01.944361   26315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem
	I0930 20:00:01.944395   26315 main.go:141] libmachine: Decoding PEM data...
	I0930 20:00:01.944410   26315 main.go:141] libmachine: Parsing certificate...
	I0930 20:00:01.944433   26315 main.go:141] libmachine: Running pre-create checks...
	I0930 20:00:01.944443   26315 main.go:141] libmachine: (ha-805293-m02) Calling .PreCreateCheck
	I0930 20:00:01.944614   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetConfigRaw
	I0930 20:00:01.945016   26315 main.go:141] libmachine: Creating machine...
	I0930 20:00:01.945030   26315 main.go:141] libmachine: (ha-805293-m02) Calling .Create
	I0930 20:00:01.945196   26315 main.go:141] libmachine: (ha-805293-m02) Creating KVM machine...
	I0930 20:00:01.946629   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found existing default KVM network
	I0930 20:00:01.946731   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found existing private KVM network mk-ha-805293
	I0930 20:00:01.946865   26315 main.go:141] libmachine: (ha-805293-m02) Setting up store path in /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02 ...
	I0930 20:00:01.946894   26315 main.go:141] libmachine: (ha-805293-m02) Building disk image from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 20:00:01.946988   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:01.946872   26664 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:00:01.947079   26315 main.go:141] libmachine: (ha-805293-m02) Downloading /home/jenkins/minikube-integration/19736-7672/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 20:00:02.217368   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:02.217234   26664 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa...
	I0930 20:00:02.510082   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:02.509926   26664 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/ha-805293-m02.rawdisk...
	I0930 20:00:02.510127   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Writing magic tar header
	I0930 20:00:02.510145   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Writing SSH key tar header
	I0930 20:00:02.510158   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:02.510035   26664 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02 ...
	I0930 20:00:02.510175   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02
	I0930 20:00:02.510188   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines
	I0930 20:00:02.510199   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:00:02.510217   26315 main.go:141] libmachine: (ha-805293-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02 (perms=drwx------)
	I0930 20:00:02.510229   26315 main.go:141] libmachine: (ha-805293-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines (perms=drwxr-xr-x)
	I0930 20:00:02.510240   26315 main.go:141] libmachine: (ha-805293-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube (perms=drwxr-xr-x)
	I0930 20:00:02.510255   26315 main.go:141] libmachine: (ha-805293-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672 (perms=drwxrwxr-x)
	I0930 20:00:02.510266   26315 main.go:141] libmachine: (ha-805293-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 20:00:02.510281   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672
	I0930 20:00:02.510294   26315 main.go:141] libmachine: (ha-805293-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 20:00:02.510308   26315 main.go:141] libmachine: (ha-805293-m02) Creating domain...
	I0930 20:00:02.510328   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 20:00:02.510352   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home/jenkins
	I0930 20:00:02.510359   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home
	I0930 20:00:02.510364   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Skipping /home - not owner
	I0930 20:00:02.511282   26315 main.go:141] libmachine: (ha-805293-m02) define libvirt domain using xml: 
	I0930 20:00:02.511306   26315 main.go:141] libmachine: (ha-805293-m02) <domain type='kvm'>
	I0930 20:00:02.511317   26315 main.go:141] libmachine: (ha-805293-m02)   <name>ha-805293-m02</name>
	I0930 20:00:02.511328   26315 main.go:141] libmachine: (ha-805293-m02)   <memory unit='MiB'>2200</memory>
	I0930 20:00:02.511338   26315 main.go:141] libmachine: (ha-805293-m02)   <vcpu>2</vcpu>
	I0930 20:00:02.511348   26315 main.go:141] libmachine: (ha-805293-m02)   <features>
	I0930 20:00:02.511357   26315 main.go:141] libmachine: (ha-805293-m02)     <acpi/>
	I0930 20:00:02.511364   26315 main.go:141] libmachine: (ha-805293-m02)     <apic/>
	I0930 20:00:02.511371   26315 main.go:141] libmachine: (ha-805293-m02)     <pae/>
	I0930 20:00:02.511377   26315 main.go:141] libmachine: (ha-805293-m02)     
	I0930 20:00:02.511388   26315 main.go:141] libmachine: (ha-805293-m02)   </features>
	I0930 20:00:02.511395   26315 main.go:141] libmachine: (ha-805293-m02)   <cpu mode='host-passthrough'>
	I0930 20:00:02.511405   26315 main.go:141] libmachine: (ha-805293-m02)   
	I0930 20:00:02.511416   26315 main.go:141] libmachine: (ha-805293-m02)   </cpu>
	I0930 20:00:02.511444   26315 main.go:141] libmachine: (ha-805293-m02)   <os>
	I0930 20:00:02.511468   26315 main.go:141] libmachine: (ha-805293-m02)     <type>hvm</type>
	I0930 20:00:02.511481   26315 main.go:141] libmachine: (ha-805293-m02)     <boot dev='cdrom'/>
	I0930 20:00:02.511494   26315 main.go:141] libmachine: (ha-805293-m02)     <boot dev='hd'/>
	I0930 20:00:02.511505   26315 main.go:141] libmachine: (ha-805293-m02)     <bootmenu enable='no'/>
	I0930 20:00:02.511512   26315 main.go:141] libmachine: (ha-805293-m02)   </os>
	I0930 20:00:02.511517   26315 main.go:141] libmachine: (ha-805293-m02)   <devices>
	I0930 20:00:02.511535   26315 main.go:141] libmachine: (ha-805293-m02)     <disk type='file' device='cdrom'>
	I0930 20:00:02.511552   26315 main.go:141] libmachine: (ha-805293-m02)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/boot2docker.iso'/>
	I0930 20:00:02.511561   26315 main.go:141] libmachine: (ha-805293-m02)       <target dev='hdc' bus='scsi'/>
	I0930 20:00:02.511591   26315 main.go:141] libmachine: (ha-805293-m02)       <readonly/>
	I0930 20:00:02.511613   26315 main.go:141] libmachine: (ha-805293-m02)     </disk>
	I0930 20:00:02.511630   26315 main.go:141] libmachine: (ha-805293-m02)     <disk type='file' device='disk'>
	I0930 20:00:02.511644   26315 main.go:141] libmachine: (ha-805293-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 20:00:02.511661   26315 main.go:141] libmachine: (ha-805293-m02)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/ha-805293-m02.rawdisk'/>
	I0930 20:00:02.511673   26315 main.go:141] libmachine: (ha-805293-m02)       <target dev='hda' bus='virtio'/>
	I0930 20:00:02.511692   26315 main.go:141] libmachine: (ha-805293-m02)     </disk>
	I0930 20:00:02.511711   26315 main.go:141] libmachine: (ha-805293-m02)     <interface type='network'>
	I0930 20:00:02.511729   26315 main.go:141] libmachine: (ha-805293-m02)       <source network='mk-ha-805293'/>
	I0930 20:00:02.511746   26315 main.go:141] libmachine: (ha-805293-m02)       <model type='virtio'/>
	I0930 20:00:02.511758   26315 main.go:141] libmachine: (ha-805293-m02)     </interface>
	I0930 20:00:02.511769   26315 main.go:141] libmachine: (ha-805293-m02)     <interface type='network'>
	I0930 20:00:02.511784   26315 main.go:141] libmachine: (ha-805293-m02)       <source network='default'/>
	I0930 20:00:02.511795   26315 main.go:141] libmachine: (ha-805293-m02)       <model type='virtio'/>
	I0930 20:00:02.511824   26315 main.go:141] libmachine: (ha-805293-m02)     </interface>
	I0930 20:00:02.511843   26315 main.go:141] libmachine: (ha-805293-m02)     <serial type='pty'>
	I0930 20:00:02.511853   26315 main.go:141] libmachine: (ha-805293-m02)       <target port='0'/>
	I0930 20:00:02.511862   26315 main.go:141] libmachine: (ha-805293-m02)     </serial>
	I0930 20:00:02.511870   26315 main.go:141] libmachine: (ha-805293-m02)     <console type='pty'>
	I0930 20:00:02.511881   26315 main.go:141] libmachine: (ha-805293-m02)       <target type='serial' port='0'/>
	I0930 20:00:02.511892   26315 main.go:141] libmachine: (ha-805293-m02)     </console>
	I0930 20:00:02.511901   26315 main.go:141] libmachine: (ha-805293-m02)     <rng model='virtio'>
	I0930 20:00:02.511910   26315 main.go:141] libmachine: (ha-805293-m02)       <backend model='random'>/dev/random</backend>
	I0930 20:00:02.511924   26315 main.go:141] libmachine: (ha-805293-m02)     </rng>
	I0930 20:00:02.511933   26315 main.go:141] libmachine: (ha-805293-m02)     
	I0930 20:00:02.511939   26315 main.go:141] libmachine: (ha-805293-m02)     
	I0930 20:00:02.511949   26315 main.go:141] libmachine: (ha-805293-m02)   </devices>
	I0930 20:00:02.511958   26315 main.go:141] libmachine: (ha-805293-m02) </domain>
	I0930 20:00:02.511969   26315 main.go:141] libmachine: (ha-805293-m02) 
	I0930 20:00:02.519423   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:35:68:69 in network default
	I0930 20:00:02.520096   26315 main.go:141] libmachine: (ha-805293-m02) Ensuring networks are active...
	I0930 20:00:02.520113   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:02.521080   26315 main.go:141] libmachine: (ha-805293-m02) Ensuring network default is active
	I0930 20:00:02.521471   26315 main.go:141] libmachine: (ha-805293-m02) Ensuring network mk-ha-805293 is active
	I0930 20:00:02.521811   26315 main.go:141] libmachine: (ha-805293-m02) Getting domain xml...
	I0930 20:00:02.522473   26315 main.go:141] libmachine: (ha-805293-m02) Creating domain...
	I0930 20:00:03.765540   26315 main.go:141] libmachine: (ha-805293-m02) Waiting to get IP...
	I0930 20:00:03.766353   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:03.766729   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:03.766750   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:03.766699   26664 retry.go:31] will retry after 241.920356ms: waiting for machine to come up
	I0930 20:00:04.010129   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:04.010801   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:04.010826   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:04.010761   26664 retry.go:31] will retry after 344.430245ms: waiting for machine to come up
	I0930 20:00:04.356311   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:04.356795   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:04.356815   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:04.356767   26664 retry.go:31] will retry after 377.488147ms: waiting for machine to come up
	I0930 20:00:04.736359   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:04.736817   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:04.736839   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:04.736768   26664 retry.go:31] will retry after 400.421105ms: waiting for machine to come up
	I0930 20:00:05.138514   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:05.139019   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:05.139050   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:05.138967   26664 retry.go:31] will retry after 547.144087ms: waiting for machine to come up
	I0930 20:00:05.688116   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:05.688838   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:05.688865   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:05.688769   26664 retry.go:31] will retry after 610.482897ms: waiting for machine to come up
	I0930 20:00:06.301403   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:06.301917   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:06.301945   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:06.301866   26664 retry.go:31] will retry after 792.553977ms: waiting for machine to come up
	I0930 20:00:07.096834   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:07.097300   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:07.097331   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:07.097234   26664 retry.go:31] will retry after 1.20008256s: waiting for machine to come up
	I0930 20:00:08.299714   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:08.300169   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:08.300191   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:08.300137   26664 retry.go:31] will retry after 1.678792143s: waiting for machine to come up
	I0930 20:00:09.980216   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:09.980657   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:09.980685   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:09.980618   26664 retry.go:31] will retry after 2.098959289s: waiting for machine to come up
	I0930 20:00:12.080886   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:12.081433   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:12.081474   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:12.081377   26664 retry.go:31] will retry after 2.748866897s: waiting for machine to come up
	I0930 20:00:14.833188   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:14.833722   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:14.833748   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:14.833682   26664 retry.go:31] will retry after 2.379918836s: waiting for machine to come up
	I0930 20:00:17.215678   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:17.216060   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:17.216093   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:17.215999   26664 retry.go:31] will retry after 4.355514313s: waiting for machine to come up
	I0930 20:00:21.576523   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.577032   26315 main.go:141] libmachine: (ha-805293-m02) Found IP for machine: 192.168.39.220
	I0930 20:00:21.577053   26315 main.go:141] libmachine: (ha-805293-m02) Reserving static IP address...
	I0930 20:00:21.577065   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has current primary IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.577388   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find host DHCP lease matching {name: "ha-805293-m02", mac: "52:54:00:fe:f4:56", ip: "192.168.39.220"} in network mk-ha-805293
	I0930 20:00:21.655408   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Getting to WaitForSSH function...
	I0930 20:00:21.655444   26315 main.go:141] libmachine: (ha-805293-m02) Reserved static IP address: 192.168.39.220
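
The "will retry after ..." lines above are a backoff loop: the libvirt network is repeatedly checked for a DHCP lease matching the new domain's MAC address, with a growing, jittered wait between attempts until an IP appears. A self-contained Go sketch of that pattern (fakeLookup is a hypothetical stand-in for the real lease query, and the exact backoff schedule is an assumption):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// fakeLookup stands in for querying the libvirt network's DHCP leases by MAC;
// here it "finds" the lease on the fourth attempt just to exercise the loop.
func fakeLookup() func(mac string) (string, error) {
	calls := 0
	return func(mac string) (string, error) {
		calls++
		if calls < 4 {
			return "", errNoLease
		}
		return "192.168.39.220", nil
	}
}

// waitForIP retries the lookup with a growing, jittered delay between attempts.
func waitForIP(lookup func(string) (string, error), mac string, maxAttempts int) (string, error) {
	delay := 250 * time.Millisecond
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if ip, err := lookup(mac); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay between attempts
	}
	return "", fmt.Errorf("machine %s never obtained an IP", mac)
}

func main() {
	ip, err := waitForIP(fakeLookup(), "52:54:00:fe:f4:56", 10)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("Found IP for machine:", ip)
}
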
	I0930 20:00:21.655509   26315 main.go:141] libmachine: (ha-805293-m02) Waiting for SSH to be available...
	I0930 20:00:21.658005   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.658453   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:21.658491   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.658732   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Using SSH client type: external
	I0930 20:00:21.658759   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa (-rw-------)
	I0930 20:00:21.658792   26315 main.go:141] libmachine: (ha-805293-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 20:00:21.658808   26315 main.go:141] libmachine: (ha-805293-m02) DBG | About to run SSH command:
	I0930 20:00:21.658825   26315 main.go:141] libmachine: (ha-805293-m02) DBG | exit 0
	I0930 20:00:21.787681   26315 main.go:141] libmachine: (ha-805293-m02) DBG | SSH cmd err, output: <nil>: 
	I0930 20:00:21.788011   26315 main.go:141] libmachine: (ha-805293-m02) KVM machine creation complete!
	I0930 20:00:21.788252   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetConfigRaw
	I0930 20:00:21.788786   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:21.788970   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:21.789203   26315 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 20:00:21.789220   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetState
	I0930 20:00:21.790562   26315 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 20:00:21.790578   26315 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 20:00:21.790584   26315 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 20:00:21.790592   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:21.792832   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.793247   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:21.793275   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.793444   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:21.793624   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:21.793794   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:21.793936   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:21.794099   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:00:21.794370   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0930 20:00:21.794384   26315 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 20:00:21.906923   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 20:00:21.906949   26315 main.go:141] libmachine: Detecting the provisioner...
	I0930 20:00:21.906961   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:21.910153   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.910565   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:21.910596   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.910764   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:21.910979   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:21.911241   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:21.911375   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:21.911534   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:00:21.911713   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0930 20:00:21.911726   26315 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 20:00:22.024080   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 20:00:22.024153   26315 main.go:141] libmachine: found compatible host: buildroot
	I0930 20:00:22.024160   26315 main.go:141] libmachine: Provisioning with buildroot...
	I0930 20:00:22.024170   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetMachineName
	I0930 20:00:22.024471   26315 buildroot.go:166] provisioning hostname "ha-805293-m02"
	I0930 20:00:22.024504   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetMachineName
	I0930 20:00:22.024708   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.027328   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.027816   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.027846   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.028043   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:22.028244   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.028415   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.028559   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:22.028711   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:00:22.028924   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0930 20:00:22.028951   26315 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-805293-m02 && echo "ha-805293-m02" | sudo tee /etc/hostname
	I0930 20:00:22.153517   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-805293-m02
	
	I0930 20:00:22.153558   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.156342   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.156867   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.156892   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.157066   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:22.157250   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.157398   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.157520   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:22.157658   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:00:22.157834   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0930 20:00:22.157856   26315 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-805293-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-805293-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-805293-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 20:00:22.280453   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
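The hostname step above is self-contained: it sets the transient hostname, persists it to /etc/hostname, and only rewrites /etc/hosts when no entry for the new name exists, preferring to update an existing 127.0.1.1 line over appending a duplicate. A minimal standalone sketch of that same sequence, with the hostname hard-coded to ha-805293-m02 purely for illustration:

    # Set the transient hostname and persist it (same commands as in the log).
    sudo hostname ha-805293-m02 && echo "ha-805293-m02" | sudo tee /etc/hostname

    # Make the /etc/hosts update idempotent: skip if the name is already mapped,
    # rewrite an existing 127.0.1.1 entry if present, otherwise append one.
    if ! grep -q '\sha-805293-m02$' /etc/hosts; then
      if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
        sudo sed -i 's/^127\.0\.1\.1\s.*/127.0.1.1 ha-805293-m02/' /etc/hosts
      else
        echo '127.0.1.1 ha-805293-m02' | sudo tee -a /etc/hosts
      fi
    fi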
	I0930 20:00:22.280490   26315 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 20:00:22.280513   26315 buildroot.go:174] setting up certificates
	I0930 20:00:22.280524   26315 provision.go:84] configureAuth start
	I0930 20:00:22.280537   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetMachineName
	I0930 20:00:22.280873   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetIP
	I0930 20:00:22.283731   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.284096   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.284121   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.284311   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.286698   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.287078   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.287108   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.287262   26315 provision.go:143] copyHostCerts
	I0930 20:00:22.287296   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:00:22.287337   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 20:00:22.287351   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:00:22.287424   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 20:00:22.287503   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:00:22.287521   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 20:00:22.287557   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:00:22.287594   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 20:00:22.287648   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:00:22.287664   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 20:00:22.287668   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:00:22.287689   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 20:00:22.287737   26315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.ha-805293-m02 san=[127.0.0.1 192.168.39.220 ha-805293-m02 localhost minikube]
	I0930 20:00:22.355076   26315 provision.go:177] copyRemoteCerts
	I0930 20:00:22.355131   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 20:00:22.355153   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.357993   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.358290   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.358317   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.358695   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:22.358872   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.358992   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:22.359090   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa Username:docker}
	I0930 20:00:22.445399   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 20:00:22.445470   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 20:00:22.469429   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 20:00:22.469516   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 20:00:22.492675   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 20:00:22.492763   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 20:00:22.515601   26315 provision.go:87] duration metric: took 235.062596ms to configureAuth
	I0930 20:00:22.515633   26315 buildroot.go:189] setting minikube options for container-runtime
	I0930 20:00:22.515833   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:00:22.515926   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.518627   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.519062   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.519101   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.519248   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:22.519447   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.519617   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.519768   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:22.519918   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:00:22.520077   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0930 20:00:22.520090   26315 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 20:00:22.744066   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
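The container-runtime option above is applied through a plain sysconfig drop-in rather than by editing the main CRI-O config: minikube writes CRIO_MINIKUBE_OPTIONS with --insecure-registry for the service CIDR and restarts the daemon (the crio unit on the Buildroot image is assumed to source that file). A condensed sketch of that single step, using only the path and flag values visible in the log:

    # Write the sysconfig drop-in, then restart CRI-O so the
    # --insecure-registry flag for the 10.96.0.0/12 service CIDR takes effect.
    sudo mkdir -p /etc/sysconfig
    printf "%s\n" "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio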
	I0930 20:00:22.744092   26315 main.go:141] libmachine: Checking connection to Docker...
	I0930 20:00:22.744101   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetURL
	I0930 20:00:22.745446   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Using libvirt version 6000000
	I0930 20:00:22.747635   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.748132   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.748161   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.748303   26315 main.go:141] libmachine: Docker is up and running!
	I0930 20:00:22.748319   26315 main.go:141] libmachine: Reticulating splines...
	I0930 20:00:22.748327   26315 client.go:171] duration metric: took 20.804148382s to LocalClient.Create
	I0930 20:00:22.748348   26315 start.go:167] duration metric: took 20.804213197s to libmachine.API.Create "ha-805293"
	I0930 20:00:22.748357   26315 start.go:293] postStartSetup for "ha-805293-m02" (driver="kvm2")
	I0930 20:00:22.748367   26315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 20:00:22.748386   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:22.748624   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 20:00:22.748654   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.750830   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.751166   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.751190   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.751299   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:22.751468   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.751612   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:22.751720   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa Username:docker}
	I0930 20:00:22.837496   26315 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 20:00:22.841510   26315 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 20:00:22.841546   26315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 20:00:22.841623   26315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 20:00:22.841717   26315 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 20:00:22.841730   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /etc/ssl/certs/148752.pem
	I0930 20:00:22.841843   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 20:00:22.851144   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:00:22.877058   26315 start.go:296] duration metric: took 128.687557ms for postStartSetup
	I0930 20:00:22.877104   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetConfigRaw
	I0930 20:00:22.877761   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetIP
	I0930 20:00:22.880570   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.880908   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.880931   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.881333   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:00:22.881547   26315 start.go:128] duration metric: took 20.956931205s to createHost
	I0930 20:00:22.881569   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.883882   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.884228   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.884246   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.884419   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:22.884601   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.884779   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.884913   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:22.885087   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:00:22.885252   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0930 20:00:22.885264   26315 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 20:00:23.000299   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727726422.960119850
	
	I0930 20:00:23.000326   26315 fix.go:216] guest clock: 1727726422.960119850
	I0930 20:00:23.000338   26315 fix.go:229] Guest: 2024-09-30 20:00:22.96011985 +0000 UTC Remote: 2024-09-30 20:00:22.881558413 +0000 UTC m=+66.452648359 (delta=78.561437ms)
	I0930 20:00:23.000357   26315 fix.go:200] guest clock delta is within tolerance: 78.561437ms
	I0930 20:00:23.000364   26315 start.go:83] releasing machines lock for "ha-805293-m02", held for 21.075876017s
	I0930 20:00:23.000382   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:23.000682   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetIP
	I0930 20:00:23.003439   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:23.003855   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:23.003882   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:23.006309   26315 out.go:177] * Found network options:
	I0930 20:00:23.008016   26315 out.go:177]   - NO_PROXY=192.168.39.3
	W0930 20:00:23.009484   26315 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 20:00:23.009519   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:23.010257   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:23.010450   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:23.010558   26315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 20:00:23.010606   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	W0930 20:00:23.010646   26315 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 20:00:23.010724   26315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 20:00:23.010747   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:23.013581   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:23.013752   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:23.013960   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:23.013983   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:23.014161   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:23.014186   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:23.014187   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:23.014404   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:23.014410   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:23.014563   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:23.014595   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:23.014659   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa Username:docker}
	I0930 20:00:23.014695   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:23.014791   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa Username:docker}
	I0930 20:00:23.259199   26315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 20:00:23.264710   26315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 20:00:23.264772   26315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 20:00:23.281650   26315 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 20:00:23.281678   26315 start.go:495] detecting cgroup driver to use...
	I0930 20:00:23.281745   26315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 20:00:23.300954   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 20:00:23.318197   26315 docker.go:217] disabling cri-docker service (if available) ...
	I0930 20:00:23.318266   26315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 20:00:23.334729   26315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 20:00:23.351325   26315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 20:00:23.494840   26315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 20:00:23.659365   26315 docker.go:233] disabling docker service ...
	I0930 20:00:23.659442   26315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 20:00:23.673200   26315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 20:00:23.686244   26315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 20:00:23.816616   26315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 20:00:23.949421   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 20:00:23.963035   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 20:00:23.981793   26315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 20:00:23.981869   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:23.992506   26315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 20:00:23.992572   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:24.003215   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:24.013791   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:24.024890   26315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 20:00:24.036504   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:24.046845   26315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:24.063744   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:24.074710   26315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 20:00:24.084399   26315 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 20:00:24.084456   26315 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 20:00:24.097779   26315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 20:00:24.107679   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:00:24.245414   26315 ssh_runner.go:195] Run: sudo systemctl restart crio
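Taken together, the runtime-configuration commands above do four things before that restart: point CRI-O at the registry.k8s.io/pause:3.10 pause image, switch the cgroup manager to cgroupfs with conmon placed in the pod cgroup, open unprivileged low ports via default_sysctls (net.ipv4.ip_unprivileged_port_start=0), and load the kernel prerequisites (br_netfilter, IPv4 forwarding) that the netfilter probe found missing. A condensed sketch of the same edits against /etc/crio/crio.conf.d/02-crio.conf, kept to the values shown in the log:

    # Pause image and cgroup driver, as set by the sed commands above.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf

    # Kernel prerequisites for pod networking, then apply everything with a restart.
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio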
	I0930 20:00:24.332691   26315 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 20:00:24.332763   26315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 20:00:24.337609   26315 start.go:563] Will wait 60s for crictl version
	I0930 20:00:24.337672   26315 ssh_runner.go:195] Run: which crictl
	I0930 20:00:24.341369   26315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 20:00:24.379294   26315 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 20:00:24.379384   26315 ssh_runner.go:195] Run: crio --version
	I0930 20:00:24.407964   26315 ssh_runner.go:195] Run: crio --version
	I0930 20:00:24.438040   26315 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 20:00:24.439799   26315 out.go:177]   - env NO_PROXY=192.168.39.3
	I0930 20:00:24.441127   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetIP
	I0930 20:00:24.443641   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:24.443999   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:24.444023   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:24.444256   26315 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 20:00:24.448441   26315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 20:00:24.460479   26315 mustload.go:65] Loading cluster: ha-805293
	I0930 20:00:24.460673   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:00:24.460911   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:24.460946   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:24.475845   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41627
	I0930 20:00:24.476505   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:24.476991   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:24.477013   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:24.477336   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:24.477545   26315 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 20:00:24.479156   26315 host.go:66] Checking if "ha-805293" exists ...
	I0930 20:00:24.479566   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:24.479614   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:24.494163   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38137
	I0930 20:00:24.494690   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:24.495134   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:24.495156   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:24.495462   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:24.495672   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:00:24.495840   26315 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293 for IP: 192.168.39.220
	I0930 20:00:24.495854   26315 certs.go:194] generating shared ca certs ...
	I0930 20:00:24.495872   26315 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:00:24.495990   26315 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 20:00:24.496030   26315 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 20:00:24.496038   26315 certs.go:256] generating profile certs ...
	I0930 20:00:24.496099   26315 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key
	I0930 20:00:24.496121   26315 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.25883032
	I0930 20:00:24.496134   26315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.25883032 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.3 192.168.39.220 192.168.39.254]
	I0930 20:00:24.563341   26315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.25883032 ...
	I0930 20:00:24.563370   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.25883032: {Name:mk8534a0b1f65471035122400012ca9f075cb68b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:00:24.563553   26315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.25883032 ...
	I0930 20:00:24.563580   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.25883032: {Name:mkdff9b5cf02688bad7cef701430e9d45f427c09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:00:24.563669   26315 certs.go:381] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.25883032 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt
	I0930 20:00:24.563804   26315 certs.go:385] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.25883032 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key
	I0930 20:00:24.563922   26315 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key
	I0930 20:00:24.563935   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 20:00:24.563949   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 20:00:24.563961   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 20:00:24.563971   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 20:00:24.563981   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 20:00:24.563992   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 20:00:24.564001   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 20:00:24.564012   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 20:00:24.564058   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 20:00:24.564087   26315 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 20:00:24.564096   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 20:00:24.564116   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 20:00:24.564137   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 20:00:24.564157   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 20:00:24.564196   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:00:24.564221   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem -> /usr/share/ca-certificates/14875.pem
	I0930 20:00:24.564233   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /usr/share/ca-certificates/148752.pem
	I0930 20:00:24.564246   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:00:24.564276   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:00:24.567674   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:24.568209   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:00:24.568244   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:24.568458   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:00:24.568679   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:00:24.568859   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:00:24.569017   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:00:24.647988   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0930 20:00:24.652578   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0930 20:00:24.663570   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0930 20:00:24.667502   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0930 20:00:24.678300   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0930 20:00:24.682636   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0930 20:00:24.692556   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0930 20:00:24.697407   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0930 20:00:24.708600   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0930 20:00:24.716272   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0930 20:00:24.726239   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0930 20:00:24.730151   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0930 20:00:24.740007   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 20:00:24.764135   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 20:00:24.787511   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 20:00:24.811921   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 20:00:24.835050   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0930 20:00:24.858111   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 20:00:24.881164   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 20:00:24.905084   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 20:00:24.930204   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 20:00:24.954976   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 20:00:24.979893   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 20:00:25.004028   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0930 20:00:25.020509   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0930 20:00:25.037112   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0930 20:00:25.053614   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0930 20:00:25.069699   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0930 20:00:25.087062   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0930 20:00:25.103141   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0930 20:00:25.119089   26315 ssh_runner.go:195] Run: openssl version
	I0930 20:00:25.124587   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 20:00:25.135122   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 20:00:25.139645   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 20:00:25.139709   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 20:00:25.145556   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 20:00:25.156636   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 20:00:25.167339   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 20:00:25.171719   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 20:00:25.171780   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 20:00:25.177212   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 20:00:25.188055   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 20:00:25.199114   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:00:25.203444   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:00:25.203514   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:00:25.209227   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
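The certificate checks above follow the standard OpenSSL trust-store layout: each PEM under /usr/share/ca-certificates is linked into /etc/ssl/certs, and a second symlink named after the certificate's subject hash (b5213941.0 for the minikube CA here) lets OpenSSL-based clients find it by hash lookup. A sketch of that mechanism for a single certificate, mirroring the commands in the log:

    # Link the CA into /etc/ssl/certs, then add the subject-hash symlink that
    # OpenSSL uses to locate trusted certificates (hash value varies per cert).
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"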
	I0930 20:00:25.220164   26315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 20:00:25.224532   26315 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 20:00:25.224591   26315 kubeadm.go:934] updating node {m02 192.168.39.220 8443 v1.31.1 crio true true} ...
	I0930 20:00:25.224694   26315 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-805293-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 20:00:25.224719   26315 kube-vip.go:115] generating kube-vip config ...
	I0930 20:00:25.224757   26315 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 20:00:25.242207   26315 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 20:00:25.242306   26315 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
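The generated manifest above is a static pod for kube-vip: with cp_enable, leader election on the plndr-cp-lock lease, and lb_enable set, whichever control-plane node holds the lease answers on the virtual IP 192.168.39.254:8443, matching the APIServerHAVIP in the profile config shown earlier. As a rough smoke test (an illustration only, not part of the log; it assumes the kubeadm default of leaving /version readable by unauthenticated clients), the VIP can be probed from any node once the pod is running:

    # Expect a JSON version payload once kube-vip holds the lease and the
    # API server behind the VIP is up; -k skips CA verification for brevity.
    curl -ks https://192.168.39.254:8443/version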
	I0930 20:00:25.242370   26315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 20:00:25.253224   26315 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0930 20:00:25.253326   26315 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0930 20:00:25.264511   26315 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0930 20:00:25.264547   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 20:00:25.264590   26315 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0930 20:00:25.264606   26315 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0930 20:00:25.264613   26315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 20:00:25.269385   26315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0930 20:00:25.269423   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0930 20:00:26.288255   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 20:00:26.288359   26315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 20:00:26.293355   26315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0930 20:00:26.293391   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0930 20:00:26.370842   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 20:00:26.408125   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 20:00:26.408233   26315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 20:00:26.414764   26315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0930 20:00:26.414804   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
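All three control-plane binaries (kubectl, kubeadm, kubelet) are copied from the host-side cache into /var/lib/minikube/binaries/v1.31.1 on the new machine because the earlier sudo ls found the directory missing. A quick spot check on the node (a sketch, using the paths and version logged above):

  # all three binaries should now be present and executable
  ls -l /var/lib/minikube/binaries/v1.31.1/
  /var/lib/minikube/binaries/v1.31.1/kubelet --version
  /var/lib/minikube/binaries/v1.31.1/kubeadm version -o short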
	I0930 20:00:26.848584   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0930 20:00:26.858015   26315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0930 20:00:26.874053   26315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 20:00:26.890616   26315 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 20:00:26.906680   26315 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 20:00:26.910431   26315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 20:00:26.921656   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:00:27.039123   26315 ssh_runner.go:195] Run: sudo systemctl start kubelet
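The bash one-liner above is an idempotent rewrite of /etc/hosts: it filters out any existing control-plane.minikube.internal entry and appends the current VIP, so repeated provisioning runs never accumulate duplicates. To verify on the node (a sketch; at this point kubelet has been started but may not be healthy yet, since the kubeadm join below is what supplies its bootstrap credentials):

  # exactly one mapping for the control-plane VIP should remain
  grep control-plane.minikube.internal /etc/hosts
  # kubelet state right after the restart
  sudo systemctl is-active kubelet; sudo journalctl -u kubelet -n 20 --no-pager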
	I0930 20:00:27.056773   26315 host.go:66] Checking if "ha-805293" exists ...
	I0930 20:00:27.057124   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:27.057173   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:27.072237   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34135
	I0930 20:00:27.072852   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:27.073292   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:27.073321   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:27.073651   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:27.073859   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:00:27.073989   26315 start.go:317] joinCluster: &{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:00:27.074091   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0930 20:00:27.074108   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:00:27.076745   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:27.077111   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:00:27.077130   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:27.077207   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:00:27.077370   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:00:27.077633   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:00:27.077784   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:00:27.230308   26315 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:00:27.230355   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cnuzai.6xkseww2aia5hxhb --discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-805293-m02 --control-plane --apiserver-advertise-address=192.168.39.220 --apiserver-bind-port=8443"
	I0930 20:00:50.312960   26315 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cnuzai.6xkseww2aia5hxhb --discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-805293-m02 --control-plane --apiserver-advertise-address=192.168.39.220 --apiserver-bind-port=8443": (23.082567099s)
	I0930 20:00:50.313004   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0930 20:00:50.837990   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-805293-m02 minikube.k8s.io/updated_at=2024_09_30T20_00_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022 minikube.k8s.io/name=ha-805293 minikube.k8s.io/primary=false
	I0930 20:00:50.975697   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-805293-m02 node-role.kubernetes.io/control-plane:NoSchedule-
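The sequence above is the heart of adding a second control-plane member: mint a join token on the existing node, join the new node with --control-plane so it gets its own etcd member and static control-plane pods, then label it and strip the control-plane NoSchedule taint so ordinary workloads can schedule there. No --certificate-key appears, presumably because the shared cluster certificates were distributed to the node separately (a plain control-plane join would otherwise need it). The manual equivalent looks roughly like this (a sketch; <token> and <hash> stand in for the single-use values logged above):

  # on an existing control-plane node: print a join command with a non-expiring token
  sudo kubeadm token create --print-join-command --ttl=0
  # on the new node: join as an additional control plane, with the flags seen in the log
  sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane \
      --apiserver-advertise-address=192.168.39.220 \
      --apiserver-bind-port=8443
  # allow regular pods onto the new control-plane node
  kubectl taint nodes ha-805293-m02 node-role.kubernetes.io/control-plane:NoSchedule-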
	I0930 20:00:51.102316   26315 start.go:319] duration metric: took 24.028319202s to joinCluster
	I0930 20:00:51.102444   26315 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:00:51.102695   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:00:51.104462   26315 out.go:177] * Verifying Kubernetes components...
	I0930 20:00:51.105980   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:00:51.368169   26315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:00:51.414670   26315 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:00:51.415012   26315 kapi.go:59] client config for ha-805293: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key", CAFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 20:00:51.415098   26315 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.3:8443
	I0930 20:00:51.415444   26315 node_ready.go:35] waiting up to 6m0s for node "ha-805293-m02" to be "Ready" ...
	I0930 20:00:51.415604   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:51.415616   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:51.415627   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:51.415634   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:51.426106   26315 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0930 20:00:51.915725   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:51.915750   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:51.915764   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:51.915771   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:51.920139   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:00:52.416072   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:52.416092   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:52.416100   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:52.416104   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:52.419738   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:52.915687   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:52.915720   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:52.915733   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:52.915739   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:52.920070   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:00:53.415992   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:53.416013   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:53.416021   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:53.416027   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:53.419709   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:53.420257   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:00:53.915641   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:53.915662   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:53.915670   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:53.915675   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:53.918936   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:54.415947   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:54.415969   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:54.415978   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:54.415983   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:54.419470   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:54.916559   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:54.916594   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:54.916604   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:54.916609   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:54.920769   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:00:55.415723   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:55.415749   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:55.415760   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:55.415767   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:55.419960   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:00:55.420655   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:00:55.915703   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:55.915725   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:55.915732   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:55.915737   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:55.918792   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:56.415726   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:56.415759   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:56.415768   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:56.415771   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:56.419845   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:00:56.915720   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:56.915749   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:56.915761   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:56.915768   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:56.919114   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:57.415890   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:57.415920   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:57.415930   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:57.415936   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:57.419326   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:57.916001   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:57.916024   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:57.916032   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:57.916036   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:57.919385   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:57.920066   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:00:58.416036   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:58.416058   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:58.416066   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:58.416071   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:58.444113   26315 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0930 20:00:58.915821   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:58.915851   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:58.915865   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:58.915872   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:58.919943   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:00:59.415861   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:59.415883   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:59.415892   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:59.415896   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:59.419554   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:59.916644   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:59.916665   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:59.916673   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:59.916681   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:59.920228   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:59.920834   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:01:00.415729   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:00.415764   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:00.415772   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:00.415777   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:00.419232   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:00.915725   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:00.915748   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:00.915758   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:00.915764   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:00.920882   26315 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 20:01:01.416215   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:01.416240   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:01.416249   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:01.416252   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:01.419889   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:01.916651   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:01.916673   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:01.916680   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:01.916686   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:01.920422   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:01.920906   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:01:02.416417   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:02.416447   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:02.416458   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:02.416465   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:02.420384   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:02.916614   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:02.916639   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:02.916647   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:02.916651   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:02.920435   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:03.416222   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:03.416246   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:03.416255   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:03.416258   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:03.419787   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:03.915698   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:03.915726   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:03.915735   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:03.915739   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:03.919427   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:04.415764   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:04.415788   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:04.415797   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:04.415801   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:04.419012   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:04.419574   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:01:04.915824   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:04.915846   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:04.915855   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:04.915859   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:04.920091   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:01:05.415756   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:05.415780   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:05.415787   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:05.415791   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:05.421271   26315 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 20:01:05.915718   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:05.915739   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:05.915747   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:05.915751   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:05.919141   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:06.415741   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:06.415762   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:06.415770   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:06.415774   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:06.418886   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:06.419650   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:01:06.916104   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:06.916133   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:06.916144   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:06.916149   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:06.919406   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:07.416605   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:07.416630   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:07.416639   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:07.416646   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:07.419940   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:07.915753   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:07.915780   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:07.915790   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:07.915795   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:07.919449   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:08.416606   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:08.416630   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:08.416638   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:08.416643   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:08.420794   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:01:08.421339   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:01:08.915715   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:08.915738   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:08.915746   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:08.915752   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:08.919389   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:09.416586   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:09.416611   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.416621   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.416628   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.419914   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:09.916640   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:09.916661   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.916669   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.916673   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.919743   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:09.920355   26315 node_ready.go:49] node "ha-805293-m02" has status "Ready":"True"
	I0930 20:01:09.920385   26315 node_ready.go:38] duration metric: took 18.504913608s for node "ha-805293-m02" to be "Ready" ...
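The repeated GETs against /api/v1/nodes/ha-805293-m02 above are minikube polling the node object roughly every half second until its Ready condition turns True, which took about 18.5s here. The same wait expressed with kubectl (a sketch, assuming the ha-805293 kubeconfig context that minikube writes for this profile):

  kubectl --context ha-805293 wait node/ha-805293-m02 --for=condition=Ready --timeout=6m
  kubectl --context ha-805293 get nodes -o wide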
	I0930 20:01:09.920395   26315 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0930 20:01:09.920461   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:01:09.920470   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.920477   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.920481   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.924944   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:01:09.930623   26315 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x7zjp" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.930723   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-x7zjp
	I0930 20:01:09.930731   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.930739   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.930743   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.933787   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:09.934467   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:09.934486   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.934497   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.934502   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.936935   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.937372   26315 pod_ready.go:93] pod "coredns-7c65d6cfc9-x7zjp" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:09.937389   26315 pod_ready.go:82] duration metric: took 6.738618ms for pod "coredns-7c65d6cfc9-x7zjp" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.937399   26315 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-z4bkv" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.937452   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-z4bkv
	I0930 20:01:09.937460   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.937467   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.937471   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.939718   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.940345   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:09.940360   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.940367   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.940372   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.942825   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.943347   26315 pod_ready.go:93] pod "coredns-7c65d6cfc9-z4bkv" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:09.943362   26315 pod_ready.go:82] duration metric: took 5.957941ms for pod "coredns-7c65d6cfc9-z4bkv" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.943374   26315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.943449   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-805293
	I0930 20:01:09.943477   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.943493   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.943502   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.946145   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.946815   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:09.946829   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.946837   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.946841   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.949619   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.950200   26315 pod_ready.go:93] pod "etcd-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:09.950222   26315 pod_ready.go:82] duration metric: took 6.836708ms for pod "etcd-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.950233   26315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.950305   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-805293-m02
	I0930 20:01:09.950326   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.950334   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.950340   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.953306   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.953792   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:09.953806   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.953813   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.953817   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.956400   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.956812   26315 pod_ready.go:93] pod "etcd-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:09.956829   26315 pod_ready.go:82] duration metric: took 6.588184ms for pod "etcd-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.956845   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:10.117233   26315 request.go:632] Waited for 160.320722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293
	I0930 20:01:10.117300   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293
	I0930 20:01:10.117306   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:10.117318   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:10.117324   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:10.120940   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:10.317057   26315 request.go:632] Waited for 195.415809ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:10.317127   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:10.317135   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:10.317156   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:10.317180   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:10.320648   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:10.321373   26315 pod_ready.go:93] pod "kube-apiserver-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:10.321392   26315 pod_ready.go:82] duration metric: took 364.537566ms for pod "kube-apiserver-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:10.321402   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:10.517507   26315 request.go:632] Waited for 196.023112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293-m02
	I0930 20:01:10.517576   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293-m02
	I0930 20:01:10.517583   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:10.517594   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:10.517601   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:10.521299   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:10.717299   26315 request.go:632] Waited for 195.382491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:10.717366   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:10.717372   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:10.717379   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:10.717384   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:10.720883   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:10.721468   26315 pod_ready.go:93] pod "kube-apiserver-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:10.721488   26315 pod_ready.go:82] duration metric: took 400.07752ms for pod "kube-apiserver-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:10.721497   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:10.917490   26315 request.go:632] Waited for 195.929177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293
	I0930 20:01:10.917554   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293
	I0930 20:01:10.917574   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:10.917606   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:10.917617   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:10.921610   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:11.116693   26315 request.go:632] Waited for 194.297174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:11.116753   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:11.116759   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:11.116766   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:11.116769   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:11.120537   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:11.121044   26315 pod_ready.go:93] pod "kube-controller-manager-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:11.121062   26315 pod_ready.go:82] duration metric: took 399.55959ms for pod "kube-controller-manager-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:11.121074   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:11.317266   26315 request.go:632] Waited for 196.133826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293-m02
	I0930 20:01:11.317335   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293-m02
	I0930 20:01:11.317342   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:11.317351   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:11.317358   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:11.321265   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:11.517020   26315 request.go:632] Waited for 195.154322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:11.517082   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:11.517089   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:11.517098   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:11.517103   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:11.520779   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:11.521296   26315 pod_ready.go:93] pod "kube-controller-manager-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:11.521319   26315 pod_ready.go:82] duration metric: took 400.238082ms for pod "kube-controller-manager-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:11.521335   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6gnt4" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:11.716800   26315 request.go:632] Waited for 195.390285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gnt4
	I0930 20:01:11.716888   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gnt4
	I0930 20:01:11.716896   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:11.716906   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:11.716911   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:11.720246   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:11.917422   26315 request.go:632] Waited for 196.372605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:11.917500   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:11.917508   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:11.917518   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:11.917526   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:11.921353   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:11.921887   26315 pod_ready.go:93] pod "kube-proxy-6gnt4" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:11.921912   26315 pod_ready.go:82] duration metric: took 400.568991ms for pod "kube-proxy-6gnt4" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:11.921925   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vptrg" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:12.116927   26315 request.go:632] Waited for 194.932043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vptrg
	I0930 20:01:12.117009   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vptrg
	I0930 20:01:12.117015   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:12.117022   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:12.117026   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:12.121372   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:01:12.317480   26315 request.go:632] Waited for 195.395103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:12.317541   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:12.317546   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:12.317553   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:12.317556   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:12.321223   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:12.321777   26315 pod_ready.go:93] pod "kube-proxy-vptrg" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:12.321796   26315 pod_ready.go:82] duration metric: took 399.864157ms for pod "kube-proxy-vptrg" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:12.321806   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:12.516927   26315 request.go:632] Waited for 195.058252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293
	I0930 20:01:12.517009   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293
	I0930 20:01:12.517015   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:12.517022   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:12.517029   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:12.520681   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:12.717635   26315 request.go:632] Waited for 196.390201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:12.717694   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:12.717698   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:12.717706   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:12.717714   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:12.721311   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:12.721886   26315 pod_ready.go:93] pod "kube-scheduler-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:12.721903   26315 pod_ready.go:82] duration metric: took 400.091381ms for pod "kube-scheduler-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:12.721913   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:12.917094   26315 request.go:632] Waited for 195.106579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293-m02
	I0930 20:01:12.917184   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293-m02
	I0930 20:01:12.917193   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:12.917203   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:12.917212   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:12.921090   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:13.117142   26315 request.go:632] Waited for 195.345819ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:13.117216   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:13.117221   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:13.117229   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:13.117232   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:13.120777   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:13.121215   26315 pod_ready.go:93] pod "kube-scheduler-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:13.121232   26315 pod_ready.go:82] duration metric: took 399.313081ms for pod "kube-scheduler-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:13.121242   26315 pod_ready.go:39] duration metric: took 3.200834368s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
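With the node Ready, minikube makes a second pass over the system-critical pods, checking the Ready condition of each pod and of the node it is scheduled on; here the whole pass finished in about 3.2s because everything was already up. A rough kubectl equivalent of that check (a sketch; each selector mirrors one label from the log line above):

  for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
             component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
    kubectl --context ha-805293 -n kube-system wait pod -l "$sel" \
      --for=condition=Ready --timeout=6m
  done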
	I0930 20:01:13.121266   26315 api_server.go:52] waiting for apiserver process to appear ...
	I0930 20:01:13.121324   26315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 20:01:13.137767   26315 api_server.go:72] duration metric: took 22.035280113s to wait for apiserver process to appear ...
	I0930 20:01:13.137797   26315 api_server.go:88] waiting for apiserver healthz status ...
	I0930 20:01:13.137828   26315 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I0930 20:01:13.141994   26315 api_server.go:279] https://192.168.39.3:8443/healthz returned 200:
	ok
	I0930 20:01:13.142067   26315 round_trippers.go:463] GET https://192.168.39.3:8443/version
	I0930 20:01:13.142074   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:13.142082   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:13.142090   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:13.142859   26315 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0930 20:01:13.142975   26315 api_server.go:141] control plane version: v1.31.1
	I0930 20:01:13.142993   26315 api_server.go:131] duration metric: took 5.190596ms to wait for apiserver health ...
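The health check goes straight to the first control plane's API server at 192.168.39.3:8443 rather than through the VIP, matching the ClientConfig override logged a little earlier. The /healthz endpoint (readable anonymously under the default RBAC bindings) can be probed the same way from the host (a sketch; -k skips certificate verification, or pass the cluster CA shown in the client config via --cacert):

  curl -k https://192.168.39.3:8443/healthz       # direct to ha-805293
  curl -k https://192.168.39.254:8443/healthz     # through the kube-vip VIP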
	I0930 20:01:13.143001   26315 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 20:01:13.317422   26315 request.go:632] Waited for 174.359049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:01:13.317472   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:01:13.317478   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:13.317484   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:13.317488   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:13.321962   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:01:13.326370   26315 system_pods.go:59] 17 kube-system pods found
	I0930 20:01:13.326406   26315 system_pods.go:61] "coredns-7c65d6cfc9-x7zjp" [b5b20ed2-1d94-49b9-ab9e-17e27d1012d0] Running
	I0930 20:01:13.326411   26315 system_pods.go:61] "coredns-7c65d6cfc9-z4bkv" [c6ba0288-138e-4690-a68d-6d6378e28deb] Running
	I0930 20:01:13.326415   26315 system_pods.go:61] "etcd-ha-805293" [399ae7f6-cec9-4e8d-bda2-6c85dbcc5613] Running
	I0930 20:01:13.326420   26315 system_pods.go:61] "etcd-ha-805293-m02" [06ff461f-0ed1-4010-bcf7-1e82e4a589eb] Running
	I0930 20:01:13.326425   26315 system_pods.go:61] "kindnet-lfldt" [62cfaae6-e635-4ba4-a0db-77d008d12706] Running
	I0930 20:01:13.326429   26315 system_pods.go:61] "kindnet-slhtm" [a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88] Running
	I0930 20:01:13.326432   26315 system_pods.go:61] "kube-apiserver-ha-805293" [e975ca94-0069-4dfc-bc42-fa14fff226d5] Running
	I0930 20:01:13.326435   26315 system_pods.go:61] "kube-apiserver-ha-805293-m02" [c0f6d06d-f2d3-4796-ba43-16db58da16f7] Running
	I0930 20:01:13.326438   26315 system_pods.go:61] "kube-controller-manager-ha-805293" [01616da3-61eb-494b-a55c-28acaa308938] Running
	I0930 20:01:13.326442   26315 system_pods.go:61] "kube-controller-manager-ha-805293-m02" [14e035c1-fd94-43ab-aa98-3f20108eba57] Running
	I0930 20:01:13.326445   26315 system_pods.go:61] "kube-proxy-6gnt4" [a90b0c3f-e9c3-4cb9-8773-8253bd72ab51] Running
	I0930 20:01:13.326448   26315 system_pods.go:61] "kube-proxy-vptrg" [324c92ea-b82f-4efa-b63c-4c590bbf214d] Running
	I0930 20:01:13.326451   26315 system_pods.go:61] "kube-scheduler-ha-805293" [fbff9dea-1599-43ab-bb92-df8c5231bb87] Running
	I0930 20:01:13.326454   26315 system_pods.go:61] "kube-scheduler-ha-805293-m02" [9e69f915-83ac-48de-9bd6-3d245a2e82be] Running
	I0930 20:01:13.326457   26315 system_pods.go:61] "kube-vip-ha-805293" [9c629f9e-1b42-4680-9fd8-2dae4cec07f8] Running
	I0930 20:01:13.326459   26315 system_pods.go:61] "kube-vip-ha-805293-m02" [ec99538b-4f84-4078-b64d-23086cbf2c45] Running
	I0930 20:01:13.326462   26315 system_pods.go:61] "storage-provisioner" [1912fdf8-d789-4ba9-99ff-c87ccbf330ec] Running
	I0930 20:01:13.326467   26315 system_pods.go:74] duration metric: took 183.46129ms to wait for pod list to return data ...
	I0930 20:01:13.326477   26315 default_sa.go:34] waiting for default service account to be created ...
	I0930 20:01:13.516843   26315 request.go:632] Waited for 190.295336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/default/serviceaccounts
	I0930 20:01:13.516914   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/default/serviceaccounts
	I0930 20:01:13.516919   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:13.516926   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:13.516929   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:13.520919   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:13.521167   26315 default_sa.go:45] found service account: "default"
	I0930 20:01:13.521184   26315 default_sa.go:55] duration metric: took 194.701824ms for default service account to be created ...
	I0930 20:01:13.521193   26315 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 20:01:13.717380   26315 request.go:632] Waited for 196.119354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:01:13.717451   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:01:13.717458   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:13.717467   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:13.717471   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:13.722690   26315 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 20:01:13.727139   26315 system_pods.go:86] 17 kube-system pods found
	I0930 20:01:13.727168   26315 system_pods.go:89] "coredns-7c65d6cfc9-x7zjp" [b5b20ed2-1d94-49b9-ab9e-17e27d1012d0] Running
	I0930 20:01:13.727174   26315 system_pods.go:89] "coredns-7c65d6cfc9-z4bkv" [c6ba0288-138e-4690-a68d-6d6378e28deb] Running
	I0930 20:01:13.727179   26315 system_pods.go:89] "etcd-ha-805293" [399ae7f6-cec9-4e8d-bda2-6c85dbcc5613] Running
	I0930 20:01:13.727184   26315 system_pods.go:89] "etcd-ha-805293-m02" [06ff461f-0ed1-4010-bcf7-1e82e4a589eb] Running
	I0930 20:01:13.727188   26315 system_pods.go:89] "kindnet-lfldt" [62cfaae6-e635-4ba4-a0db-77d008d12706] Running
	I0930 20:01:13.727193   26315 system_pods.go:89] "kindnet-slhtm" [a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88] Running
	I0930 20:01:13.727198   26315 system_pods.go:89] "kube-apiserver-ha-805293" [e975ca94-0069-4dfc-bc42-fa14fff226d5] Running
	I0930 20:01:13.727204   26315 system_pods.go:89] "kube-apiserver-ha-805293-m02" [c0f6d06d-f2d3-4796-ba43-16db58da16f7] Running
	I0930 20:01:13.727209   26315 system_pods.go:89] "kube-controller-manager-ha-805293" [01616da3-61eb-494b-a55c-28acaa308938] Running
	I0930 20:01:13.727217   26315 system_pods.go:89] "kube-controller-manager-ha-805293-m02" [14e035c1-fd94-43ab-aa98-3f20108eba57] Running
	I0930 20:01:13.727230   26315 system_pods.go:89] "kube-proxy-6gnt4" [a90b0c3f-e9c3-4cb9-8773-8253bd72ab51] Running
	I0930 20:01:13.727235   26315 system_pods.go:89] "kube-proxy-vptrg" [324c92ea-b82f-4efa-b63c-4c590bbf214d] Running
	I0930 20:01:13.727241   26315 system_pods.go:89] "kube-scheduler-ha-805293" [fbff9dea-1599-43ab-bb92-df8c5231bb87] Running
	I0930 20:01:13.727247   26315 system_pods.go:89] "kube-scheduler-ha-805293-m02" [9e69f915-83ac-48de-9bd6-3d245a2e82be] Running
	I0930 20:01:13.727252   26315 system_pods.go:89] "kube-vip-ha-805293" [9c629f9e-1b42-4680-9fd8-2dae4cec07f8] Running
	I0930 20:01:13.727257   26315 system_pods.go:89] "kube-vip-ha-805293-m02" [ec99538b-4f84-4078-b64d-23086cbf2c45] Running
	I0930 20:01:13.727261   26315 system_pods.go:89] "storage-provisioner" [1912fdf8-d789-4ba9-99ff-c87ccbf330ec] Running
	I0930 20:01:13.727270   26315 system_pods.go:126] duration metric: took 206.072644ms to wait for k8s-apps to be running ...
	I0930 20:01:13.727277   26315 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 20:01:13.727327   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 20:01:13.741981   26315 system_svc.go:56] duration metric: took 14.693769ms WaitForService to wait for kubelet
	I0930 20:01:13.742010   26315 kubeadm.go:582] duration metric: took 22.639532003s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 20:01:13.742027   26315 node_conditions.go:102] verifying NodePressure condition ...
	I0930 20:01:13.917345   26315 request.go:632] Waited for 175.232926ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes
	I0930 20:01:13.917397   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes
	I0930 20:01:13.917402   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:13.917410   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:13.917413   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:13.921853   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:01:13.922642   26315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:01:13.922674   26315 node_conditions.go:123] node cpu capacity is 2
	I0930 20:01:13.922690   26315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:01:13.922694   26315 node_conditions.go:123] node cpu capacity is 2
	I0930 20:01:13.922699   26315 node_conditions.go:105] duration metric: took 180.667513ms to run NodePressure ...
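The pod, service-account and node checks logged above can be approximated with client-go; the sketch below lists kube-system pods and reads per-node ephemeral-storage and CPU capacity. The kubeconfig path is a placeholder and this only illustrates the verification, it is not minikube's implementation:

// Sketch: listing kube-system pods and reading node capacity with client-go,
// roughly the same checks logged above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Same idea as "waiting for k8s-apps to be running": every pod should be Running.
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		fmt.Printf("%-45s running=%v\n", pod.Name, pod.Status.Phase == corev1.PodRunning)
	}

	// Same idea as the NodePressure/capacity check on each node.
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}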
	I0930 20:01:13.922708   26315 start.go:241] waiting for startup goroutines ...
	I0930 20:01:13.922733   26315 start.go:255] writing updated cluster config ...
	I0930 20:01:13.925048   26315 out.go:201] 
	I0930 20:01:13.926843   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:01:13.926954   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:01:13.928893   26315 out.go:177] * Starting "ha-805293-m03" control-plane node in "ha-805293" cluster
	I0930 20:01:13.930308   26315 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 20:01:13.930336   26315 cache.go:56] Caching tarball of preloaded images
	I0930 20:01:13.930467   26315 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 20:01:13.930485   26315 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 20:01:13.930582   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:01:13.930765   26315 start.go:360] acquireMachinesLock for ha-805293-m03: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 20:01:13.930817   26315 start.go:364] duration metric: took 28.082µs to acquireMachinesLock for "ha-805293-m03"
	I0930 20:01:13.930836   26315 start.go:93] Provisioning new machine with config: &{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:01:13.930923   26315 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0930 20:01:13.932766   26315 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 20:01:13.932890   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:01:13.932929   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:01:13.949248   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36881
	I0930 20:01:13.949763   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:01:13.950280   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:01:13.950304   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:01:13.950634   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:01:13.950970   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetMachineName
	I0930 20:01:13.951189   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:13.951448   26315 start.go:159] libmachine.API.Create for "ha-805293" (driver="kvm2")
	I0930 20:01:13.951489   26315 client.go:168] LocalClient.Create starting
	I0930 20:01:13.951565   26315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem
	I0930 20:01:13.951611   26315 main.go:141] libmachine: Decoding PEM data...
	I0930 20:01:13.951631   26315 main.go:141] libmachine: Parsing certificate...
	I0930 20:01:13.951696   26315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem
	I0930 20:01:13.951724   26315 main.go:141] libmachine: Decoding PEM data...
	I0930 20:01:13.951742   26315 main.go:141] libmachine: Parsing certificate...
	I0930 20:01:13.951770   26315 main.go:141] libmachine: Running pre-create checks...
	I0930 20:01:13.951780   26315 main.go:141] libmachine: (ha-805293-m03) Calling .PreCreateCheck
	I0930 20:01:13.951958   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetConfigRaw
	I0930 20:01:13.952389   26315 main.go:141] libmachine: Creating machine...
	I0930 20:01:13.952404   26315 main.go:141] libmachine: (ha-805293-m03) Calling .Create
	I0930 20:01:13.952539   26315 main.go:141] libmachine: (ha-805293-m03) Creating KVM machine...
	I0930 20:01:13.953896   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found existing default KVM network
	I0930 20:01:13.954082   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found existing private KVM network mk-ha-805293
	I0930 20:01:13.954276   26315 main.go:141] libmachine: (ha-805293-m03) Setting up store path in /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03 ...
	I0930 20:01:13.954303   26315 main.go:141] libmachine: (ha-805293-m03) Building disk image from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 20:01:13.954425   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:13.954267   27054 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:01:13.954521   26315 main.go:141] libmachine: (ha-805293-m03) Downloading /home/jenkins/minikube-integration/19736-7672/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 20:01:14.186819   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:14.186689   27054 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa...
	I0930 20:01:14.467265   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:14.467127   27054 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/ha-805293-m03.rawdisk...
	I0930 20:01:14.467311   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Writing magic tar header
	I0930 20:01:14.467327   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Writing SSH key tar header
	I0930 20:01:14.467340   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:14.467280   27054 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03 ...
	I0930 20:01:14.467434   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03
	I0930 20:01:14.467495   26315 main.go:141] libmachine: (ha-805293-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03 (perms=drwx------)
	I0930 20:01:14.467509   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines
	I0930 20:01:14.467520   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:01:14.467545   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672
	I0930 20:01:14.467563   26315 main.go:141] libmachine: (ha-805293-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines (perms=drwxr-xr-x)
	I0930 20:01:14.467577   26315 main.go:141] libmachine: (ha-805293-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube (perms=drwxr-xr-x)
	I0930 20:01:14.467590   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 20:01:14.467603   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home/jenkins
	I0930 20:01:14.467614   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home
	I0930 20:01:14.467622   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Skipping /home - not owner
	I0930 20:01:14.467636   26315 main.go:141] libmachine: (ha-805293-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672 (perms=drwxrwxr-x)
	I0930 20:01:14.467659   26315 main.go:141] libmachine: (ha-805293-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 20:01:14.467677   26315 main.go:141] libmachine: (ha-805293-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 20:01:14.467702   26315 main.go:141] libmachine: (ha-805293-m03) Creating domain...
	I0930 20:01:14.468847   26315 main.go:141] libmachine: (ha-805293-m03) define libvirt domain using xml: 
	I0930 20:01:14.468871   26315 main.go:141] libmachine: (ha-805293-m03) <domain type='kvm'>
	I0930 20:01:14.468881   26315 main.go:141] libmachine: (ha-805293-m03)   <name>ha-805293-m03</name>
	I0930 20:01:14.468899   26315 main.go:141] libmachine: (ha-805293-m03)   <memory unit='MiB'>2200</memory>
	I0930 20:01:14.468932   26315 main.go:141] libmachine: (ha-805293-m03)   <vcpu>2</vcpu>
	I0930 20:01:14.468950   26315 main.go:141] libmachine: (ha-805293-m03)   <features>
	I0930 20:01:14.468968   26315 main.go:141] libmachine: (ha-805293-m03)     <acpi/>
	I0930 20:01:14.468978   26315 main.go:141] libmachine: (ha-805293-m03)     <apic/>
	I0930 20:01:14.469001   26315 main.go:141] libmachine: (ha-805293-m03)     <pae/>
	I0930 20:01:14.469014   26315 main.go:141] libmachine: (ha-805293-m03)     
	I0930 20:01:14.469041   26315 main.go:141] libmachine: (ha-805293-m03)   </features>
	I0930 20:01:14.469062   26315 main.go:141] libmachine: (ha-805293-m03)   <cpu mode='host-passthrough'>
	I0930 20:01:14.469074   26315 main.go:141] libmachine: (ha-805293-m03)   
	I0930 20:01:14.469080   26315 main.go:141] libmachine: (ha-805293-m03)   </cpu>
	I0930 20:01:14.469091   26315 main.go:141] libmachine: (ha-805293-m03)   <os>
	I0930 20:01:14.469107   26315 main.go:141] libmachine: (ha-805293-m03)     <type>hvm</type>
	I0930 20:01:14.469115   26315 main.go:141] libmachine: (ha-805293-m03)     <boot dev='cdrom'/>
	I0930 20:01:14.469124   26315 main.go:141] libmachine: (ha-805293-m03)     <boot dev='hd'/>
	I0930 20:01:14.469143   26315 main.go:141] libmachine: (ha-805293-m03)     <bootmenu enable='no'/>
	I0930 20:01:14.469154   26315 main.go:141] libmachine: (ha-805293-m03)   </os>
	I0930 20:01:14.469164   26315 main.go:141] libmachine: (ha-805293-m03)   <devices>
	I0930 20:01:14.469248   26315 main.go:141] libmachine: (ha-805293-m03)     <disk type='file' device='cdrom'>
	I0930 20:01:14.469284   26315 main.go:141] libmachine: (ha-805293-m03)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/boot2docker.iso'/>
	I0930 20:01:14.469299   26315 main.go:141] libmachine: (ha-805293-m03)       <target dev='hdc' bus='scsi'/>
	I0930 20:01:14.469305   26315 main.go:141] libmachine: (ha-805293-m03)       <readonly/>
	I0930 20:01:14.469314   26315 main.go:141] libmachine: (ha-805293-m03)     </disk>
	I0930 20:01:14.469321   26315 main.go:141] libmachine: (ha-805293-m03)     <disk type='file' device='disk'>
	I0930 20:01:14.469350   26315 main.go:141] libmachine: (ha-805293-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 20:01:14.469366   26315 main.go:141] libmachine: (ha-805293-m03)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/ha-805293-m03.rawdisk'/>
	I0930 20:01:14.469381   26315 main.go:141] libmachine: (ha-805293-m03)       <target dev='hda' bus='virtio'/>
	I0930 20:01:14.469387   26315 main.go:141] libmachine: (ha-805293-m03)     </disk>
	I0930 20:01:14.469400   26315 main.go:141] libmachine: (ha-805293-m03)     <interface type='network'>
	I0930 20:01:14.469410   26315 main.go:141] libmachine: (ha-805293-m03)       <source network='mk-ha-805293'/>
	I0930 20:01:14.469421   26315 main.go:141] libmachine: (ha-805293-m03)       <model type='virtio'/>
	I0930 20:01:14.469427   26315 main.go:141] libmachine: (ha-805293-m03)     </interface>
	I0930 20:01:14.469437   26315 main.go:141] libmachine: (ha-805293-m03)     <interface type='network'>
	I0930 20:01:14.469456   26315 main.go:141] libmachine: (ha-805293-m03)       <source network='default'/>
	I0930 20:01:14.469482   26315 main.go:141] libmachine: (ha-805293-m03)       <model type='virtio'/>
	I0930 20:01:14.469512   26315 main.go:141] libmachine: (ha-805293-m03)     </interface>
	I0930 20:01:14.469521   26315 main.go:141] libmachine: (ha-805293-m03)     <serial type='pty'>
	I0930 20:01:14.469540   26315 main.go:141] libmachine: (ha-805293-m03)       <target port='0'/>
	I0930 20:01:14.469572   26315 main.go:141] libmachine: (ha-805293-m03)     </serial>
	I0930 20:01:14.469589   26315 main.go:141] libmachine: (ha-805293-m03)     <console type='pty'>
	I0930 20:01:14.469603   26315 main.go:141] libmachine: (ha-805293-m03)       <target type='serial' port='0'/>
	I0930 20:01:14.469614   26315 main.go:141] libmachine: (ha-805293-m03)     </console>
	I0930 20:01:14.469623   26315 main.go:141] libmachine: (ha-805293-m03)     <rng model='virtio'>
	I0930 20:01:14.469631   26315 main.go:141] libmachine: (ha-805293-m03)       <backend model='random'>/dev/random</backend>
	I0930 20:01:14.469642   26315 main.go:141] libmachine: (ha-805293-m03)     </rng>
	I0930 20:01:14.469648   26315 main.go:141] libmachine: (ha-805293-m03)     
	I0930 20:01:14.469658   26315 main.go:141] libmachine: (ha-805293-m03)     
	I0930 20:01:14.469664   26315 main.go:141] libmachine: (ha-805293-m03)   </devices>
	I0930 20:01:14.469672   26315 main.go:141] libmachine: (ha-805293-m03) </domain>
	I0930 20:01:14.469677   26315 main.go:141] libmachine: (ha-805293-m03) 
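The domain XML printed above is handed to libvirt to define and boot the VM. A rough sketch of that step using the libvirt Go bindings (assuming the libvirt.org/go/libvirt module and a local libvirtd); minikube itself drives this through the kvm2 driver plugin rather than calling libvirt directly like this:

// Sketch: defining and starting a libvirt domain from an XML file, roughly the
// "define libvirt domain using xml" / "Creating domain..." steps above.
package main

import (
	"fmt"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	// Placeholder file containing the domain XML shown in the log above.
	xml, err := os.ReadFile("ha-805293-m03.xml")
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // persist the domain definition
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot the VM
		panic(err)
	}
	name, _ := dom.GetName()
	fmt.Println("started domain", name)
}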
	I0930 20:01:14.476673   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:7e:5d:5f in network default
	I0930 20:01:14.477269   26315 main.go:141] libmachine: (ha-805293-m03) Ensuring networks are active...
	I0930 20:01:14.477295   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:14.478121   26315 main.go:141] libmachine: (ha-805293-m03) Ensuring network default is active
	I0930 20:01:14.478526   26315 main.go:141] libmachine: (ha-805293-m03) Ensuring network mk-ha-805293 is active
	I0930 20:01:14.478957   26315 main.go:141] libmachine: (ha-805293-m03) Getting domain xml...
	I0930 20:01:14.479718   26315 main.go:141] libmachine: (ha-805293-m03) Creating domain...
	I0930 20:01:15.747292   26315 main.go:141] libmachine: (ha-805293-m03) Waiting to get IP...
	I0930 20:01:15.748220   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:15.748679   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:15.748743   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:15.748666   27054 retry.go:31] will retry after 284.785124ms: waiting for machine to come up
	I0930 20:01:16.035256   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:16.035716   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:16.035831   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:16.035661   27054 retry.go:31] will retry after 335.488124ms: waiting for machine to come up
	I0930 20:01:16.373109   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:16.373683   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:16.373706   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:16.373645   27054 retry.go:31] will retry after 461.768045ms: waiting for machine to come up
	I0930 20:01:16.837400   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:16.837942   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:16.838002   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:16.837899   27054 retry.go:31] will retry after 451.939776ms: waiting for machine to come up
	I0930 20:01:17.291224   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:17.291638   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:17.291662   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:17.291600   27054 retry.go:31] will retry after 601.468058ms: waiting for machine to come up
	I0930 20:01:17.894045   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:17.894474   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:17.894502   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:17.894444   27054 retry.go:31] will retry after 685.014003ms: waiting for machine to come up
	I0930 20:01:18.581469   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:18.581905   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:18.581940   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:18.581886   27054 retry.go:31] will retry after 901.632295ms: waiting for machine to come up
	I0930 20:01:19.485606   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:19.486144   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:19.486174   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:19.486068   27054 retry.go:31] will retry after 1.002316049s: waiting for machine to come up
	I0930 20:01:20.489568   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:20.490064   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:20.490086   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:20.490017   27054 retry.go:31] will retry after 1.384559526s: waiting for machine to come up
	I0930 20:01:21.875542   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:21.875885   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:21.875904   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:21.875821   27054 retry.go:31] will retry after 1.560882287s: waiting for machine to come up
	I0930 20:01:23.438575   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:23.439019   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:23.439051   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:23.438971   27054 retry.go:31] will retry after 1.966635221s: waiting for machine to come up
	I0930 20:01:25.407626   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:25.408136   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:25.408170   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:25.408088   27054 retry.go:31] will retry after 2.861827785s: waiting for machine to come up
	I0930 20:01:28.272997   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:28.273395   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:28.273417   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:28.273357   27054 retry.go:31] will retry after 2.760760648s: waiting for machine to come up
	I0930 20:01:31.035244   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:31.035758   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:31.035806   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:31.035729   27054 retry.go:31] will retry after 3.889423891s: waiting for machine to come up
	I0930 20:01:34.927053   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:34.927650   26315 main.go:141] libmachine: (ha-805293-m03) Found IP for machine: 192.168.39.227
	I0930 20:01:34.927682   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has current primary IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:34.927690   26315 main.go:141] libmachine: (ha-805293-m03) Reserving static IP address...
	I0930 20:01:34.928071   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find host DHCP lease matching {name: "ha-805293-m03", mac: "52:54:00:ce:66:df", ip: "192.168.39.227"} in network mk-ha-805293
	I0930 20:01:35.005095   26315 main.go:141] libmachine: (ha-805293-m03) Reserved static IP address: 192.168.39.227
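The retry.go lines above wait for the machine's DHCP lease, sleeping a little longer after each failed attempt. A generic Go sketch of that pattern (lookupIP is a hypothetical stand-in for the real lease query against the mk-ha-805293 network):

// Sketch: retrying an IP lookup with a growing delay, mirroring the
// "will retry after ...: waiting for machine to come up" loop above.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoIP = errors.New("no IP yet")

// lookupIP is a placeholder; the real driver inspects libvirt DHCP leases.
func lookupIP(mac string) (string, error) {
	return "", errNoIP
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait between attempts
	}
	return "", fmt.Errorf("timed out waiting for IP on %s", mac)
}

func main() {
	ip, err := waitForIP("52:54:00:ce:66:df", 10*time.Second)
	fmt.Println(ip, err)
}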
	I0930 20:01:35.005128   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Getting to WaitForSSH function...
	I0930 20:01:35.005135   26315 main.go:141] libmachine: (ha-805293-m03) Waiting for SSH to be available...
	I0930 20:01:35.007521   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.008053   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.008080   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.008244   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Using SSH client type: external
	I0930 20:01:35.008262   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa (-rw-------)
	I0930 20:01:35.008294   26315 main.go:141] libmachine: (ha-805293-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 20:01:35.008309   26315 main.go:141] libmachine: (ha-805293-m03) DBG | About to run SSH command:
	I0930 20:01:35.008328   26315 main.go:141] libmachine: (ha-805293-m03) DBG | exit 0
	I0930 20:01:35.131490   26315 main.go:141] libmachine: (ha-805293-m03) DBG | SSH cmd err, output: <nil>: 
	I0930 20:01:35.131786   26315 main.go:141] libmachine: (ha-805293-m03) KVM machine creation complete!
	I0930 20:01:35.132088   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetConfigRaw
	I0930 20:01:35.132882   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:35.133160   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:35.133330   26315 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 20:01:35.133343   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetState
	I0930 20:01:35.134758   26315 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 20:01:35.134778   26315 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 20:01:35.134789   26315 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 20:01:35.134797   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.137025   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.137368   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.137394   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.137501   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:35.137683   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.137839   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.137997   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:35.138162   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:01:35.138394   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0930 20:01:35.138405   26315 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 20:01:35.238733   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
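The WaitForSSH step above simply runs `exit 0` over SSH until it succeeds. A self-contained sketch of that probe using golang.org/x/crypto/ssh, with the address, user and key path taken from the log (this is an illustration, not minikube's actual implementation):

// Sketch: probing SSH readiness by running `exit 0` against the new VM.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func sshReady(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0") // succeeds once sshd accepts commands
}

func main() {
	err := sshReady("192.168.39.227:22", "docker",
		"/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa")
	fmt.Println("ssh ready:", err == nil)
}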
	I0930 20:01:35.238763   26315 main.go:141] libmachine: Detecting the provisioner...
	I0930 20:01:35.238775   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.242022   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.242527   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.242562   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.242839   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:35.243050   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.243235   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.243427   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:35.243630   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:01:35.243832   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0930 20:01:35.243850   26315 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 20:01:35.348183   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 20:01:35.348252   26315 main.go:141] libmachine: found compatible host: buildroot
	I0930 20:01:35.348261   26315 main.go:141] libmachine: Provisioning with buildroot...
	I0930 20:01:35.348268   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetMachineName
	I0930 20:01:35.348498   26315 buildroot.go:166] provisioning hostname "ha-805293-m03"
	I0930 20:01:35.348524   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetMachineName
	I0930 20:01:35.348749   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.351890   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.352398   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.352424   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.352577   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:35.352756   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.352894   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.353007   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:35.353167   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:01:35.353367   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0930 20:01:35.353384   26315 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-805293-m03 && echo "ha-805293-m03" | sudo tee /etc/hostname
	I0930 20:01:35.473967   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-805293-m03
	
	I0930 20:01:35.473997   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.476729   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.477054   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.477085   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.477369   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:35.477567   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.477748   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.477907   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:35.478077   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:01:35.478253   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0930 20:01:35.478270   26315 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-805293-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-805293-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-805293-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 20:01:35.591650   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 20:01:35.591680   26315 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 20:01:35.591697   26315 buildroot.go:174] setting up certificates
	I0930 20:01:35.591707   26315 provision.go:84] configureAuth start
	I0930 20:01:35.591715   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetMachineName
	I0930 20:01:35.591952   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetIP
	I0930 20:01:35.594901   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.595262   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.595286   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.595420   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.598100   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.598602   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.598626   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.598829   26315 provision.go:143] copyHostCerts
	I0930 20:01:35.598868   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:01:35.598917   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 20:01:35.598931   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:01:35.599012   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 20:01:35.599111   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:01:35.599134   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 20:01:35.599141   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:01:35.599179   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 20:01:35.599243   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:01:35.599270   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 20:01:35.599279   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:01:35.599331   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 20:01:35.599408   26315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.ha-805293-m03 san=[127.0.0.1 192.168.39.227 ha-805293-m03 localhost minikube]
	I0930 20:01:35.796149   26315 provision.go:177] copyRemoteCerts
	I0930 20:01:35.796206   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 20:01:35.796242   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.798946   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.799340   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.799368   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.799648   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:35.799848   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.800023   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:35.800180   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa Username:docker}
	I0930 20:01:35.882427   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 20:01:35.882508   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 20:01:35.906794   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 20:01:35.906860   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 20:01:35.932049   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 20:01:35.932131   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 20:01:35.957426   26315 provision.go:87] duration metric: took 365.707269ms to configureAuth
	I0930 20:01:35.957459   26315 buildroot.go:189] setting minikube options for container-runtime
	I0930 20:01:35.957679   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:01:35.957795   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.960499   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.960961   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.960996   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.961176   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:35.961403   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.961575   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.961765   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:35.961966   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:01:35.962139   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0930 20:01:35.962153   26315 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 20:01:36.182253   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 20:01:36.182280   26315 main.go:141] libmachine: Checking connection to Docker...
	I0930 20:01:36.182288   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetURL
	I0930 20:01:36.183907   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Using libvirt version 6000000
	I0930 20:01:36.186215   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.186549   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.186590   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.186762   26315 main.go:141] libmachine: Docker is up and running!
	I0930 20:01:36.186776   26315 main.go:141] libmachine: Reticulating splines...
	I0930 20:01:36.186783   26315 client.go:171] duration metric: took 22.235285837s to LocalClient.Create
	I0930 20:01:36.186801   26315 start.go:167] duration metric: took 22.235357522s to libmachine.API.Create "ha-805293"
	I0930 20:01:36.186810   26315 start.go:293] postStartSetup for "ha-805293-m03" (driver="kvm2")
	I0930 20:01:36.186826   26315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 20:01:36.186842   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:36.187054   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 20:01:36.187077   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:36.189228   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.189551   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.189577   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.189754   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:36.189932   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:36.190098   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:36.190211   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa Username:docker}
	I0930 20:01:36.269942   26315 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 20:01:36.274174   26315 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 20:01:36.274204   26315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 20:01:36.274281   26315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 20:01:36.274373   26315 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 20:01:36.274383   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /etc/ssl/certs/148752.pem
	I0930 20:01:36.274490   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 20:01:36.284037   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:01:36.308961   26315 start.go:296] duration metric: took 122.135978ms for postStartSetup
	I0930 20:01:36.309010   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetConfigRaw
	I0930 20:01:36.309613   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetIP
	I0930 20:01:36.312777   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.313257   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.313307   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.313687   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:01:36.313894   26315 start.go:128] duration metric: took 22.382961104s to createHost
	I0930 20:01:36.313917   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:36.316229   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.316599   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.316627   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.316783   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:36.316957   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:36.317109   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:36.317219   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:36.317366   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:01:36.317526   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0930 20:01:36.317537   26315 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 20:01:36.419858   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727726496.392744661
	
	I0930 20:01:36.419877   26315 fix.go:216] guest clock: 1727726496.392744661
	I0930 20:01:36.419884   26315 fix.go:229] Guest: 2024-09-30 20:01:36.392744661 +0000 UTC Remote: 2024-09-30 20:01:36.313905276 +0000 UTC m=+139.884995221 (delta=78.839385ms)
	I0930 20:01:36.419899   26315 fix.go:200] guest clock delta is within tolerance: 78.839385ms
	I0930 20:01:36.419904   26315 start.go:83] releasing machines lock for "ha-805293-m03", held for 22.489079696s
	I0930 20:01:36.419932   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:36.420201   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetIP
	I0930 20:01:36.422678   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.423024   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.423063   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.425360   26315 out.go:177] * Found network options:
	I0930 20:01:36.426711   26315 out.go:177]   - NO_PROXY=192.168.39.3,192.168.39.220
	W0930 20:01:36.427962   26315 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 20:01:36.427990   26315 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 20:01:36.428012   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:36.428657   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:36.428857   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:36.428967   26315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 20:01:36.429007   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	W0930 20:01:36.429092   26315 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 20:01:36.429124   26315 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 20:01:36.429190   26315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 20:01:36.429211   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:36.431941   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.432202   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.432300   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.432322   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.432458   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:36.432598   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:36.432659   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.432683   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.432755   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:36.432845   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:36.432915   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa Username:docker}
	I0930 20:01:36.432995   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:36.433083   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:36.433164   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa Username:docker}
	I0930 20:01:36.661994   26315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 20:01:36.669285   26315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 20:01:36.669354   26315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 20:01:36.686879   26315 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 20:01:36.686911   26315 start.go:495] detecting cgroup driver to use...
	I0930 20:01:36.687008   26315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 20:01:36.703695   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 20:01:36.717831   26315 docker.go:217] disabling cri-docker service (if available) ...
	I0930 20:01:36.717898   26315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 20:01:36.732194   26315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 20:01:36.746205   26315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 20:01:36.873048   26315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 20:01:37.031067   26315 docker.go:233] disabling docker service ...
	I0930 20:01:37.031142   26315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 20:01:37.047034   26315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 20:01:37.059962   26315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 20:01:37.191501   26315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 20:01:37.302357   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 20:01:37.316910   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 20:01:37.336669   26315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 20:01:37.336739   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.347286   26315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 20:01:37.347361   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.357984   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.368059   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.379248   26315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 20:01:37.390460   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.401206   26315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.418758   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.428841   26315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 20:01:37.438255   26315 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 20:01:37.438328   26315 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 20:01:37.451070   26315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 20:01:37.460818   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:01:37.578097   26315 ssh_runner.go:195] Run: sudo systemctl restart crio
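The run of commands above is the CRI-O preparation on the new node: crictl is pointed at /var/run/crio/crio.sock, the pause image is pinned to registry.k8s.io/pause:3.10, the cgroup manager is forced to cgroupfs, unprivileged low ports are opened through default_sysctls, br_netfilter and ip_forward are enabled, and crio is restarted. As a rough illustration only (not minikube's implementation), the two central config edits amount to the following; the config path and replacement values are taken from the log, everything else is assumed:

// Sketch: rewrite pause_image and cgroup_manager in CRI-O's drop-in config,
// mirroring the sed one-liners in the log above. Illustrative only.
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	s := string(data)
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10"`)
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
		panic(err)
	}
	// A `systemctl restart crio`, as in the log, is still needed to apply the change.
}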
	I0930 20:01:37.670992   26315 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 20:01:37.671072   26315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 20:01:37.675792   26315 start.go:563] Will wait 60s for crictl version
	I0930 20:01:37.675847   26315 ssh_runner.go:195] Run: which crictl
	I0930 20:01:37.679190   26315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 20:01:37.718042   26315 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 20:01:37.718121   26315 ssh_runner.go:195] Run: crio --version
	I0930 20:01:37.745873   26315 ssh_runner.go:195] Run: crio --version
	I0930 20:01:37.774031   26315 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 20:01:37.775415   26315 out.go:177]   - env NO_PROXY=192.168.39.3
	I0930 20:01:37.776644   26315 out.go:177]   - env NO_PROXY=192.168.39.3,192.168.39.220
	I0930 20:01:37.777763   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetIP
	I0930 20:01:37.780596   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:37.780948   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:37.780970   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:37.781145   26315 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 20:01:37.785213   26315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 20:01:37.797526   26315 mustload.go:65] Loading cluster: ha-805293
	I0930 20:01:37.797767   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:01:37.798120   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:01:37.798167   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:01:37.813162   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46385
	I0930 20:01:37.813567   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:01:37.814037   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:01:37.814052   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:01:37.814397   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:01:37.814604   26315 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 20:01:37.816041   26315 host.go:66] Checking if "ha-805293" exists ...
	I0930 20:01:37.816336   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:01:37.816371   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:01:37.831585   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37645
	I0930 20:01:37.832045   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:01:37.832532   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:01:37.832557   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:01:37.832860   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:01:37.833026   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:01:37.833192   26315 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293 for IP: 192.168.39.227
	I0930 20:01:37.833209   26315 certs.go:194] generating shared ca certs ...
	I0930 20:01:37.833229   26315 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:01:37.833416   26315 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 20:01:37.833471   26315 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 20:01:37.833484   26315 certs.go:256] generating profile certs ...
	I0930 20:01:37.833587   26315 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key
	I0930 20:01:37.833619   26315 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.07a59e55
	I0930 20:01:37.833638   26315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.07a59e55 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.3 192.168.39.220 192.168.39.227 192.168.39.254]
	I0930 20:01:38.116566   26315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.07a59e55 ...
	I0930 20:01:38.116596   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.07a59e55: {Name:mkc0cd033bb8a494a4cf8a08dfd67f55b67932e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:01:38.116763   26315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.07a59e55 ...
	I0930 20:01:38.116776   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.07a59e55: {Name:mk85317566d0a2f89680d96c44f0e865cd88a3f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:01:38.116847   26315 certs.go:381] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.07a59e55 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt
	I0930 20:01:38.116983   26315 certs.go:385] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.07a59e55 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key
	I0930 20:01:38.117102   26315 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key
	I0930 20:01:38.117117   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 20:01:38.117131   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 20:01:38.117145   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 20:01:38.117158   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 20:01:38.117175   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 20:01:38.117187   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 20:01:38.117198   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 20:01:38.131699   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 20:01:38.131811   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 20:01:38.131856   26315 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 20:01:38.131870   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 20:01:38.131902   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 20:01:38.131926   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 20:01:38.131956   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 20:01:38.132010   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:01:38.132045   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:01:38.132066   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem -> /usr/share/ca-certificates/14875.pem
	I0930 20:01:38.132084   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /usr/share/ca-certificates/148752.pem
	I0930 20:01:38.132129   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:01:38.135411   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:01:38.135848   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:01:38.135875   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:01:38.136103   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:01:38.136307   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:01:38.136477   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:01:38.136602   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:01:38.215899   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0930 20:01:38.221340   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0930 20:01:38.232045   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0930 20:01:38.236011   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0930 20:01:38.247009   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0930 20:01:38.250999   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0930 20:01:38.261524   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0930 20:01:38.265766   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0930 20:01:38.275973   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0930 20:01:38.279940   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0930 20:01:38.289617   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0930 20:01:38.293330   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0930 20:01:38.303037   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 20:01:38.328067   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 20:01:38.353124   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 20:01:38.377109   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 20:01:38.402737   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0930 20:01:38.432128   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 20:01:38.459728   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 20:01:38.484047   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 20:01:38.508033   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 20:01:38.530855   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 20:01:38.554688   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 20:01:38.579730   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0930 20:01:38.595907   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0930 20:01:38.611657   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0930 20:01:38.627976   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0930 20:01:38.644290   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0930 20:01:38.662490   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0930 20:01:38.678795   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0930 20:01:38.694165   26315 ssh_runner.go:195] Run: openssl version
	I0930 20:01:38.699696   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 20:01:38.709850   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:01:38.714078   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:01:38.714128   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:01:38.719944   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 20:01:38.730979   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 20:01:38.741564   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 20:01:38.746132   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 20:01:38.746193   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 20:01:38.751872   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 20:01:38.763738   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 20:01:38.775831   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 20:01:38.780819   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 20:01:38.780877   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 20:01:38.786554   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
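The certificate phase above regenerates the shared apiserver serving certificate so its SANs cover every control-plane endpoint, including the new node's IP 192.168.39.227 and the kube-vip address 192.168.39.254, then copies the CA, profile and client material onto the node and links the CA hashes under /etc/ssl/certs. A minimal sketch (not minikube code) for inspecting those SANs; the path is the target location shown in the log:

// Sketch: print the DNS and IP SANs of the apiserver certificate that was
// just copied to the node. Path taken from the log; everything else assumed.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
}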
	I0930 20:01:38.797347   26315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 20:01:38.801341   26315 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 20:01:38.801400   26315 kubeadm.go:934] updating node {m03 192.168.39.227 8443 v1.31.1 crio true true} ...
	I0930 20:01:38.801503   26315 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-805293-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 20:01:38.801529   26315 kube-vip.go:115] generating kube-vip config ...
	I0930 20:01:38.801578   26315 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 20:01:38.819903   26315 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 20:01:38.819976   26315 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
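The YAML above is the kube-vip static-pod manifest; it is written to /etc/kubernetes/manifests/kube-vip.yaml further down, so kubelet runs it on every control-plane node. The pods elect a leader through the plndr-cp-lock lease, and the leader answers on the virtual address 192.168.39.254:8443 with load-balancing back to the members' apiservers. The sketch below is only a hedged reachability probe of that VIP (it assumes the address from the manifest and deliberately skips certificate verification):

// Sketch: probe the kube-vip virtual IP from the manifest above.
// Illustrative only; not part of the test.
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	d := &net.Dialer{Timeout: 3 * time.Second}
	// The apiserver uses the cluster CA, so skip verification for this probe only.
	conn, err := tls.DialWithDialer(d, "tcp", "192.168.39.254:8443", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("VIP answered the TLS handshake")
}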
	I0930 20:01:38.820036   26315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 20:01:38.830324   26315 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0930 20:01:38.830375   26315 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0930 20:01:38.842272   26315 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0930 20:01:38.842334   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 20:01:38.842272   26315 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0930 20:01:38.842272   26315 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0930 20:01:38.842419   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 20:01:38.842439   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 20:01:38.842489   26315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 20:01:38.842540   26315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 20:01:38.861520   26315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0930 20:01:38.861559   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0930 20:01:38.861581   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 20:01:38.861631   26315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0930 20:01:38.861657   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0930 20:01:38.861689   26315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 20:01:38.875651   26315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0930 20:01:38.875695   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
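Because /var/lib/minikube/binaries/v1.31.1 does not exist yet on the node, kubectl, kubeadm and kubelet are downloaded from dl.k8s.io with checksums pinned to the published .sha256 files and then copied over SSH. A hedged sketch of that integrity check (file names are placeholders; this is not minikube's downloader):

// Sketch: compare a downloaded binary's SHA-256 against a published .sha256 file.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

func fileSHA256(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	got, err := fileSHA256("kubelet") // placeholder: the downloaded binary
	if err != nil {
		panic(err)
	}
	raw, err := os.ReadFile("kubelet.sha256") // placeholder: the published digest
	if err != nil {
		panic(err)
	}
	fields := strings.Fields(string(raw))
	if len(fields) == 0 || got != fields[0] {
		fmt.Println("checksum mismatch")
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}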
	I0930 20:01:39.808722   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0930 20:01:39.819615   26315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0930 20:01:39.836414   26315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 20:01:39.853331   26315 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 20:01:39.869585   26315 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 20:01:39.873243   26315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 20:01:39.884957   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:01:40.006850   26315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:01:40.022775   26315 host.go:66] Checking if "ha-805293" exists ...
	I0930 20:01:40.023225   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:01:40.023284   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:01:40.040829   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I0930 20:01:40.041301   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:01:40.041861   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:01:40.041890   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:01:40.042247   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:01:40.042469   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:01:40.042649   26315 start.go:317] joinCluster: &{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:01:40.042812   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0930 20:01:40.042834   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:01:40.046258   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:01:40.046800   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:01:40.046821   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:01:40.047017   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:01:40.047286   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:01:40.047660   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:01:40.047833   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:01:40.209323   26315 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:01:40.209377   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1eegwc.d3x1pf4onbzzskk3 --discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-805293-m03 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443"
	I0930 20:02:03.693864   26315 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1eegwc.d3x1pf4onbzzskk3 --discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-805293-m03 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443": (23.484455167s)
	I0930 20:02:03.693901   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0930 20:02:04.227863   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-805293-m03 minikube.k8s.io/updated_at=2024_09_30T20_02_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022 minikube.k8s.io/name=ha-805293 minikube.k8s.io/primary=false
	I0930 20:02:04.356839   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-805293-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0930 20:02:04.460804   26315 start.go:319] duration metric: took 24.418151981s to joinCluster
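Joining m03 as an additional control plane is a single kubeadm join against the control-plane.minikube.internal:8443 endpoint using a freshly created bootstrap token, followed by the minikube node labels and removal of the control-plane NoSchedule taint. Below is only a sketch of invoking such a join with os/exec; the token and discovery hash are placeholders, and in the test the command runs through minikube's ssh_runner rather than locally:

// Sketch: run a control-plane kubeadm join like the one in the log.
// Token and hash are placeholders; the other flags mirror the log line.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("/var/lib/minikube/binaries/v1.31.1/kubeadm", "join",
		"control-plane.minikube.internal:8443",
		"--token", "<bootstrap-token>",
		"--discovery-token-ca-cert-hash", "sha256:<ca-cert-hash>",
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name", "ha-805293-m03",
		"--control-plane",
		"--apiserver-advertise-address", "192.168.39.227",
		"--apiserver-bind-port", "8443",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}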
	I0930 20:02:04.460890   26315 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:02:04.461213   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:02:04.462900   26315 out.go:177] * Verifying Kubernetes components...
	I0930 20:02:04.464457   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:02:04.710029   26315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:02:04.776170   26315 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:02:04.776405   26315 kapi.go:59] client config for ha-805293: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key", CAFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 20:02:04.776460   26315 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.3:8443
	I0930 20:02:04.776741   26315 node_ready.go:35] waiting up to 6m0s for node "ha-805293-m03" to be "Ready" ...
	I0930 20:02:04.776826   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:04.776836   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:04.776843   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:04.776849   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:04.780756   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:05.277289   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:05.277316   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:05.277328   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:05.277336   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:05.280839   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:05.777768   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:05.777793   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:05.777802   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:05.777810   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:05.781540   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:06.277679   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:06.277703   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:06.277713   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:06.277719   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:06.281145   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:06.777911   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:06.777937   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:06.777949   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:06.777955   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:06.781669   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:06.782486   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
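From here the client polls GET /api/v1/nodes/ha-805293-m03 roughly every 500ms, for up to the 6m0s budget, until the node's Ready condition turns true; the repeated 200 responses and "Ready":"False" entries that follow are that loop. An equivalent check written against client-go could look like the sketch below (the kubeconfig path, timeout handling and logging are assumptions, not the test's code):

// Sketch: wait for a node's Ready condition using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // the 6m0s wait from the log
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-805293-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the poll interval seen above
	}
	fmt.Println("timed out waiting for node Ready")
}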
	I0930 20:02:07.277405   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:07.277428   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:07.277435   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:07.277438   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:07.281074   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:07.776952   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:07.776984   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:07.777005   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:07.777010   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:07.780689   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:08.277555   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:08.277576   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:08.277583   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:08.277587   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:08.283539   26315 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 20:02:08.777360   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:08.777381   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:08.777390   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:08.777394   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:08.780937   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:09.277721   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:09.277758   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:09.277768   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:09.277772   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:09.285233   26315 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 20:02:09.285662   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:09.776955   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:09.776977   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:09.776987   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:09.776992   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:09.781593   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:10.277015   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:10.277033   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:10.277045   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:10.277049   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:10.281851   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:10.777471   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:10.777502   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:10.777513   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:10.777518   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:10.780948   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:11.277959   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:11.277977   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:11.277985   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:11.277989   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:11.401106   26315 round_trippers.go:574] Response Status: 200 OK in 123 milliseconds
	I0930 20:02:11.401822   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:11.777418   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:11.777439   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:11.777447   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:11.777451   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:11.780577   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:12.277563   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:12.277586   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:12.277594   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:12.277600   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:12.280508   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:12.777614   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:12.777635   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:12.777644   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:12.777649   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:12.780589   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:13.277609   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:13.277647   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:13.277658   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:13.277664   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:13.280727   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:13.777657   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:13.777684   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:13.777692   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:13.777699   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:13.781417   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:13.781894   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:14.277640   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:14.277665   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:14.277674   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:14.277678   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:14.281731   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:14.777599   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:14.777622   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:14.777633   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:14.777638   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:14.780768   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:15.277270   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:15.277293   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:15.277302   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:15.277308   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:15.281504   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:15.777339   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:15.777363   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:15.777374   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:15.777380   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:15.780737   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:16.277475   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:16.277500   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:16.277508   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:16.277513   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:16.281323   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:16.281879   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:16.777003   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:16.777026   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:16.777033   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:16.777038   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:16.780794   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:17.277324   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:17.277345   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:17.277353   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:17.277362   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:17.281320   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:17.777286   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:17.777313   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:17.777323   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:17.777329   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:17.781420   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:18.277338   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:18.277361   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:18.277369   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:18.277374   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:18.280798   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:18.777933   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:18.777955   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:18.777963   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:18.777967   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:18.781895   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:18.782295   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:19.277039   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:19.277062   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:19.277070   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:19.277074   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:19.280872   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:19.776906   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:19.776931   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:19.776941   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:19.776945   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:19.789070   26315 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0930 20:02:20.277619   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:20.277645   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:20.277657   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:20.277664   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:20.281050   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:20.777108   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:20.777132   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:20.777140   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:20.777145   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:20.780896   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:21.277715   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:21.277737   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:21.277746   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:21.277750   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:21.281198   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:21.281766   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:21.777774   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:21.777798   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:21.777812   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:21.777818   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:21.781858   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:22.277699   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:22.277726   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.277737   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.277741   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.281520   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:22.777562   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:22.777588   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.777599   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.777606   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.781172   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:22.781900   26315 node_ready.go:49] node "ha-805293-m03" has status "Ready":"True"
	I0930 20:02:22.781919   26315 node_ready.go:38] duration metric: took 18.00516261s for node "ha-805293-m03" to be "Ready" ...
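The loop above is the node_ready wait: the log GETs /api/v1/nodes/ha-805293-m03 roughly every 500ms until the node's Ready condition flips to True (about 18s here). A minimal client-go sketch of the same polling pattern follows; the package name, 500ms interval, and 6-minute timeout are illustrative assumptions, not minikube's actual implementation.

    package nodewait

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // WaitNodeReady polls the API server until the named node reports the
    // Ready condition as True, or the timeout expires. Illustrative sketch only.
    func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat transient API errors as "not ready yet"
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }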
	I0930 20:02:22.781930   26315 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 20:02:22.782018   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:02:22.782034   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.782045   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.782050   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.788078   26315 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 20:02:22.794707   26315 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x7zjp" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.794792   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-x7zjp
	I0930 20:02:22.794802   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.794843   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.794851   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.798283   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:22.799034   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:22.799049   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.799059   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.799063   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.802512   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:22.803017   26315 pod_ready.go:93] pod "coredns-7c65d6cfc9-x7zjp" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:22.803034   26315 pod_ready.go:82] duration metric: took 8.303758ms for pod "coredns-7c65d6cfc9-x7zjp" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.803043   26315 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-z4bkv" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.803100   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-z4bkv
	I0930 20:02:22.803108   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.803115   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.803120   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.805708   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:22.806288   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:22.806303   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.806309   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.806314   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.808794   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:22.809193   26315 pod_ready.go:93] pod "coredns-7c65d6cfc9-z4bkv" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:22.809210   26315 pod_ready.go:82] duration metric: took 6.159698ms for pod "coredns-7c65d6cfc9-z4bkv" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.809221   26315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.809280   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-805293
	I0930 20:02:22.809291   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.809302   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.809310   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.811844   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:22.812420   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:22.812435   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.812441   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.812443   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.814572   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:22.815425   26315 pod_ready.go:93] pod "etcd-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:22.815446   26315 pod_ready.go:82] duration metric: took 6.21739ms for pod "etcd-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.815467   26315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.815571   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-805293-m02
	I0930 20:02:22.815579   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.815589   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.815596   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.819297   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:22.820054   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:22.820071   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.820078   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.820082   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.822946   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:22.823362   26315 pod_ready.go:93] pod "etcd-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:22.823377   26315 pod_ready.go:82] duration metric: took 7.903457ms for pod "etcd-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.823386   26315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.977860   26315 request.go:632] Waited for 154.412889ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-805293-m03
	I0930 20:02:22.977929   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-805293-m03
	I0930 20:02:22.977936   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.977947   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.977956   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.981875   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:23.177702   26315 request.go:632] Waited for 195.197886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:23.177761   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:23.177766   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:23.177774   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:23.177779   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:23.180898   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:23.181332   26315 pod_ready.go:93] pod "etcd-ha-805293-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:23.181350   26315 pod_ready.go:82] duration metric: took 357.955948ms for pod "etcd-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
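The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's local token-bucket rate limiter (roughly 5 QPS with a burst of 10 when left unset), not from API Priority and Fairness on the server. A sketch of raising those limits on a rest.Config before building the clientset; the 50/100 values and function name are arbitrary examples, not minikube's configuration:

    package clientcfg

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    	"k8s.io/client-go/util/flowcontrol"
    )

    // NewFasterClient bumps the client-side rate limit so bursts of GETs like the
    // ones above are not queued locally. Sketch only; tune values to the workload.
    func NewFasterClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
    	cfg.QPS = 50
    	cfg.Burst = 100
    	// Equivalent explicit limiter; setting QPS/Burst alone is also enough.
    	cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(cfg.QPS, cfg.Burst)
    	return kubernetes.NewForConfig(cfg)
    }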
	I0930 20:02:23.181366   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:23.377609   26315 request.go:632] Waited for 196.161944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293
	I0930 20:02:23.377673   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293
	I0930 20:02:23.377681   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:23.377691   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:23.377697   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:23.381213   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:23.578424   26315 request.go:632] Waited for 196.368077ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:23.578500   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:23.578506   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:23.578514   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:23.578528   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:23.581799   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:23.582390   26315 pod_ready.go:93] pod "kube-apiserver-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:23.582406   26315 pod_ready.go:82] duration metric: took 401.034594ms for pod "kube-apiserver-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:23.582416   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:23.778543   26315 request.go:632] Waited for 196.052617ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293-m02
	I0930 20:02:23.778624   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293-m02
	I0930 20:02:23.778633   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:23.778643   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:23.778653   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:23.781828   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:23.977855   26315 request.go:632] Waited for 195.382083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:23.977924   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:23.977944   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:23.977959   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:23.977965   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:23.981372   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:23.982066   26315 pod_ready.go:93] pod "kube-apiserver-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:23.982087   26315 pod_ready.go:82] duration metric: took 399.664005ms for pod "kube-apiserver-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:23.982100   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:24.178123   26315 request.go:632] Waited for 195.960731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293-m03
	I0930 20:02:24.178196   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293-m03
	I0930 20:02:24.178203   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:24.178211   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:24.178236   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:24.182112   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:24.378558   26315 request.go:632] Waited for 195.433009ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:24.378638   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:24.378643   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:24.378650   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:24.378656   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:24.382291   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:24.382917   26315 pod_ready.go:93] pod "kube-apiserver-ha-805293-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:24.382938   26315 pod_ready.go:82] duration metric: took 400.829354ms for pod "kube-apiserver-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:24.382948   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:24.577887   26315 request.go:632] Waited for 194.863294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293
	I0930 20:02:24.577956   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293
	I0930 20:02:24.577963   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:24.577971   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:24.577978   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:24.581564   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:24.778150   26315 request.go:632] Waited for 195.36459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:24.778203   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:24.778208   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:24.778216   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:24.778221   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:24.781210   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:24.781808   26315 pod_ready.go:93] pod "kube-controller-manager-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:24.781826   26315 pod_ready.go:82] duration metric: took 398.871488ms for pod "kube-controller-manager-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:24.781839   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:24.977967   26315 request.go:632] Waited for 196.028192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293-m02
	I0930 20:02:24.978039   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293-m02
	I0930 20:02:24.978046   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:24.978055   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:24.978062   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:24.981635   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:25.177628   26315 request.go:632] Waited for 195.118197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:25.177702   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:25.177707   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:25.177715   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:25.177722   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:25.184032   26315 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 20:02:25.185117   26315 pod_ready.go:93] pod "kube-controller-manager-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:25.185151   26315 pod_ready.go:82] duration metric: took 403.303748ms for pod "kube-controller-manager-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:25.185168   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:25.378088   26315 request.go:632] Waited for 192.829504ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293-m03
	I0930 20:02:25.378247   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293-m03
	I0930 20:02:25.378262   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:25.378274   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:25.378284   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:25.382197   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:25.578183   26315 request.go:632] Waited for 195.374549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:25.578237   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:25.578241   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:25.578249   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:25.578273   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:25.581302   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:25.581967   26315 pod_ready.go:93] pod "kube-controller-manager-ha-805293-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:25.581990   26315 pod_ready.go:82] duration metric: took 396.812632ms for pod "kube-controller-manager-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:25.582004   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6gnt4" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:25.778066   26315 request.go:632] Waited for 195.961131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gnt4
	I0930 20:02:25.778120   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gnt4
	I0930 20:02:25.778125   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:25.778132   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:25.778136   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:25.781487   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:25.977671   26315 request.go:632] Waited for 195.30691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:25.977755   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:25.977762   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:25.977769   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:25.977775   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:25.981674   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:25.982338   26315 pod_ready.go:93] pod "kube-proxy-6gnt4" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:25.982360   26315 pod_ready.go:82] duration metric: took 400.349266ms for pod "kube-proxy-6gnt4" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:25.982370   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b9cpp" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:26.178400   26315 request.go:632] Waited for 195.958284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b9cpp
	I0930 20:02:26.178455   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b9cpp
	I0930 20:02:26.178460   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:26.178468   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:26.178474   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:26.181740   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:26.377643   26315 request.go:632] Waited for 195.301602ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:26.377715   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:26.377720   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:26.377730   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:26.377736   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:26.381534   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:26.382336   26315 pod_ready.go:93] pod "kube-proxy-b9cpp" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:26.382356   26315 pod_ready.go:82] duration metric: took 399.97947ms for pod "kube-proxy-b9cpp" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:26.382369   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vptrg" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:26.578135   26315 request.go:632] Waited for 195.696435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vptrg
	I0930 20:02:26.578222   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vptrg
	I0930 20:02:26.578231   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:26.578239   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:26.578246   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:26.581969   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:26.778092   26315 request.go:632] Waited for 195.270119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:26.778175   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:26.778183   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:26.778194   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:26.778204   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:26.781951   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:26.782497   26315 pod_ready.go:93] pod "kube-proxy-vptrg" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:26.782530   26315 pod_ready.go:82] duration metric: took 400.140578ms for pod "kube-proxy-vptrg" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:26.782542   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:26.978290   26315 request.go:632] Waited for 195.637761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293
	I0930 20:02:26.978361   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293
	I0930 20:02:26.978368   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:26.978377   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:26.978381   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:26.982459   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:27.178413   26315 request.go:632] Waited for 195.235139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:27.178464   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:27.178469   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:27.178476   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:27.178479   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:27.182089   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:27.182674   26315 pod_ready.go:93] pod "kube-scheduler-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:27.182695   26315 pod_ready.go:82] duration metric: took 400.147259ms for pod "kube-scheduler-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:27.182706   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:27.377673   26315 request.go:632] Waited for 194.89364ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293-m02
	I0930 20:02:27.377752   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293-m02
	I0930 20:02:27.377758   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:27.377765   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:27.377769   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:27.381356   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:27.578554   26315 request.go:632] Waited for 196.443432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:27.578622   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:27.578630   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:27.578641   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:27.578647   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:27.582325   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:27.582942   26315 pod_ready.go:93] pod "kube-scheduler-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:27.582965   26315 pod_ready.go:82] duration metric: took 400.251961ms for pod "kube-scheduler-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:27.582978   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:27.778055   26315 request.go:632] Waited for 195.008545ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293-m03
	I0930 20:02:27.778129   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293-m03
	I0930 20:02:27.778135   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:27.778142   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:27.778147   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:27.782023   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:27.977660   26315 request.go:632] Waited for 194.950522ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:27.977742   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:27.977752   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:27.977762   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:27.977769   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:27.981329   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:27.981878   26315 pod_ready.go:93] pod "kube-scheduler-ha-805293-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:27.981905   26315 pod_ready.go:82] duration metric: took 398.919132ms for pod "kube-scheduler-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:27.981920   26315 pod_ready.go:39] duration metric: took 5.199971217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
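The block above is the pod_ready phase: for each system-critical pod matched by the listed label selectors, the tool reads the pod and then checks its Ready condition. A compact sketch of the same check for one selector; the package name and selector argument are assumptions for illustration:

    package podwait

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // AllReady reports whether every kube-system pod matching the selector
    // (e.g. "k8s-app=kube-dns" or "component=etcd") has condition Ready=True.
    func AllReady(ctx context.Context, cs kubernetes.Interface, selector string) (bool, error) {
    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
    	if err != nil {
    		return false, err
    	}
    	for _, p := range pods.Items {
    		ready := false
    		for _, c := range p.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		if !ready {
    			return false, nil
    		}
    	}
    	return true, nil
    }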
	I0930 20:02:27.981939   26315 api_server.go:52] waiting for apiserver process to appear ...
	I0930 20:02:27.982009   26315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 20:02:27.999589   26315 api_server.go:72] duration metric: took 23.538667198s to wait for apiserver process to appear ...
	I0930 20:02:27.999616   26315 api_server.go:88] waiting for apiserver healthz status ...
	I0930 20:02:27.999635   26315 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I0930 20:02:28.006690   26315 api_server.go:279] https://192.168.39.3:8443/healthz returned 200:
	ok
	I0930 20:02:28.006768   26315 round_trippers.go:463] GET https://192.168.39.3:8443/version
	I0930 20:02:28.006788   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:28.006799   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:28.006804   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:28.008072   26315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0930 20:02:28.008144   26315 api_server.go:141] control plane version: v1.31.1
	I0930 20:02:28.008163   26315 api_server.go:131] duration metric: took 8.540356ms to wait for apiserver health ...
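The healthz probe and the /version request above map directly onto the discovery client: /healthz can be fetched through the REST client's raw path interface and the control-plane version through ServerVersion(). A sketch, assuming an already-built clientset:

    package apicheck

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    )

    // CheckAPIServer mirrors the two requests above: a raw GET of /healthz and a
    // GET of /version. Sketch only.
    func CheckAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
    	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
    	if err != nil {
    		return fmt.Errorf("healthz: %w", err)
    	}
    	fmt.Printf("/healthz returned: %s\n", body) // expected body is "ok"

    	v, err := cs.Discovery().ServerVersion()
    	if err != nil {
    		return fmt.Errorf("version: %w", err)
    	}
    	fmt.Printf("control plane version: %s\n", v.GitVersion) // e.g. v1.31.1
    	return nil
    }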
	I0930 20:02:28.008173   26315 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 20:02:28.178582   26315 request.go:632] Waited for 170.336703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:02:28.178653   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:02:28.178673   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:28.178683   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:28.178688   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:28.186196   26315 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 20:02:28.192615   26315 system_pods.go:59] 24 kube-system pods found
	I0930 20:02:28.192646   26315 system_pods.go:61] "coredns-7c65d6cfc9-x7zjp" [b5b20ed2-1d94-49b9-ab9e-17e27d1012d0] Running
	I0930 20:02:28.192651   26315 system_pods.go:61] "coredns-7c65d6cfc9-z4bkv" [c6ba0288-138e-4690-a68d-6d6378e28deb] Running
	I0930 20:02:28.192656   26315 system_pods.go:61] "etcd-ha-805293" [399ae7f6-cec9-4e8d-bda2-6c85dbcc5613] Running
	I0930 20:02:28.192661   26315 system_pods.go:61] "etcd-ha-805293-m02" [06ff461f-0ed1-4010-bcf7-1e82e4a589eb] Running
	I0930 20:02:28.192665   26315 system_pods.go:61] "etcd-ha-805293-m03" [c87078d8-ee99-4a5f-9258-cf5d7e658388] Running
	I0930 20:02:28.192668   26315 system_pods.go:61] "kindnet-lfldt" [62cfaae6-e635-4ba4-a0db-77d008d12706] Running
	I0930 20:02:28.192671   26315 system_pods.go:61] "kindnet-qrhb8" [852c4080-9210-47bb-a06a-d1b8bcff580d] Running
	I0930 20:02:28.192675   26315 system_pods.go:61] "kindnet-slhtm" [a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88] Running
	I0930 20:02:28.192679   26315 system_pods.go:61] "kube-apiserver-ha-805293" [e975ca94-0069-4dfc-bc42-fa14fff226d5] Running
	I0930 20:02:28.192682   26315 system_pods.go:61] "kube-apiserver-ha-805293-m02" [c0f6d06d-f2d3-4796-ba43-16db58da16f7] Running
	I0930 20:02:28.192687   26315 system_pods.go:61] "kube-apiserver-ha-805293-m03" [6fb5a285-7f35-4eb2-b028-6bd9fcfd21fe] Running
	I0930 20:02:28.192691   26315 system_pods.go:61] "kube-controller-manager-ha-805293" [01616da3-61eb-494b-a55c-28acaa308938] Running
	I0930 20:02:28.192695   26315 system_pods.go:61] "kube-controller-manager-ha-805293-m02" [14e035c1-fd94-43ab-aa98-3f20108eba57] Running
	I0930 20:02:28.192698   26315 system_pods.go:61] "kube-controller-manager-ha-805293-m03" [35d67e4a-f434-49df-8fb9-c6fcc725d8ff] Running
	I0930 20:02:28.192702   26315 system_pods.go:61] "kube-proxy-6gnt4" [a90b0c3f-e9c3-4cb9-8773-8253bd72ab51] Running
	I0930 20:02:28.192706   26315 system_pods.go:61] "kube-proxy-b9cpp" [c828ff6a-6cbb-4a29-84bc-118522687da8] Running
	I0930 20:02:28.192710   26315 system_pods.go:61] "kube-proxy-vptrg" [324c92ea-b82f-4efa-b63c-4c590bbf214d] Running
	I0930 20:02:28.192714   26315 system_pods.go:61] "kube-scheduler-ha-805293" [fbff9dea-1599-43ab-bb92-df8c5231bb87] Running
	I0930 20:02:28.192720   26315 system_pods.go:61] "kube-scheduler-ha-805293-m02" [9e69f915-83ac-48de-9bd6-3d245a2e82be] Running
	I0930 20:02:28.192723   26315 system_pods.go:61] "kube-scheduler-ha-805293-m03" [34e2edf8-ca25-4a7c-a626-ac037b40b905] Running
	I0930 20:02:28.192729   26315 system_pods.go:61] "kube-vip-ha-805293" [9c629f9e-1b42-4680-9fd8-2dae4cec07f8] Running
	I0930 20:02:28.192732   26315 system_pods.go:61] "kube-vip-ha-805293-m02" [ec99538b-4f84-4078-b64d-23086cbf2c45] Running
	I0930 20:02:28.192735   26315 system_pods.go:61] "kube-vip-ha-805293-m03" [fcc5a165-5430-45d3-8ec7-fbdf5adc7e20] Running
	I0930 20:02:28.192738   26315 system_pods.go:61] "storage-provisioner" [1912fdf8-d789-4ba9-99ff-c87ccbf330ec] Running
	I0930 20:02:28.192747   26315 system_pods.go:74] duration metric: took 184.564973ms to wait for pod list to return data ...
	I0930 20:02:28.192756   26315 default_sa.go:34] waiting for default service account to be created ...
	I0930 20:02:28.378324   26315 request.go:632] Waited for 185.488908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/default/serviceaccounts
	I0930 20:02:28.378382   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/default/serviceaccounts
	I0930 20:02:28.378387   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:28.378394   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:28.378398   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:28.382352   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:28.382515   26315 default_sa.go:45] found service account: "default"
	I0930 20:02:28.382532   26315 default_sa.go:55] duration metric: took 189.767008ms for default service account to be created ...
	I0930 20:02:28.382546   26315 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 20:02:28.578010   26315 request.go:632] Waited for 195.370903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:02:28.578070   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:02:28.578076   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:28.578083   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:28.578087   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:28.584177   26315 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 20:02:28.592272   26315 system_pods.go:86] 24 kube-system pods found
	I0930 20:02:28.592310   26315 system_pods.go:89] "coredns-7c65d6cfc9-x7zjp" [b5b20ed2-1d94-49b9-ab9e-17e27d1012d0] Running
	I0930 20:02:28.592319   26315 system_pods.go:89] "coredns-7c65d6cfc9-z4bkv" [c6ba0288-138e-4690-a68d-6d6378e28deb] Running
	I0930 20:02:28.592330   26315 system_pods.go:89] "etcd-ha-805293" [399ae7f6-cec9-4e8d-bda2-6c85dbcc5613] Running
	I0930 20:02:28.592336   26315 system_pods.go:89] "etcd-ha-805293-m02" [06ff461f-0ed1-4010-bcf7-1e82e4a589eb] Running
	I0930 20:02:28.592341   26315 system_pods.go:89] "etcd-ha-805293-m03" [c87078d8-ee99-4a5f-9258-cf5d7e658388] Running
	I0930 20:02:28.592346   26315 system_pods.go:89] "kindnet-lfldt" [62cfaae6-e635-4ba4-a0db-77d008d12706] Running
	I0930 20:02:28.592351   26315 system_pods.go:89] "kindnet-qrhb8" [852c4080-9210-47bb-a06a-d1b8bcff580d] Running
	I0930 20:02:28.592357   26315 system_pods.go:89] "kindnet-slhtm" [a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88] Running
	I0930 20:02:28.592363   26315 system_pods.go:89] "kube-apiserver-ha-805293" [e975ca94-0069-4dfc-bc42-fa14fff226d5] Running
	I0930 20:02:28.592368   26315 system_pods.go:89] "kube-apiserver-ha-805293-m02" [c0f6d06d-f2d3-4796-ba43-16db58da16f7] Running
	I0930 20:02:28.592374   26315 system_pods.go:89] "kube-apiserver-ha-805293-m03" [6fb5a285-7f35-4eb2-b028-6bd9fcfd21fe] Running
	I0930 20:02:28.592381   26315 system_pods.go:89] "kube-controller-manager-ha-805293" [01616da3-61eb-494b-a55c-28acaa308938] Running
	I0930 20:02:28.592388   26315 system_pods.go:89] "kube-controller-manager-ha-805293-m02" [14e035c1-fd94-43ab-aa98-3f20108eba57] Running
	I0930 20:02:28.592397   26315 system_pods.go:89] "kube-controller-manager-ha-805293-m03" [35d67e4a-f434-49df-8fb9-c6fcc725d8ff] Running
	I0930 20:02:28.592404   26315 system_pods.go:89] "kube-proxy-6gnt4" [a90b0c3f-e9c3-4cb9-8773-8253bd72ab51] Running
	I0930 20:02:28.592410   26315 system_pods.go:89] "kube-proxy-b9cpp" [c828ff6a-6cbb-4a29-84bc-118522687da8] Running
	I0930 20:02:28.592416   26315 system_pods.go:89] "kube-proxy-vptrg" [324c92ea-b82f-4efa-b63c-4c590bbf214d] Running
	I0930 20:02:28.592422   26315 system_pods.go:89] "kube-scheduler-ha-805293" [fbff9dea-1599-43ab-bb92-df8c5231bb87] Running
	I0930 20:02:28.592430   26315 system_pods.go:89] "kube-scheduler-ha-805293-m02" [9e69f915-83ac-48de-9bd6-3d245a2e82be] Running
	I0930 20:02:28.592436   26315 system_pods.go:89] "kube-scheduler-ha-805293-m03" [34e2edf8-ca25-4a7c-a626-ac037b40b905] Running
	I0930 20:02:28.592442   26315 system_pods.go:89] "kube-vip-ha-805293" [9c629f9e-1b42-4680-9fd8-2dae4cec07f8] Running
	I0930 20:02:28.592450   26315 system_pods.go:89] "kube-vip-ha-805293-m02" [ec99538b-4f84-4078-b64d-23086cbf2c45] Running
	I0930 20:02:28.592455   26315 system_pods.go:89] "kube-vip-ha-805293-m03" [fcc5a165-5430-45d3-8ec7-fbdf5adc7e20] Running
	I0930 20:02:28.592461   26315 system_pods.go:89] "storage-provisioner" [1912fdf8-d789-4ba9-99ff-c87ccbf330ec] Running
	I0930 20:02:28.592472   26315 system_pods.go:126] duration metric: took 209.917591ms to wait for k8s-apps to be running ...
	I0930 20:02:28.592485   26315 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 20:02:28.592534   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 20:02:28.608637   26315 system_svc.go:56] duration metric: took 16.145321ms WaitForService to wait for kubelet
	I0930 20:02:28.608674   26315 kubeadm.go:582] duration metric: took 24.147753749s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 20:02:28.608696   26315 node_conditions.go:102] verifying NodePressure condition ...
	I0930 20:02:28.778132   26315 request.go:632] Waited for 169.34168ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes
	I0930 20:02:28.778186   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes
	I0930 20:02:28.778191   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:28.778198   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:28.778202   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:28.782435   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:28.783582   26315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:02:28.783605   26315 node_conditions.go:123] node cpu capacity is 2
	I0930 20:02:28.783617   26315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:02:28.783621   26315 node_conditions.go:123] node cpu capacity is 2
	I0930 20:02:28.783625   26315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:02:28.783628   26315 node_conditions.go:123] node cpu capacity is 2
	I0930 20:02:28.783633   26315 node_conditions.go:105] duration metric: took 174.931399ms to run NodePressure ...
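The NodePressure step lists all nodes and reads their capacity (2 CPUs and about 17GiB of ephemeral storage per VM here), implicitly confirming that no pressure conditions are set. A sketch of the same read with the pressure check made explicit; nothing here is minikube's own code:

    package nodecheck

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // ReportNodePressure prints capacity and flags any Memory/Disk/PID pressure.
    func ReportNodePressure(ctx context.Context, cs kubernetes.Interface) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
    		for _, c := range n.Status.Conditions {
    			switch c.Type {
    			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
    				if c.Status == corev1.ConditionTrue {
    					fmt.Printf("  pressure condition %s is True\n", c.Type)
    				}
    			}
    		}
    	}
    	return nil
    }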
	I0930 20:02:28.783649   26315 start.go:241] waiting for startup goroutines ...
	I0930 20:02:28.783678   26315 start.go:255] writing updated cluster config ...
	I0930 20:02:28.783989   26315 ssh_runner.go:195] Run: rm -f paused
	I0930 20:02:28.838018   26315 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 20:02:28.840509   26315 out.go:177] * Done! kubectl is now configured to use "ha-805293" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.719866004Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726768719842301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=41bb9d4e-5de5-411a-a7b6-a916363e5dcb name=/runtime.v1.ImageService/ImageFsInfo
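The CRI-O debug lines in this section are the server side of CRI gRPC calls (ImageFsInfo on the ImageService, ListContainers on the RuntimeService) that the kubelet, and tools such as crictl, issue over CRI-O's unix socket. A standalone sketch of issuing the same ListContainers call directly; the socket path and the cri-api v1 import path are assumptions about this host:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// Assumed CRI-O socket path; adjust for the host being inspected.
    	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	// Empty filter: the same "full container list" the log lines below report.
    	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range resp.Containers {
    		fmt.Println(c.Id, c.Metadata.GetName(), c.State)
    	}
    }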
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.720478424Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dac88b91-17e1-43b1-9506-d88f65982417 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.720574934Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dac88b91-17e1-43b1-9506-d88f65982417 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.720803898Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10ee59c77c769b646a6f94ef88076d89d99a5138229c27ab2ecd6eedc1ea0137,PodSandboxId:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727726553788768842,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b,PodSandboxId:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414310017018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d01ed71d852eed61bb80348ffe7fb51d168d95e1306c1563c1f48e5dbbf8f2c,PodSandboxId:2a39bd6449f5ae769d104fbeb8e59e2f8144520dfc21ce04f986400da9c5cf45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727726414272318094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c,PodSandboxId:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414250119749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-13
8e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa,PodSandboxId:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17277264
02286671649,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088,PodSandboxId:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727726402007379257,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8e1f537ce941dd5174a539d9c52bcdc043499fbf92875cdf6ed4fc819c4dbe,PodSandboxId:1fd2dbf5f5af033b5a3e52b79c474bc1a4f59060eca81c998f7ec1a08b0bd020,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727726392774120477,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ab114a2582827f884939bc3a1a2f15f,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463,PodSandboxId:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727726390313369486,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9fbbe2017dac31afa6b99397b35147479d921bd1c28368d0863e7deba96963,PodSandboxId:6fc84ff2f4f9e09491da5bb8f4fa755e40a60c0bec559ecff99973cd8d2fbbf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727726390327177630,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c,PodSandboxId:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727726390230461135,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994c927aa147aaacb19c3dc9b54178374731ce435295e01ceb9dbb1854a78f78,PodSandboxId:ec25e9867db7c44002a733caaf53a3e32f3ab4c28faa3767e1bca353d80692e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727726390173703617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dac88b91-17e1-43b1-9506-d88f65982417 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.764079081Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=53069822-6c65-4f13-8630-8bf09858022c name=/runtime.v1.RuntimeService/Version
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.764163192Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=53069822-6c65-4f13-8630-8bf09858022c name=/runtime.v1.RuntimeService/Version
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.766026422Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1824dde-30dd-47e4-ada2-55354384d232 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.766511252Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726768766488086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1824dde-30dd-47e4-ada2-55354384d232 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.767195009Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2e8d035-6635-4b50-82c0-46c856c09bc7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.767260429Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2e8d035-6635-4b50-82c0-46c856c09bc7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.767514588Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10ee59c77c769b646a6f94ef88076d89d99a5138229c27ab2ecd6eedc1ea0137,PodSandboxId:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727726553788768842,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b,PodSandboxId:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414310017018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d01ed71d852eed61bb80348ffe7fb51d168d95e1306c1563c1f48e5dbbf8f2c,PodSandboxId:2a39bd6449f5ae769d104fbeb8e59e2f8144520dfc21ce04f986400da9c5cf45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727726414272318094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c,PodSandboxId:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414250119749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-13
8e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa,PodSandboxId:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17277264
02286671649,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088,PodSandboxId:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727726402007379257,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8e1f537ce941dd5174a539d9c52bcdc043499fbf92875cdf6ed4fc819c4dbe,PodSandboxId:1fd2dbf5f5af033b5a3e52b79c474bc1a4f59060eca81c998f7ec1a08b0bd020,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727726392774120477,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ab114a2582827f884939bc3a1a2f15f,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463,PodSandboxId:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727726390313369486,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9fbbe2017dac31afa6b99397b35147479d921bd1c28368d0863e7deba96963,PodSandboxId:6fc84ff2f4f9e09491da5bb8f4fa755e40a60c0bec559ecff99973cd8d2fbbf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727726390327177630,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c,PodSandboxId:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727726390230461135,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994c927aa147aaacb19c3dc9b54178374731ce435295e01ceb9dbb1854a78f78,PodSandboxId:ec25e9867db7c44002a733caaf53a3e32f3ab4c28faa3767e1bca353d80692e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727726390173703617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2e8d035-6635-4b50-82c0-46c856c09bc7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.803578789Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2606b119-b191-43b4-9c91-5f350cc8bc25 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.803653347Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2606b119-b191-43b4-9c91-5f350cc8bc25 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.804798337Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64592d3c-689c-4251-8643-7ad555f9c640 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.805193698Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726768805173022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64592d3c-689c-4251-8643-7ad555f9c640 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.805784123Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f40abbc-6da9-4cb1-a771-9d7d9355e929 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.805868229Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f40abbc-6da9-4cb1-a771-9d7d9355e929 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.806081369Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10ee59c77c769b646a6f94ef88076d89d99a5138229c27ab2ecd6eedc1ea0137,PodSandboxId:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727726553788768842,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b,PodSandboxId:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414310017018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d01ed71d852eed61bb80348ffe7fb51d168d95e1306c1563c1f48e5dbbf8f2c,PodSandboxId:2a39bd6449f5ae769d104fbeb8e59e2f8144520dfc21ce04f986400da9c5cf45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727726414272318094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c,PodSandboxId:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414250119749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-13
8e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa,PodSandboxId:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17277264
02286671649,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088,PodSandboxId:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727726402007379257,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8e1f537ce941dd5174a539d9c52bcdc043499fbf92875cdf6ed4fc819c4dbe,PodSandboxId:1fd2dbf5f5af033b5a3e52b79c474bc1a4f59060eca81c998f7ec1a08b0bd020,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727726392774120477,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ab114a2582827f884939bc3a1a2f15f,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463,PodSandboxId:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727726390313369486,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9fbbe2017dac31afa6b99397b35147479d921bd1c28368d0863e7deba96963,PodSandboxId:6fc84ff2f4f9e09491da5bb8f4fa755e40a60c0bec559ecff99973cd8d2fbbf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727726390327177630,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c,PodSandboxId:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727726390230461135,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994c927aa147aaacb19c3dc9b54178374731ce435295e01ceb9dbb1854a78f78,PodSandboxId:ec25e9867db7c44002a733caaf53a3e32f3ab4c28faa3767e1bca353d80692e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727726390173703617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f40abbc-6da9-4cb1-a771-9d7d9355e929 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.850386313Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=725689f7-fe30-436e-b3d1-11682c9cf2fa name=/runtime.v1.RuntimeService/Version
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.850458322Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=725689f7-fe30-436e-b3d1-11682c9cf2fa name=/runtime.v1.RuntimeService/Version
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.851416547Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0864e725-6683-441d-a6ca-fc67088bfa22 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.851811304Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726768851791539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0864e725-6683-441d-a6ca-fc67088bfa22 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.852561279Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07a80c57-c8fb-46c7-8550-80eb6ac4b8b4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.852712859Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07a80c57-c8fb-46c7-8550-80eb6ac4b8b4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:08 ha-805293 crio[655]: time="2024-09-30 20:06:08.853171896Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10ee59c77c769b646a6f94ef88076d89d99a5138229c27ab2ecd6eedc1ea0137,PodSandboxId:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727726553788768842,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b,PodSandboxId:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414310017018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d01ed71d852eed61bb80348ffe7fb51d168d95e1306c1563c1f48e5dbbf8f2c,PodSandboxId:2a39bd6449f5ae769d104fbeb8e59e2f8144520dfc21ce04f986400da9c5cf45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727726414272318094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c,PodSandboxId:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414250119749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-13
8e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa,PodSandboxId:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17277264
02286671649,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088,PodSandboxId:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727726402007379257,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8e1f537ce941dd5174a539d9c52bcdc043499fbf92875cdf6ed4fc819c4dbe,PodSandboxId:1fd2dbf5f5af033b5a3e52b79c474bc1a4f59060eca81c998f7ec1a08b0bd020,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727726392774120477,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ab114a2582827f884939bc3a1a2f15f,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463,PodSandboxId:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727726390313369486,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9fbbe2017dac31afa6b99397b35147479d921bd1c28368d0863e7deba96963,PodSandboxId:6fc84ff2f4f9e09491da5bb8f4fa755e40a60c0bec559ecff99973cd8d2fbbf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727726390327177630,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c,PodSandboxId:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727726390230461135,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994c927aa147aaacb19c3dc9b54178374731ce435295e01ceb9dbb1854a78f78,PodSandboxId:ec25e9867db7c44002a733caaf53a3e32f3ab4c28faa3767e1bca353d80692e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727726390173703617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07a80c57-c8fb-46c7-8550-80eb6ac4b8b4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	10ee59c77c769       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   a8d4349f6e0b0       busybox-7dff88458-r27jf
	8c540e4668f99       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   f95d30afc0491       coredns-7c65d6cfc9-x7zjp
	6d01ed71d852e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   2a39bd6449f5a       storage-provisioner
	beba42a2bf035       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   626fdaeb1b142       coredns-7c65d6cfc9-z4bkv
	e28b6781ed449       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   36a3293339cae       kindnet-slhtm
	cd73b6dc43348       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   27a0913ae182a       kube-proxy-6gnt4
	5e8e1f537ce94       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   1fd2dbf5f5af0       kube-vip-ha-805293
	0e9fbbe2017da       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   6fc84ff2f4f9e       kube-controller-manager-ha-805293
	9b8d5baa6998a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   73733467afdd9       kube-scheduler-ha-805293
	219dff1c43cd4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   bff718c807eb7       etcd-ha-805293
	994c927aa147a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   ec25e9867db7c       kube-apiserver-ha-805293
	
	
	==> coredns [8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b] <==
	[INFO] 10.244.0.4:54656 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002122445s
	[INFO] 10.244.1.2:43325 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000298961s
	[INFO] 10.244.1.2:50368 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000261008s
	[INFO] 10.244.1.2:34858 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000270623s
	[INFO] 10.244.1.2:59975 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000192447s
	[INFO] 10.244.2.2:37486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233576s
	[INFO] 10.244.2.2:40647 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002177996s
	[INFO] 10.244.2.2:39989 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000196915s
	[INFO] 10.244.2.2:42105 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001612348s
	[INFO] 10.244.2.2:42498 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180331s
	[INFO] 10.244.2.2:34873 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000262642s
	[INFO] 10.244.0.4:55282 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002337707s
	[INFO] 10.244.0.4:52721 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082276s
	[INFO] 10.244.0.4:33773 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001975703s
	[INFO] 10.244.0.4:44087 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095899s
	[INFO] 10.244.1.2:44456 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189431s
	[INFO] 10.244.1.2:52532 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112979s
	[INFO] 10.244.1.2:39707 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095712s
	[INFO] 10.244.2.2:42900 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101241s
	[INFO] 10.244.0.4:56608 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134276s
	[INFO] 10.244.1.2:35939 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00031266s
	[INFO] 10.244.1.2:48131 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000196792s
	[INFO] 10.244.2.2:40732 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000154649s
	[INFO] 10.244.0.4:51180 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000206094s
	[INFO] 10.244.0.4:36921 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000118718s
	
	
	==> coredns [beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c] <==
	[INFO] 10.244.0.4:43879 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000219235s
	[INFO] 10.244.1.2:54557 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005324153s
	[INFO] 10.244.1.2:59221 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00021778s
	[INFO] 10.244.1.2:56069 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0044481s
	[INFO] 10.244.1.2:50386 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00023413s
	[INFO] 10.244.2.2:46506 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103313s
	[INFO] 10.244.2.2:41909 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000177677s
	[INFO] 10.244.0.4:57981 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180642s
	[INFO] 10.244.0.4:42071 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100781s
	[INFO] 10.244.0.4:53066 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079995s
	[INFO] 10.244.0.4:54192 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095317s
	[INFO] 10.244.1.2:42705 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147435s
	[INFO] 10.244.2.2:42448 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014108s
	[INFO] 10.244.2.2:58687 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152745s
	[INFO] 10.244.2.2:59433 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159734s
	[INFO] 10.244.0.4:34822 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086009s
	[INFO] 10.244.0.4:46188 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067594s
	[INFO] 10.244.0.4:33829 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130532s
	[INFO] 10.244.1.2:56575 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000557946s
	[INFO] 10.244.1.2:41726 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145733s
	[INFO] 10.244.2.2:56116 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108892s
	[INFO] 10.244.2.2:58958 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000075413s
	[INFO] 10.244.2.2:42001 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077659s
	[INFO] 10.244.0.4:53905 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091303s
	[INFO] 10.244.0.4:41906 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000098967s
	
	
	==> describe nodes <==
	Name:               ha-805293
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-805293
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=ha-805293
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T19_59_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 19:59:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-805293
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:06:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:03:01 +0000   Mon, 30 Sep 2024 19:59:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:03:01 +0000   Mon, 30 Sep 2024 19:59:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:03:01 +0000   Mon, 30 Sep 2024 19:59:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:03:01 +0000   Mon, 30 Sep 2024 20:00:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    ha-805293
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 866f17ca2f8945bb8c8d7336ea64bab7
	  System UUID:                866f17ca-2f89-45bb-8c8d-7336ea64bab7
	  Boot ID:                    688ba3e5-bec7-403a-8a14-d517107abdf5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-r27jf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 coredns-7c65d6cfc9-x7zjp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m9s
	  kube-system                 coredns-7c65d6cfc9-z4bkv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m9s
	  kube-system                 etcd-ha-805293                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m13s
	  kube-system                 kindnet-slhtm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m9s
	  kube-system                 kube-apiserver-ha-805293             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-controller-manager-ha-805293    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-proxy-6gnt4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-scheduler-ha-805293             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-vip-ha-805293                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m6s   kube-proxy       
	  Normal  Starting                 6m13s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m13s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m13s  kubelet          Node ha-805293 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m13s  kubelet          Node ha-805293 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m13s  kubelet          Node ha-805293 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m9s   node-controller  Node ha-805293 event: Registered Node ha-805293 in Controller
	  Normal  NodeReady                5m56s  kubelet          Node ha-805293 status is now: NodeReady
	  Normal  RegisteredNode           5m13s  node-controller  Node ha-805293 event: Registered Node ha-805293 in Controller
	  Normal  RegisteredNode           3m59s  node-controller  Node ha-805293 event: Registered Node ha-805293 in Controller
	
	
	Name:               ha-805293-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-805293-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=ha-805293
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T20_00_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:00:48 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-805293-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:03:41 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 30 Sep 2024 20:02:51 +0000   Mon, 30 Sep 2024 20:04:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 30 Sep 2024 20:02:51 +0000   Mon, 30 Sep 2024 20:04:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 30 Sep 2024 20:02:51 +0000   Mon, 30 Sep 2024 20:04:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 30 Sep 2024 20:02:51 +0000   Mon, 30 Sep 2024 20:04:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    ha-805293-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d0700264de549a1be3f1020308847ab
	  System UUID:                4d070026-4de5-49a1-be3f-1020308847ab
	  Boot ID:                    6a7fa1c9-5f0b-4080-a967-4e6a9eb2c122
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lshpm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 etcd-ha-805293-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m19s
	  kube-system                 kindnet-lfldt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m21s
	  kube-system                 kube-apiserver-ha-805293-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-controller-manager-ha-805293-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-proxy-vptrg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-scheduler-ha-805293-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-vip-ha-805293-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m16s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m21s (x8 over 5m22s)  kubelet          Node ha-805293-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m21s (x8 over 5m22s)  kubelet          Node ha-805293-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m21s (x7 over 5m22s)  kubelet          Node ha-805293-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m19s                  node-controller  Node ha-805293-m02 event: Registered Node ha-805293-m02 in Controller
	  Normal  RegisteredNode           5m13s                  node-controller  Node ha-805293-m02 event: Registered Node ha-805293-m02 in Controller
	  Normal  RegisteredNode           3m59s                  node-controller  Node ha-805293-m02 event: Registered Node ha-805293-m02 in Controller
	  Normal  NodeNotReady             104s                   node-controller  Node ha-805293-m02 status is now: NodeNotReady
	
	
	Name:               ha-805293-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-805293-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=ha-805293
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T20_02_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:02:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-805293-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:06:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:03:02 +0000   Mon, 30 Sep 2024 20:02:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:03:02 +0000   Mon, 30 Sep 2024 20:02:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:03:02 +0000   Mon, 30 Sep 2024 20:02:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:03:02 +0000   Mon, 30 Sep 2024 20:02:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-805293-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d290a9661d284f5abbb0966111b1ff62
	  System UUID:                d290a966-1d28-4f5a-bbb0-966111b1ff62
	  Boot ID:                    4480564e-4012-421d-8e2a-ef45c5701e0e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nfncv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 etcd-ha-805293-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m6s
	  kube-system                 kindnet-qrhb8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m8s
	  kube-system                 kube-apiserver-ha-805293-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-controller-manager-ha-805293-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-proxy-b9cpp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-ha-805293-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-vip-ha-805293-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node ha-805293-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node ha-805293-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node ha-805293-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-805293-m03 event: Registered Node ha-805293-m03 in Controller
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-805293-m03 event: Registered Node ha-805293-m03 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-805293-m03 event: Registered Node ha-805293-m03 in Controller
	
	
	Name:               ha-805293-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-805293-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=ha-805293
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T20_03_07_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:03:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-805293-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:06:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:03:37 +0000   Mon, 30 Sep 2024 20:03:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:03:37 +0000   Mon, 30 Sep 2024 20:03:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:03:37 +0000   Mon, 30 Sep 2024 20:03:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:03:37 +0000   Mon, 30 Sep 2024 20:03:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    ha-805293-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 66e464978dbd400d9e13327c67f50978
	  System UUID:                66e46497-8dbd-400d-9e13-327c67f50978
	  Boot ID:                    e58b57f2-9a1b-47d7-b35d-6de7e20bd5ad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pk4z9       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m2s
	  kube-system                 kube-proxy-7hn94    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m3s)  kubelet          Node ha-805293-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m3s)  kubelet          Node ha-805293-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m3s)  kubelet          Node ha-805293-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-805293-m04 event: Registered Node ha-805293-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-805293-m04 event: Registered Node ha-805293-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-805293-m04 event: Registered Node ha-805293-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-805293-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep30 19:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051498] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038050] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.756373] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.910183] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.882465] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.789974] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.062566] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063093] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.202518] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.124623] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.268552] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +3.977529] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +4.564932] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.062130] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.342874] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.088317] kauditd_printk_skb: 79 callbacks suppressed
	[Sep30 20:00] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.197664] kauditd_printk_skb: 38 callbacks suppressed
	[ +40.392588] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c] <==
	{"level":"warn","ts":"2024-09-30T20:06:09.042084Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:09.112936Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:09.123107Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:09.126947Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:09.135485Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:09.143431Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:09.144226Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:09.151157Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:09.154936Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:09.158464Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:09.168673Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:09.175248Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:09.181456Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:09.185674Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:09.189191Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:09.194942Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:09.200807Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:09.206475Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:09.213044Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:09.216618Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:09.220540Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:09.226489Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:09.233467Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:09.242859Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:09.300897Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:06:09 up 6 min,  0 users,  load average: 0.18, 0.24, 0.12
	Linux ha-805293 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa] <==
	I0930 20:05:33.352605       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	I0930 20:05:43.361332       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:05:43.361485       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	I0930 20:05:43.361629       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:05:43.361653       1 main.go:299] handling current node
	I0930 20:05:43.361675       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:05:43.361691       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:05:43.361783       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0930 20:05:43.361802       1 main.go:322] Node ha-805293-m03 has CIDR [10.244.2.0/24] 
	I0930 20:05:53.361412       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:05:53.361456       1 main.go:299] handling current node
	I0930 20:05:53.361477       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:05:53.361484       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:05:53.361668       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0930 20:05:53.361697       1 main.go:322] Node ha-805293-m03 has CIDR [10.244.2.0/24] 
	I0930 20:05:53.361813       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:05:53.361841       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	I0930 20:06:03.353152       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:06:03.353232       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:06:03.353604       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0930 20:06:03.353656       1 main.go:322] Node ha-805293-m03 has CIDR [10.244.2.0/24] 
	I0930 20:06:03.353788       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:06:03.353817       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	I0930 20:06:03.353915       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:06:03.353945       1 main.go:299] handling current node
	
	
	==> kube-apiserver [994c927aa147aaacb19c3dc9b54178374731ce435295e01ceb9dbb1854a78f78] <==
	I0930 19:59:55.232483       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0930 19:59:55.241927       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.3]
	I0930 19:59:55.242751       1 controller.go:615] quota admission added evaluator for: endpoints
	I0930 19:59:55.248161       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0930 19:59:56.585015       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0930 19:59:56.606454       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0930 19:59:56.717747       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0930 20:00:00.619178       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0930 20:00:00.866886       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0930 20:02:35.103260       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54756: use of closed network connection
	E0930 20:02:35.310204       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54774: use of closed network connection
	E0930 20:02:35.528451       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54798: use of closed network connection
	E0930 20:02:35.718056       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54824: use of closed network connection
	E0930 20:02:35.905602       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54834: use of closed network connection
	E0930 20:02:36.095718       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54846: use of closed network connection
	E0930 20:02:36.292842       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54870: use of closed network connection
	E0930 20:02:36.507445       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54880: use of closed network connection
	E0930 20:02:36.711017       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54890: use of closed network connection
	E0930 20:02:37.027891       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54906: use of closed network connection
	E0930 20:02:37.211934       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54928: use of closed network connection
	E0930 20:02:37.400557       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54946: use of closed network connection
	E0930 20:02:37.592034       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54964: use of closed network connection
	E0930 20:02:37.769244       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54968: use of closed network connection
	E0930 20:02:37.945689       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54986: use of closed network connection
	W0930 20:04:05.250494       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.227 192.168.39.3]
	
	
	==> kube-controller-manager [0e9fbbe2017dac31afa6b99397b35147479d921bd1c28368d0863e7deba96963] <==
	I0930 20:03:07.394951       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-805293-m04" podCIDRs=["10.244.3.0/24"]
	I0930 20:03:07.395481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:07.396749       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:07.436135       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:07.684943       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:08.073414       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:10.185795       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-805293-m04"
	I0930 20:03:10.251142       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:10.326069       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:10.383451       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:11.395780       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:11.488119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:17.639978       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:28.022240       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:28.023330       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-805293-m04"
	I0930 20:03:28.045054       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:30.206023       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:37.957274       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:04:25.230773       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-805293-m04"
	I0930 20:04:25.230955       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m02"
	I0930 20:04:25.255656       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m02"
	I0930 20:04:25.398159       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m02"
	I0930 20:04:25.408524       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="30.658854ms"
	I0930 20:04:25.408627       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.436µs"
	I0930 20:04:30.476044       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m02"
	
	
	==> kube-proxy [cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 20:00:02.260002       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 20:00:02.292313       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.3"]
	E0930 20:00:02.293761       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 20:00:02.331058       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 20:00:02.331111       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 20:00:02.331136       1 server_linux.go:169] "Using iptables Proxier"
	I0930 20:00:02.334264       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 20:00:02.334706       1 server.go:483] "Version info" version="v1.31.1"
	I0930 20:00:02.334732       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:00:02.338075       1 config.go:199] "Starting service config controller"
	I0930 20:00:02.338115       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 20:00:02.338141       1 config.go:105] "Starting endpoint slice config controller"
	I0930 20:00:02.338146       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 20:00:02.340129       1 config.go:328] "Starting node config controller"
	I0930 20:00:02.340159       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 20:00:02.438958       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 20:00:02.439119       1 shared_informer.go:320] Caches are synced for service config
	I0930 20:00:02.440633       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463] <==
	W0930 19:59:54.471920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0930 19:59:54.472044       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.522920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0930 19:59:54.524738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.525008       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 19:59:54.525097       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0930 19:59:54.570077       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0930 19:59:54.570416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.573175       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0930 19:59:54.573222       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.611352       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0930 19:59:54.611460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.614509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0930 19:59:54.614660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.659257       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0930 19:59:54.659351       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.769876       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0930 19:59:54.770087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0930 19:59:56.900381       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0930 20:02:01.539050       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-h6pvg\": pod kube-proxy-h6pvg is already assigned to node \"ha-805293-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-h6pvg" node="ha-805293-m03"
	E0930 20:02:01.539424       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9860392c-eca6-4200-9b6e-f0a6f51b523b(kube-system/kube-proxy-h6pvg) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-h6pvg"
	E0930 20:02:01.539482       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-h6pvg\": pod kube-proxy-h6pvg is already assigned to node \"ha-805293-m03\"" pod="kube-system/kube-proxy-h6pvg"
	I0930 20:02:01.539558       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-h6pvg" node="ha-805293-m03"
	E0930 20:02:29.833811       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lshpm\": pod busybox-7dff88458-lshpm is already assigned to node \"ha-805293-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-lshpm" node="ha-805293-m02"
	E0930 20:02:29.833910       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lshpm\": pod busybox-7dff88458-lshpm is already assigned to node \"ha-805293-m02\"" pod="default/busybox-7dff88458-lshpm"
	
	
	==> kubelet <==
	Sep 30 20:04:56 ha-805293 kubelet[1307]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 20:04:56 ha-805293 kubelet[1307]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 20:04:56 ha-805293 kubelet[1307]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 20:04:56 ha-805293 kubelet[1307]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 20:04:56 ha-805293 kubelet[1307]: E0930 20:04:56.831137    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726696830908263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:04:56 ha-805293 kubelet[1307]: E0930 20:04:56.831174    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726696830908263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:06 ha-805293 kubelet[1307]: E0930 20:05:06.833436    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726706832581949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:06 ha-805293 kubelet[1307]: E0930 20:05:06.834135    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726706832581949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:16 ha-805293 kubelet[1307]: E0930 20:05:16.840697    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726716835840638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:16 ha-805293 kubelet[1307]: E0930 20:05:16.841087    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726716835840638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:26 ha-805293 kubelet[1307]: E0930 20:05:26.843795    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726726842473695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:26 ha-805293 kubelet[1307]: E0930 20:05:26.843820    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726726842473695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:36 ha-805293 kubelet[1307]: E0930 20:05:36.846940    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726736846123824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:36 ha-805293 kubelet[1307]: E0930 20:05:36.847349    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726736846123824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:46 ha-805293 kubelet[1307]: E0930 20:05:46.849818    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726746849247125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:46 ha-805293 kubelet[1307]: E0930 20:05:46.850141    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726746849247125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:56 ha-805293 kubelet[1307]: E0930 20:05:56.740673    1307 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 20:05:56 ha-805293 kubelet[1307]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 20:05:56 ha-805293 kubelet[1307]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 20:05:56 ha-805293 kubelet[1307]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 20:05:56 ha-805293 kubelet[1307]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 20:05:56 ha-805293 kubelet[1307]: E0930 20:05:56.852143    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726756851671468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:56 ha-805293 kubelet[1307]: E0930 20:05:56.852175    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726756851671468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:06:06 ha-805293 kubelet[1307]: E0930 20:06:06.854020    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726766853679089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:06:06 ha-805293 kubelet[1307]: E0930 20:06:06.854344    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726766853679089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-805293 -n ha-805293
helpers_test.go:261: (dbg) Run:  kubectl --context ha-805293 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.55s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0930 20:06:12.795710   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.381181659s)
ha_test.go:415: expected profile "ha-805293" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-805293\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-805293\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-805293\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.3\",\"Port\":8443,\"Kubern
etesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.220\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.227\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.92\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"m
etallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":
262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-805293 -n ha-805293
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-805293 logs -n 25: (1.357421308s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-805293 cp ha-805293-m03:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3144947660/001/cp-test_ha-805293-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m03:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293:/home/docker/cp-test_ha-805293-m03_ha-805293.txt                       |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293 sudo cat                                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m03_ha-805293.txt                                 |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m03:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m02:/home/docker/cp-test_ha-805293-m03_ha-805293-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293-m02 sudo cat                                          | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m03_ha-805293-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m03:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04:/home/docker/cp-test_ha-805293-m03_ha-805293-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293-m04 sudo cat                                          | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m03_ha-805293-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-805293 cp testdata/cp-test.txt                                                | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3144947660/001/cp-test_ha-805293-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293:/home/docker/cp-test_ha-805293-m04_ha-805293.txt                       |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293 sudo cat                                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m04_ha-805293.txt                                 |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m02:/home/docker/cp-test_ha-805293-m04_ha-805293-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293-m02 sudo cat                                          | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m04_ha-805293-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03:/home/docker/cp-test_ha-805293-m04_ha-805293-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293-m03 sudo cat                                          | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m04_ha-805293-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-805293 node stop m02 -v=7                                                     | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 19:59:16
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 19:59:16.465113   26315 out.go:345] Setting OutFile to fd 1 ...
	I0930 19:59:16.465408   26315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 19:59:16.465418   26315 out.go:358] Setting ErrFile to fd 2...
	I0930 19:59:16.465423   26315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 19:59:16.465672   26315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 19:59:16.466270   26315 out.go:352] Setting JSON to false
	I0930 19:59:16.467246   26315 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2499,"bootTime":1727723857,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 19:59:16.467349   26315 start.go:139] virtualization: kvm guest
	I0930 19:59:16.469778   26315 out.go:177] * [ha-805293] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 19:59:16.471083   26315 notify.go:220] Checking for updates...
	I0930 19:59:16.471129   26315 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 19:59:16.472574   26315 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 19:59:16.474040   26315 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 19:59:16.475378   26315 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:59:16.476781   26315 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 19:59:16.478196   26315 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 19:59:16.479555   26315 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 19:59:16.514287   26315 out.go:177] * Using the kvm2 driver based on user configuration
	I0930 19:59:16.515592   26315 start.go:297] selected driver: kvm2
	I0930 19:59:16.515604   26315 start.go:901] validating driver "kvm2" against <nil>
	I0930 19:59:16.515615   26315 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 19:59:16.516299   26315 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 19:59:16.516372   26315 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 19:59:16.531012   26315 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 19:59:16.531063   26315 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 19:59:16.531292   26315 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 19:59:16.531318   26315 cni.go:84] Creating CNI manager for ""
	I0930 19:59:16.531357   26315 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0930 19:59:16.531370   26315 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0930 19:59:16.531430   26315 start.go:340] cluster config:
	{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0930 19:59:16.531545   26315 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 19:59:16.533673   26315 out.go:177] * Starting "ha-805293" primary control-plane node in "ha-805293" cluster
	I0930 19:59:16.534957   26315 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 19:59:16.535009   26315 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 19:59:16.535023   26315 cache.go:56] Caching tarball of preloaded images
	I0930 19:59:16.535111   26315 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 19:59:16.535121   26315 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 19:59:16.535489   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 19:59:16.535515   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json: {Name:mk695bb0575a50d6b6d53e3d2c18bb8666421806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:16.535704   26315 start.go:360] acquireMachinesLock for ha-805293: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 19:59:16.535734   26315 start.go:364] duration metric: took 15.84µs to acquireMachinesLock for "ha-805293"
	I0930 19:59:16.535751   26315 start.go:93] Provisioning new machine with config: &{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 19:59:16.535821   26315 start.go:125] createHost starting for "" (driver="kvm2")
	I0930 19:59:16.537498   26315 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 19:59:16.537633   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:59:16.537678   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:59:16.552377   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44379
	I0930 19:59:16.552824   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:59:16.553523   26315 main.go:141] libmachine: Using API Version  1
	I0930 19:59:16.553548   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:59:16.553949   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:59:16.554153   26315 main.go:141] libmachine: (ha-805293) Calling .GetMachineName
	I0930 19:59:16.554354   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:16.554484   26315 start.go:159] libmachine.API.Create for "ha-805293" (driver="kvm2")
	I0930 19:59:16.554517   26315 client.go:168] LocalClient.Create starting
	I0930 19:59:16.554565   26315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem
	I0930 19:59:16.554602   26315 main.go:141] libmachine: Decoding PEM data...
	I0930 19:59:16.554620   26315 main.go:141] libmachine: Parsing certificate...
	I0930 19:59:16.554688   26315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem
	I0930 19:59:16.554716   26315 main.go:141] libmachine: Decoding PEM data...
	I0930 19:59:16.554736   26315 main.go:141] libmachine: Parsing certificate...
	I0930 19:59:16.554758   26315 main.go:141] libmachine: Running pre-create checks...
	I0930 19:59:16.554770   26315 main.go:141] libmachine: (ha-805293) Calling .PreCreateCheck
	I0930 19:59:16.555128   26315 main.go:141] libmachine: (ha-805293) Calling .GetConfigRaw
	I0930 19:59:16.555744   26315 main.go:141] libmachine: Creating machine...
	I0930 19:59:16.555765   26315 main.go:141] libmachine: (ha-805293) Calling .Create
	I0930 19:59:16.555931   26315 main.go:141] libmachine: (ha-805293) Creating KVM machine...
	I0930 19:59:16.557277   26315 main.go:141] libmachine: (ha-805293) DBG | found existing default KVM network
	I0930 19:59:16.557963   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:16.557842   26338 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231e0}
	I0930 19:59:16.558012   26315 main.go:141] libmachine: (ha-805293) DBG | created network xml: 
	I0930 19:59:16.558024   26315 main.go:141] libmachine: (ha-805293) DBG | <network>
	I0930 19:59:16.558032   26315 main.go:141] libmachine: (ha-805293) DBG |   <name>mk-ha-805293</name>
	I0930 19:59:16.558037   26315 main.go:141] libmachine: (ha-805293) DBG |   <dns enable='no'/>
	I0930 19:59:16.558041   26315 main.go:141] libmachine: (ha-805293) DBG |   
	I0930 19:59:16.558052   26315 main.go:141] libmachine: (ha-805293) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0930 19:59:16.558057   26315 main.go:141] libmachine: (ha-805293) DBG |     <dhcp>
	I0930 19:59:16.558063   26315 main.go:141] libmachine: (ha-805293) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0930 19:59:16.558073   26315 main.go:141] libmachine: (ha-805293) DBG |     </dhcp>
	I0930 19:59:16.558087   26315 main.go:141] libmachine: (ha-805293) DBG |   </ip>
	I0930 19:59:16.558111   26315 main.go:141] libmachine: (ha-805293) DBG |   
	I0930 19:59:16.558145   26315 main.go:141] libmachine: (ha-805293) DBG | </network>
	I0930 19:59:16.558156   26315 main.go:141] libmachine: (ha-805293) DBG | 
	I0930 19:59:16.563671   26315 main.go:141] libmachine: (ha-805293) DBG | trying to create private KVM network mk-ha-805293 192.168.39.0/24...
	I0930 19:59:16.628841   26315 main.go:141] libmachine: (ha-805293) DBG | private KVM network mk-ha-805293 192.168.39.0/24 created
	I0930 19:59:16.628870   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:16.628827   26338 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:59:16.628892   26315 main.go:141] libmachine: (ha-805293) Setting up store path in /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293 ...
	I0930 19:59:16.628909   26315 main.go:141] libmachine: (ha-805293) Building disk image from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 19:59:16.629064   26315 main.go:141] libmachine: (ha-805293) Downloading /home/jenkins/minikube-integration/19736-7672/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 19:59:16.879937   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:16.879799   26338 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa...
	I0930 19:59:17.039302   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:17.039101   26338 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/ha-805293.rawdisk...
	I0930 19:59:17.039341   26315 main.go:141] libmachine: (ha-805293) DBG | Writing magic tar header
	I0930 19:59:17.039359   26315 main.go:141] libmachine: (ha-805293) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293 (perms=drwx------)
	I0930 19:59:17.039382   26315 main.go:141] libmachine: (ha-805293) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines (perms=drwxr-xr-x)
	I0930 19:59:17.039389   26315 main.go:141] libmachine: (ha-805293) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube (perms=drwxr-xr-x)
	I0930 19:59:17.039398   26315 main.go:141] libmachine: (ha-805293) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672 (perms=drwxrwxr-x)
	I0930 19:59:17.039404   26315 main.go:141] libmachine: (ha-805293) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 19:59:17.039415   26315 main.go:141] libmachine: (ha-805293) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 19:59:17.039420   26315 main.go:141] libmachine: (ha-805293) Creating domain...
	I0930 19:59:17.039450   26315 main.go:141] libmachine: (ha-805293) DBG | Writing SSH key tar header
	I0930 19:59:17.039468   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:17.039218   26338 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293 ...
	I0930 19:59:17.039478   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293
	I0930 19:59:17.039485   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines
	I0930 19:59:17.039546   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:59:17.039570   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672
	I0930 19:59:17.039613   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 19:59:17.039667   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home/jenkins
	I0930 19:59:17.039707   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home
	I0930 19:59:17.039720   26315 main.go:141] libmachine: (ha-805293) DBG | Skipping /home - not owner
	I0930 19:59:17.040595   26315 main.go:141] libmachine: (ha-805293) define libvirt domain using xml: 
	I0930 19:59:17.040607   26315 main.go:141] libmachine: (ha-805293) <domain type='kvm'>
	I0930 19:59:17.040612   26315 main.go:141] libmachine: (ha-805293)   <name>ha-805293</name>
	I0930 19:59:17.040617   26315 main.go:141] libmachine: (ha-805293)   <memory unit='MiB'>2200</memory>
	I0930 19:59:17.040621   26315 main.go:141] libmachine: (ha-805293)   <vcpu>2</vcpu>
	I0930 19:59:17.040625   26315 main.go:141] libmachine: (ha-805293)   <features>
	I0930 19:59:17.040630   26315 main.go:141] libmachine: (ha-805293)     <acpi/>
	I0930 19:59:17.040633   26315 main.go:141] libmachine: (ha-805293)     <apic/>
	I0930 19:59:17.040638   26315 main.go:141] libmachine: (ha-805293)     <pae/>
	I0930 19:59:17.040642   26315 main.go:141] libmachine: (ha-805293)     
	I0930 19:59:17.040649   26315 main.go:141] libmachine: (ha-805293)   </features>
	I0930 19:59:17.040654   26315 main.go:141] libmachine: (ha-805293)   <cpu mode='host-passthrough'>
	I0930 19:59:17.040661   26315 main.go:141] libmachine: (ha-805293)   
	I0930 19:59:17.040664   26315 main.go:141] libmachine: (ha-805293)   </cpu>
	I0930 19:59:17.040671   26315 main.go:141] libmachine: (ha-805293)   <os>
	I0930 19:59:17.040675   26315 main.go:141] libmachine: (ha-805293)     <type>hvm</type>
	I0930 19:59:17.040680   26315 main.go:141] libmachine: (ha-805293)     <boot dev='cdrom'/>
	I0930 19:59:17.040692   26315 main.go:141] libmachine: (ha-805293)     <boot dev='hd'/>
	I0930 19:59:17.040703   26315 main.go:141] libmachine: (ha-805293)     <bootmenu enable='no'/>
	I0930 19:59:17.040714   26315 main.go:141] libmachine: (ha-805293)   </os>
	I0930 19:59:17.040724   26315 main.go:141] libmachine: (ha-805293)   <devices>
	I0930 19:59:17.040732   26315 main.go:141] libmachine: (ha-805293)     <disk type='file' device='cdrom'>
	I0930 19:59:17.040739   26315 main.go:141] libmachine: (ha-805293)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/boot2docker.iso'/>
	I0930 19:59:17.040757   26315 main.go:141] libmachine: (ha-805293)       <target dev='hdc' bus='scsi'/>
	I0930 19:59:17.040766   26315 main.go:141] libmachine: (ha-805293)       <readonly/>
	I0930 19:59:17.040770   26315 main.go:141] libmachine: (ha-805293)     </disk>
	I0930 19:59:17.040776   26315 main.go:141] libmachine: (ha-805293)     <disk type='file' device='disk'>
	I0930 19:59:17.040783   26315 main.go:141] libmachine: (ha-805293)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 19:59:17.040791   26315 main.go:141] libmachine: (ha-805293)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/ha-805293.rawdisk'/>
	I0930 19:59:17.040797   26315 main.go:141] libmachine: (ha-805293)       <target dev='hda' bus='virtio'/>
	I0930 19:59:17.040802   26315 main.go:141] libmachine: (ha-805293)     </disk>
	I0930 19:59:17.040808   26315 main.go:141] libmachine: (ha-805293)     <interface type='network'>
	I0930 19:59:17.040814   26315 main.go:141] libmachine: (ha-805293)       <source network='mk-ha-805293'/>
	I0930 19:59:17.040822   26315 main.go:141] libmachine: (ha-805293)       <model type='virtio'/>
	I0930 19:59:17.040829   26315 main.go:141] libmachine: (ha-805293)     </interface>
	I0930 19:59:17.040833   26315 main.go:141] libmachine: (ha-805293)     <interface type='network'>
	I0930 19:59:17.040840   26315 main.go:141] libmachine: (ha-805293)       <source network='default'/>
	I0930 19:59:17.040844   26315 main.go:141] libmachine: (ha-805293)       <model type='virtio'/>
	I0930 19:59:17.040850   26315 main.go:141] libmachine: (ha-805293)     </interface>
	I0930 19:59:17.040855   26315 main.go:141] libmachine: (ha-805293)     <serial type='pty'>
	I0930 19:59:17.040860   26315 main.go:141] libmachine: (ha-805293)       <target port='0'/>
	I0930 19:59:17.040865   26315 main.go:141] libmachine: (ha-805293)     </serial>
	I0930 19:59:17.040871   26315 main.go:141] libmachine: (ha-805293)     <console type='pty'>
	I0930 19:59:17.040877   26315 main.go:141] libmachine: (ha-805293)       <target type='serial' port='0'/>
	I0930 19:59:17.040882   26315 main.go:141] libmachine: (ha-805293)     </console>
	I0930 19:59:17.040888   26315 main.go:141] libmachine: (ha-805293)     <rng model='virtio'>
	I0930 19:59:17.040894   26315 main.go:141] libmachine: (ha-805293)       <backend model='random'>/dev/random</backend>
	I0930 19:59:17.040901   26315 main.go:141] libmachine: (ha-805293)     </rng>
	I0930 19:59:17.040907   26315 main.go:141] libmachine: (ha-805293)     
	I0930 19:59:17.040917   26315 main.go:141] libmachine: (ha-805293)     
	I0930 19:59:17.040925   26315 main.go:141] libmachine: (ha-805293)   </devices>
	I0930 19:59:17.040928   26315 main.go:141] libmachine: (ha-805293) </domain>
	I0930 19:59:17.040937   26315 main.go:141] libmachine: (ha-805293) 
	I0930 19:59:17.045576   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:16:26:46 in network default
	I0930 19:59:17.046091   26315 main.go:141] libmachine: (ha-805293) Ensuring networks are active...
	I0930 19:59:17.046110   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:17.046918   26315 main.go:141] libmachine: (ha-805293) Ensuring network default is active
	I0930 19:59:17.047170   26315 main.go:141] libmachine: (ha-805293) Ensuring network mk-ha-805293 is active
	I0930 19:59:17.048069   26315 main.go:141] libmachine: (ha-805293) Getting domain xml...
	I0930 19:59:17.048925   26315 main.go:141] libmachine: (ha-805293) Creating domain...
	I0930 19:59:18.262935   26315 main.go:141] libmachine: (ha-805293) Waiting to get IP...
	I0930 19:59:18.263713   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:18.264097   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:18.264150   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:18.264077   26338 retry.go:31] will retry after 272.130038ms: waiting for machine to come up
	I0930 19:59:18.537624   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:18.538207   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:18.538236   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:18.538152   26338 retry.go:31] will retry after 384.976128ms: waiting for machine to come up
	I0930 19:59:18.924813   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:18.925224   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:18.925244   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:18.925193   26338 retry.go:31] will retry after 439.036671ms: waiting for machine to come up
	I0930 19:59:19.365792   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:19.366237   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:19.366268   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:19.366201   26338 retry.go:31] will retry after 523.251996ms: waiting for machine to come up
	I0930 19:59:19.890884   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:19.891377   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:19.891399   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:19.891276   26338 retry.go:31] will retry after 505.591634ms: waiting for machine to come up
	I0930 19:59:20.398064   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:20.398495   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:20.398518   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:20.398434   26338 retry.go:31] will retry after 840.243199ms: waiting for machine to come up
	I0930 19:59:21.240528   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:21.240974   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:21.241011   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:21.240928   26338 retry.go:31] will retry after 727.422374ms: waiting for machine to come up
	I0930 19:59:21.970399   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:21.970994   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:21.971027   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:21.970937   26338 retry.go:31] will retry after 1.250553906s: waiting for machine to come up
	I0930 19:59:23.223257   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:23.223588   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:23.223617   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:23.223524   26338 retry.go:31] will retry after 1.498180761s: waiting for machine to come up
	I0930 19:59:24.724089   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:24.724526   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:24.724547   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:24.724490   26338 retry.go:31] will retry after 1.710980244s: waiting for machine to come up
	I0930 19:59:26.437365   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:26.437733   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:26.437791   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:26.437707   26338 retry.go:31] will retry after 1.996131833s: waiting for machine to come up
	I0930 19:59:28.435394   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:28.435899   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:28.435920   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:28.435854   26338 retry.go:31] will retry after 2.313700889s: waiting for machine to come up
	I0930 19:59:30.752853   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:30.753113   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:30.753140   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:30.753096   26338 retry.go:31] will retry after 2.892875975s: waiting for machine to come up
	I0930 19:59:33.648697   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:33.649006   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:33.649067   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:33.648958   26338 retry.go:31] will retry after 4.162794884s: waiting for machine to come up
	I0930 19:59:37.813324   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:37.813940   26315 main.go:141] libmachine: (ha-805293) Found IP for machine: 192.168.39.3
	I0930 19:59:37.813967   26315 main.go:141] libmachine: (ha-805293) Reserving static IP address...
	I0930 19:59:37.813980   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has current primary IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:37.814363   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find host DHCP lease matching {name: "ha-805293", mac: "52:54:00:a8:b8:c7", ip: "192.168.39.3"} in network mk-ha-805293
	I0930 19:59:37.894677   26315 main.go:141] libmachine: (ha-805293) DBG | Getting to WaitForSSH function...
	I0930 19:59:37.894706   26315 main.go:141] libmachine: (ha-805293) Reserved static IP address: 192.168.39.3
	I0930 19:59:37.894719   26315 main.go:141] libmachine: (ha-805293) Waiting for SSH to be available...
	I0930 19:59:37.897595   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:37.897922   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:37.897956   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:37.898087   26315 main.go:141] libmachine: (ha-805293) DBG | Using SSH client type: external
	I0930 19:59:37.898106   26315 main.go:141] libmachine: (ha-805293) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa (-rw-------)
	I0930 19:59:37.898139   26315 main.go:141] libmachine: (ha-805293) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 19:59:37.898155   26315 main.go:141] libmachine: (ha-805293) DBG | About to run SSH command:
	I0930 19:59:37.898169   26315 main.go:141] libmachine: (ha-805293) DBG | exit 0
	I0930 19:59:38.031893   26315 main.go:141] libmachine: (ha-805293) DBG | SSH cmd err, output: <nil>: 
	I0930 19:59:38.032180   26315 main.go:141] libmachine: (ha-805293) KVM machine creation complete!
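The loop above is the KVM driver waiting for the DHCP lease of MAC 52:54:00:a8:b8:c7 to appear, retrying with a growing delay (272ms, 385ms, 439ms, ... up to ~4s) until an IP is found, then reserving it as a static lease. Below is a minimal, stand-alone Go sketch of that retry-with-backoff pattern; the function names, starting delay and growth factor are assumptions for illustration, not minikube's retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling check until it succeeds or the deadline
// passes, sleeping a little longer (with jitter) between attempts.
func retryWithBackoff(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond // assumed starting delay
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting: %w", err)
		}
		// add up to 50% jitter, then grow the base delay until it caps out
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay = delay * 3 / 2
		}
	}
}

func main() {
	attempts := 0
	// checkIP is a stand-in for "look up the current IP of the domain".
	checkIP := func() error {
		attempts++
		if attempts < 5 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	}
	if err := retryWithBackoff(checkIP, 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("machine is up")
}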
	I0930 19:59:38.032650   26315 main.go:141] libmachine: (ha-805293) Calling .GetConfigRaw
	I0930 19:59:38.033332   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:38.033535   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:38.033703   26315 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 19:59:38.033722   26315 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 19:59:38.035148   26315 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 19:59:38.035166   26315 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 19:59:38.035171   26315 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 19:59:38.035176   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.037430   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.037779   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.037807   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.037886   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:38.038058   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.038172   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.038292   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:38.038466   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 19:59:38.038732   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 19:59:38.038742   26315 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 19:59:38.150707   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
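The "exit 0" run above is the SSH liveness probe: provisioning only continues once that command returns cleanly over the new connection. A minimal sketch of the same probe using golang.org/x/crypto/ssh is shown below, reusing the address and key path from the log; the helper name and timeout are assumptions.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// probeSSH dials host:22 with the given private key and runs "exit 0",
// returning nil once the command succeeds.
func probeSSH(addr, user, keyPath string) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	err := probeSSH("192.168.39.3:22", "docker",
		"/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa")
	fmt.Println("ssh probe:", err)
}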
	I0930 19:59:38.150736   26315 main.go:141] libmachine: Detecting the provisioner...
	I0930 19:59:38.150744   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.153577   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.153985   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.154015   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.154165   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:38.154420   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.154616   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.154796   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:38.154961   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 19:59:38.155144   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 19:59:38.155155   26315 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 19:59:38.268071   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 19:59:38.268223   26315 main.go:141] libmachine: found compatible host: buildroot
	I0930 19:59:38.268235   26315 main.go:141] libmachine: Provisioning with buildroot...
	I0930 19:59:38.268248   26315 main.go:141] libmachine: (ha-805293) Calling .GetMachineName
	I0930 19:59:38.268485   26315 buildroot.go:166] provisioning hostname "ha-805293"
	I0930 19:59:38.268519   26315 main.go:141] libmachine: (ha-805293) Calling .GetMachineName
	I0930 19:59:38.268699   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.271029   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.271351   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.271376   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.271551   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:38.271727   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.271905   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.272048   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:38.272215   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 19:59:38.272420   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 19:59:38.272431   26315 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-805293 && echo "ha-805293" | sudo tee /etc/hostname
	I0930 19:59:38.397989   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-805293
	
	I0930 19:59:38.398019   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.401388   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.401792   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.401818   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.402043   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:38.402262   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.402446   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.402640   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:38.402835   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 19:59:38.403014   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 19:59:38.403030   26315 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-805293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-805293/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-805293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 19:59:38.523981   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 19:59:38.524025   26315 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 19:59:38.524082   26315 buildroot.go:174] setting up certificates
	I0930 19:59:38.524097   26315 provision.go:84] configureAuth start
	I0930 19:59:38.524111   26315 main.go:141] libmachine: (ha-805293) Calling .GetMachineName
	I0930 19:59:38.524383   26315 main.go:141] libmachine: (ha-805293) Calling .GetIP
	I0930 19:59:38.527277   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.527630   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.527658   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.527836   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.530619   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.530940   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.530964   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.531100   26315 provision.go:143] copyHostCerts
	I0930 19:59:38.531123   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 19:59:38.531167   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 19:59:38.531177   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 19:59:38.531239   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 19:59:38.531347   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 19:59:38.531367   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 19:59:38.531371   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 19:59:38.531397   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 19:59:38.531451   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 19:59:38.531467   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 19:59:38.531473   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 19:59:38.531511   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 19:59:38.531604   26315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.ha-805293 san=[127.0.0.1 192.168.39.3 ha-805293 localhost minikube]
	I0930 19:59:38.676763   26315 provision.go:177] copyRemoteCerts
	I0930 19:59:38.676824   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 19:59:38.676847   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.679571   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.680006   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.680032   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.680205   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:38.680392   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.680556   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:38.680720   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 19:59:38.765532   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 19:59:38.765609   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 19:59:38.789748   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 19:59:38.789818   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0930 19:59:38.811783   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 19:59:38.811868   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 19:59:38.834125   26315 provision.go:87] duration metric: took 310.01212ms to configureAuth
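configureAuth above copies the host CA material, generates a server certificate for 192.168.39.3, and pushes ca.pem, server.pem and server-key.pem to /etc/docker on the guest. A small sketch of copying one file over SSH with github.com/pkg/sftp follows; it only illustrates the pattern and is not minikube's ssh_runner, and the key path, target path and file mode are assumptions.

package main

import (
	"fmt"
	"io"
	"os"
	"time"

	"github.com/pkg/sftp"
	"golang.org/x/crypto/ssh"
)

// copyFile uploads a local file to remotePath over an established SSH client,
// roughly what the scp lines above do for the cert files.
func copyFile(client *ssh.Client, localPath, remotePath string, mode os.FileMode) error {
	sftpClient, err := sftp.NewClient(client)
	if err != nil {
		return err
	}
	defer sftpClient.Close()

	src, err := os.Open(localPath)
	if err != nil {
		return err
	}
	defer src.Close()

	dst, err := sftpClient.Create(remotePath)
	if err != nil {
		return err
	}
	defer dst.Close()

	if _, err := io.Copy(dst, src); err != nil {
		return err
	}
	return dst.Chmod(mode)
}

func main() {
	keyBytes, err := os.ReadFile(os.Getenv("HOME") + "/.ssh/id_rsa") // assumed key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.39.3:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	// Writing to /tmp here because /etc/docker would require root on the guest.
	fmt.Println(copyFile(client, "ca.pem", "/tmp/ca.pem", 0o644))
}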
	I0930 19:59:38.834160   26315 buildroot.go:189] setting minikube options for container-runtime
	I0930 19:59:38.834431   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 19:59:38.834524   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.837303   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.837631   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.837775   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.838052   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:38.838232   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.838399   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.838530   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:38.838676   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 19:59:38.838897   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 19:59:38.838918   26315 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 19:59:39.069352   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 19:59:39.069381   26315 main.go:141] libmachine: Checking connection to Docker...
	I0930 19:59:39.069395   26315 main.go:141] libmachine: (ha-805293) Calling .GetURL
	I0930 19:59:39.070641   26315 main.go:141] libmachine: (ha-805293) DBG | Using libvirt version 6000000
	I0930 19:59:39.073164   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.073482   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.073521   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.073664   26315 main.go:141] libmachine: Docker is up and running!
	I0930 19:59:39.073675   26315 main.go:141] libmachine: Reticulating splines...
	I0930 19:59:39.073688   26315 client.go:171] duration metric: took 22.519163927s to LocalClient.Create
	I0930 19:59:39.073710   26315 start.go:167] duration metric: took 22.519226404s to libmachine.API.Create "ha-805293"
	I0930 19:59:39.073725   26315 start.go:293] postStartSetup for "ha-805293" (driver="kvm2")
	I0930 19:59:39.073739   26315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 19:59:39.073759   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:39.073979   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 19:59:39.074068   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:39.076481   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.076820   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.076872   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.076969   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:39.077131   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:39.077256   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:39.077345   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 19:59:39.162144   26315 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 19:59:39.166524   26315 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 19:59:39.166551   26315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 19:59:39.166625   26315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 19:59:39.166691   26315 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 19:59:39.166701   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /etc/ssl/certs/148752.pem
	I0930 19:59:39.166826   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 19:59:39.175862   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 19:59:39.198495   26315 start.go:296] duration metric: took 124.748363ms for postStartSetup
	I0930 19:59:39.198552   26315 main.go:141] libmachine: (ha-805293) Calling .GetConfigRaw
	I0930 19:59:39.199175   26315 main.go:141] libmachine: (ha-805293) Calling .GetIP
	I0930 19:59:39.202045   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.202447   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.202472   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.202702   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 19:59:39.202915   26315 start.go:128] duration metric: took 22.667085053s to createHost
	I0930 19:59:39.202950   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:39.205157   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.205495   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.205516   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.205668   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:39.205846   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:39.205981   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:39.206111   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:39.206270   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 19:59:39.206542   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 19:59:39.206565   26315 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 19:59:39.320050   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727726379.295271539
	
	I0930 19:59:39.320076   26315 fix.go:216] guest clock: 1727726379.295271539
	I0930 19:59:39.320086   26315 fix.go:229] Guest: 2024-09-30 19:59:39.295271539 +0000 UTC Remote: 2024-09-30 19:59:39.202937168 +0000 UTC m=+22.774027114 (delta=92.334371ms)
	I0930 19:59:39.320118   26315 fix.go:200] guest clock delta is within tolerance: 92.334371ms
	I0930 19:59:39.320128   26315 start.go:83] releasing machines lock for "ha-805293", held for 22.784384982s
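The clock fix above parses the guest's date +%s.%N output, compares it against the host timestamp, and accepts the 92.334371ms delta as within tolerance. A stdlib Go sketch of that comparison follows, using the two values from the log; the one-second tolerance is an assumed value, not the one minikube uses.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the output of `date +%s.%N` and returns how far the
// guest clock is from the supplied host time.
func clockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostNow), nil
}

func main() {
	// Values taken from the log lines above.
	guestOut := "1727726379.295271539"
	host := time.Date(2024, 9, 30, 19, 59, 39, 202937168, time.UTC)

	delta, err := clockDelta(guestOut, host)
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed tolerance
	within := math.Abs(float64(delta)) <= float64(tolerance)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, within)
}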
	I0930 19:59:39.320156   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:39.320464   26315 main.go:141] libmachine: (ha-805293) Calling .GetIP
	I0930 19:59:39.323340   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.323749   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.323763   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.323980   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:39.324511   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:39.324710   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:39.324873   26315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 19:59:39.324922   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:39.324933   26315 ssh_runner.go:195] Run: cat /version.json
	I0930 19:59:39.324953   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:39.327479   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.327790   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.327833   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.327954   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.327975   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:39.328205   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:39.328371   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.328394   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.328435   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:39.328560   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:39.328620   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 19:59:39.328752   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:39.328910   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:39.329053   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 19:59:39.449869   26315 ssh_runner.go:195] Run: systemctl --version
	I0930 19:59:39.457140   26315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 19:59:39.620534   26315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 19:59:39.626812   26315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 19:59:39.626884   26315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 19:59:39.643150   26315 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 19:59:39.643182   26315 start.go:495] detecting cgroup driver to use...
	I0930 19:59:39.643259   26315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 19:59:39.659582   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 19:59:39.673481   26315 docker.go:217] disabling cri-docker service (if available) ...
	I0930 19:59:39.673546   26315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 19:59:39.687166   26315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 19:59:39.700766   26315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 19:59:39.817845   26315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 19:59:39.989160   26315 docker.go:233] disabling docker service ...
	I0930 19:59:39.989251   26315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 19:59:40.003138   26315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 19:59:40.016004   26315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 19:59:40.149065   26315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 19:59:40.264254   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 19:59:40.278167   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 19:59:40.296364   26315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 19:59:40.296421   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.306661   26315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 19:59:40.306731   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.317138   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.327466   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.337951   26315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 19:59:40.348585   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.358684   26315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.375315   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.385587   26315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 19:59:40.394996   26315 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 19:59:40.395092   26315 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 19:59:40.408121   26315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 19:59:40.417783   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 19:59:40.532464   26315 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 19:59:40.627203   26315 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 19:59:40.627277   26315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 19:59:40.632142   26315 start.go:563] Will wait 60s for crictl version
	I0930 19:59:40.632198   26315 ssh_runner.go:195] Run: which crictl
	I0930 19:59:40.635892   26315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 19:59:40.673372   26315 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 19:59:40.673453   26315 ssh_runner.go:195] Run: crio --version
	I0930 19:59:40.701810   26315 ssh_runner.go:195] Run: crio --version
	I0930 19:59:40.733603   26315 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 19:59:40.734810   26315 main.go:141] libmachine: (ha-805293) Calling .GetIP
	I0930 19:59:40.737789   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:40.738162   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:40.738188   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:40.738414   26315 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 19:59:40.742812   26315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 19:59:40.755762   26315 kubeadm.go:883] updating cluster {Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 19:59:40.755880   26315 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 19:59:40.755941   26315 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 19:59:40.795843   26315 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 19:59:40.795919   26315 ssh_runner.go:195] Run: which lz4
	I0930 19:59:40.799847   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0930 19:59:40.799948   26315 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 19:59:40.803954   26315 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 19:59:40.803978   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 19:59:42.086885   26315 crio.go:462] duration metric: took 1.286971524s to copy over tarball
	I0930 19:59:42.086956   26315 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 19:59:44.140911   26315 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.053919148s)
	I0930 19:59:44.140946   26315 crio.go:469] duration metric: took 2.054033393s to extract the tarball
	I0930 19:59:44.140956   26315 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 19:59:44.176934   26315 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 19:59:44.223432   26315 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 19:59:44.223453   26315 cache_images.go:84] Images are preloaded, skipping loading
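The preload step above checks the images known to crictl, copies the ~388MB preloaded-images tarball into the guest, and unpacks it with tar -I lz4 under /var before re-checking. A local Go sketch of reading such a .tar.lz4 archive follows; the github.com/pierrec/lz4/v4 dependency is an assumption made for illustration, since minikube simply shells out to tar here.

package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"

	"github.com/pierrec/lz4/v4"
)

// listTarLz4 prints the entries of an lz4-compressed tar archive, the same
// container format as preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4.
func listTarLz4(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	tr := tar.NewReader(lz4.NewReader(f))
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		fmt.Printf("%s\t%d bytes\n", hdr.Name, hdr.Size)
	}
}

func main() {
	if err := listTarLz4("preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}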
	I0930 19:59:44.223463   26315 kubeadm.go:934] updating node { 192.168.39.3 8443 v1.31.1 crio true true} ...
	I0930 19:59:44.223618   26315 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-805293 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 19:59:44.223687   26315 ssh_runner.go:195] Run: crio config
	I0930 19:59:44.267892   26315 cni.go:84] Creating CNI manager for ""
	I0930 19:59:44.267913   26315 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0930 19:59:44.267927   26315 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 19:59:44.267969   26315 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-805293 NodeName:ha-805293 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 19:59:44.268143   26315 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-805293"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 19:59:44.268174   26315 kube-vip.go:115] generating kube-vip config ...
	I0930 19:59:44.268226   26315 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 19:59:44.290057   26315 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 19:59:44.290186   26315 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0930 19:59:44.290252   26315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 19:59:44.300619   26315 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 19:59:44.300694   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0930 19:59:44.312702   26315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0930 19:59:44.329980   26315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 19:59:44.347106   26315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0930 19:59:44.363429   26315 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0930 19:59:44.379706   26315 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 19:59:44.383786   26315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 19:59:44.396392   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 19:59:44.511834   26315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 19:59:44.528890   26315 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293 for IP: 192.168.39.3
	I0930 19:59:44.528918   26315 certs.go:194] generating shared ca certs ...
	I0930 19:59:44.528990   26315 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:44.529203   26315 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 19:59:44.529261   26315 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 19:59:44.529273   26315 certs.go:256] generating profile certs ...
	I0930 19:59:44.529338   26315 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key
	I0930 19:59:44.529377   26315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.crt with IP's: []
	I0930 19:59:44.693203   26315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.crt ...
	I0930 19:59:44.693232   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.crt: {Name:mk4ee04dd06bd91d73f7f1298e33968b422b097c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:44.693403   26315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key ...
	I0930 19:59:44.693413   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key: {Name:mk2b8ad6c09983ddb0203e6dca1df4008d2fe717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:44.693487   26315 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.1b433d78
	I0930 19:59:44.693501   26315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.1b433d78 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.3 192.168.39.254]
	I0930 19:59:44.767682   26315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.1b433d78 ...
	I0930 19:59:44.767709   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.1b433d78: {Name:mkf1b16d36ab45268d051f89cfe928869656e760 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:44.767864   26315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.1b433d78 ...
	I0930 19:59:44.767875   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.1b433d78: {Name:mk53eca62135b4c1b261b7c937012d89f293e976 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:44.767944   26315 certs.go:381] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.1b433d78 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt
	I0930 19:59:44.768026   26315 certs.go:385] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.1b433d78 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key
	I0930 19:59:44.768082   26315 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key
	I0930 19:59:44.768096   26315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt with IP's: []
	I0930 19:59:45.223535   26315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt ...
	I0930 19:59:45.223567   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt: {Name:mke738cc3ccc573243158c6f5e5f022828f32c28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:45.223723   26315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key ...
	I0930 19:59:45.223733   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key: {Name:mkbfe8ac8fc7a409b1152c27d19ceb3cdc436834 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
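
The crypto.go lines above generate the profile's client, apiserver, and aggregator certificate/key pairs. A self-contained Go sketch of generating one such CA-signed certificate with the standard library follows; it is illustrative only (not minikube's crypto.go), the 2048-bit RSA key size and common names are assumptions, and the IP SANs are the ones from this run's apiserver cert:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// In-memory CA key and self-signed CA certificate (stands in for minikubeCA).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Leaf serving certificate signed by the CA, with the IP SANs seen in the
    	// log (service IP, localhost, 10.0.0.1, node IP, HA VIP).
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.3"), net.ParseIP("192.168.39.254"),
    		},
    	}
    	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

    	// Emit PEM; minikube writes the equivalent files under the profile directory.
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(leafKey)})
    }
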
	I0930 19:59:45.223814   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 19:59:45.223831   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 19:59:45.223844   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 19:59:45.223854   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 19:59:45.223865   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 19:59:45.223889   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 19:59:45.223908   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 19:59:45.223920   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 19:59:45.223964   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 19:59:45.224006   26315 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 19:59:45.224013   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 19:59:45.224036   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 19:59:45.224057   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 19:59:45.224083   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 19:59:45.224119   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 19:59:45.224143   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem -> /usr/share/ca-certificates/14875.pem
	I0930 19:59:45.224156   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /usr/share/ca-certificates/148752.pem
	I0930 19:59:45.224168   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:59:45.224809   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 19:59:45.251773   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 19:59:45.283221   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 19:59:45.307169   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 19:59:45.340795   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0930 19:59:45.364921   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 19:59:45.388786   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 19:59:45.412412   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 19:59:45.437530   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 19:59:45.462538   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 19:59:45.486247   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 19:59:45.510070   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 19:59:45.527040   26315 ssh_runner.go:195] Run: openssl version
	I0930 19:59:45.532953   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 19:59:45.544314   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 19:59:45.548732   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 19:59:45.548808   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 19:59:45.554737   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 19:59:45.565237   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 19:59:45.576275   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 19:59:45.580833   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 19:59:45.580899   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 19:59:45.586723   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 19:59:45.597151   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 19:59:45.607829   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:59:45.612479   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:59:45.612538   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:59:45.618560   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
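
The 19:59:45.5x lines install each CA into the guest's trust store: copy the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it (b5213941 is minikubeCA's hash in this run). A small Go sketch of that convention, shelling out to openssl just as the log does; the function name and example path are illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // trustCert links /etc/ssl/certs/<subject-hash>.0 to the given PEM so OpenSSL
    // (and anything built on it) picks the CA up by its hashed name.
    func trustCert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // replace a stale link if one exists
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
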
	I0930 19:59:45.629886   26315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 19:59:45.634469   26315 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 19:59:45.634548   26315 kubeadm.go:392] StartCluster: {Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 19:59:45.634646   26315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 19:59:45.634717   26315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 19:59:45.672608   26315 cri.go:89] found id: ""
	I0930 19:59:45.672680   26315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 19:59:45.682253   26315 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 19:59:45.695746   26315 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 19:59:45.707747   26315 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 19:59:45.707771   26315 kubeadm.go:157] found existing configuration files:
	
	I0930 19:59:45.707824   26315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 19:59:45.717218   26315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 19:59:45.717271   26315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 19:59:45.727134   26315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 19:59:45.736453   26315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 19:59:45.736514   26315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 19:59:45.746137   26315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 19:59:45.755226   26315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 19:59:45.755300   26315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 19:59:45.765188   26315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 19:59:45.774772   26315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 19:59:45.774830   26315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 19:59:45.784513   26315 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 19:59:45.891942   26315 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 19:59:45.891997   26315 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 19:59:45.998241   26315 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 19:59:45.998404   26315 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 19:59:45.998552   26315 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 19:59:46.014075   26315 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 19:59:46.112806   26315 out.go:235]   - Generating certificates and keys ...
	I0930 19:59:46.112955   26315 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 19:59:46.113026   26315 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 19:59:46.210951   26315 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0930 19:59:46.354582   26315 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0930 19:59:46.555785   26315 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0930 19:59:46.646311   26315 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0930 19:59:46.770735   26315 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0930 19:59:46.770873   26315 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-805293 localhost] and IPs [192.168.39.3 127.0.0.1 ::1]
	I0930 19:59:47.044600   26315 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0930 19:59:47.044796   26315 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-805293 localhost] and IPs [192.168.39.3 127.0.0.1 ::1]
	I0930 19:59:47.135575   26315 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0930 19:59:47.309550   26315 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0930 19:59:47.407346   26315 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0930 19:59:47.407491   26315 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 19:59:47.782301   26315 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 19:59:47.938840   26315 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 19:59:48.153368   26315 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 19:59:48.373848   26315 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 19:59:48.924719   26315 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 19:59:48.925435   26315 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 19:59:48.929527   26315 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 19:59:48.931731   26315 out.go:235]   - Booting up control plane ...
	I0930 19:59:48.931901   26315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 19:59:48.931984   26315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 19:59:48.932610   26315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 19:59:48.952672   26315 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 19:59:48.959981   26315 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 19:59:48.960193   26315 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 19:59:49.095726   26315 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 19:59:49.095850   26315 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 19:59:49.596721   26315 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.116798ms
	I0930 19:59:49.596826   26315 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 19:59:55.702855   26315 kubeadm.go:310] [api-check] The API server is healthy after 6.110016436s
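
The kubelet-check and api-check phases above simply poll a /healthz endpoint until it answers 200 OK or a timeout (4m0s by default) expires; here the kubelet became healthy after ~501ms and the API server after ~6.1s. A minimal Go poller in the same spirit, assuming plain HTTP against the kubelet's local healthz port shown in the log:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthy polls url until it returns 200 OK or the deadline passes.
    func waitHealthy(url string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := http.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
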
	I0930 19:59:55.715163   26315 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 19:59:55.739975   26315 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 19:59:56.278812   26315 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 19:59:56.279051   26315 kubeadm.go:310] [mark-control-plane] Marking the node ha-805293 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 19:59:56.293005   26315 kubeadm.go:310] [bootstrap-token] Using token: p0s0d4.yc45k5nzuh1mipkz
	I0930 19:59:56.294535   26315 out.go:235]   - Configuring RBAC rules ...
	I0930 19:59:56.294681   26315 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 19:59:56.299474   26315 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 19:59:56.308838   26315 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 19:59:56.312908   26315 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 19:59:56.320143   26315 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 19:59:56.328834   26315 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 19:59:56.351618   26315 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 19:59:56.617778   26315 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 19:59:57.116458   26315 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 19:59:57.116486   26315 kubeadm.go:310] 
	I0930 19:59:57.116560   26315 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 19:59:57.116570   26315 kubeadm.go:310] 
	I0930 19:59:57.116674   26315 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 19:59:57.116685   26315 kubeadm.go:310] 
	I0930 19:59:57.116719   26315 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 19:59:57.116823   26315 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 19:59:57.116882   26315 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 19:59:57.116886   26315 kubeadm.go:310] 
	I0930 19:59:57.116955   26315 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 19:59:57.116980   26315 kubeadm.go:310] 
	I0930 19:59:57.117053   26315 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 19:59:57.117064   26315 kubeadm.go:310] 
	I0930 19:59:57.117137   26315 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 19:59:57.117202   26315 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 19:59:57.117263   26315 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 19:59:57.117268   26315 kubeadm.go:310] 
	I0930 19:59:57.117377   26315 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 19:59:57.117490   26315 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 19:59:57.117501   26315 kubeadm.go:310] 
	I0930 19:59:57.117607   26315 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token p0s0d4.yc45k5nzuh1mipkz \
	I0930 19:59:57.117749   26315 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a \
	I0930 19:59:57.117783   26315 kubeadm.go:310] 	--control-plane 
	I0930 19:59:57.117789   26315 kubeadm.go:310] 
	I0930 19:59:57.117912   26315 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 19:59:57.117922   26315 kubeadm.go:310] 
	I0930 19:59:57.117993   26315 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token p0s0d4.yc45k5nzuh1mipkz \
	I0930 19:59:57.118080   26315 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a 
	I0930 19:59:57.119219   26315 kubeadm.go:310] W0930 19:59:45.871969     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 19:59:57.119559   26315 kubeadm.go:310] W0930 19:59:45.872918     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 19:59:57.119653   26315 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
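
The join commands printed above carry --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA's Subject Public Key Info. A short Go sketch that recomputes the value from the CA certificate so a joining node can verify it; the path is the one used in this run, and the sketch is equivalent to the openssl pipeline documented for kubeadm:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// The discovery hash is SHA-256 over the CA's SubjectPublicKeyInfo DER.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }
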
	I0930 19:59:57.119676   26315 cni.go:84] Creating CNI manager for ""
	I0930 19:59:57.119684   26315 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0930 19:59:57.121508   26315 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0930 19:59:57.122778   26315 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0930 19:59:57.129018   26315 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0930 19:59:57.129033   26315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0930 19:59:57.148058   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0930 19:59:57.490355   26315 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 19:59:57.490415   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:59:57.490422   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-805293 minikube.k8s.io/updated_at=2024_09_30T19_59_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022 minikube.k8s.io/name=ha-805293 minikube.k8s.io/primary=true
	I0930 19:59:57.530433   26315 ops.go:34] apiserver oom_adj: -16
	I0930 19:59:57.632942   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:59:58.133232   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:59:58.633968   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:59:59.133876   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:59:59.633715   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 20:00:00.134062   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 20:00:00.633798   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 20:00:01.133378   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 20:00:01.219465   26315 kubeadm.go:1113] duration metric: took 3.729111543s to wait for elevateKubeSystemPrivileges
	I0930 20:00:01.219521   26315 kubeadm.go:394] duration metric: took 15.584976844s to StartCluster
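
The repeated `kubectl get sa default` runs between 19:59:57 and 20:00:01 are a retry loop: the default ServiceAccount must exist before the minikube-rbac cluster-admin binding for kube-system:default (the elevateKubeSystemPrivileges step) can take effect. A sketch of the same wait, driving kubectl via os/exec; the half-second interval matches the log cadence and the timeout is an assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA retries `kubectl get sa default` until it succeeds or times out.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
    		if err := cmd.Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
    	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
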
	I0930 20:00:01.219559   26315 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:00:01.219656   26315 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:00:01.220437   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:00:01.220719   26315 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:00:01.220739   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0930 20:00:01.220750   26315 start.go:241] waiting for startup goroutines ...
	I0930 20:00:01.220771   26315 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 20:00:01.220861   26315 addons.go:69] Setting storage-provisioner=true in profile "ha-805293"
	I0930 20:00:01.220890   26315 addons.go:234] Setting addon storage-provisioner=true in "ha-805293"
	I0930 20:00:01.220907   26315 addons.go:69] Setting default-storageclass=true in profile "ha-805293"
	I0930 20:00:01.220929   26315 host.go:66] Checking if "ha-805293" exists ...
	I0930 20:00:01.220943   26315 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-805293"
	I0930 20:00:01.220958   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:00:01.221373   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:01.221421   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:01.221455   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:01.221495   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:01.237192   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38991
	I0930 20:00:01.237232   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44093
	I0930 20:00:01.237724   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:01.237776   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:01.238255   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:01.238280   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:01.238371   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:01.238394   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:01.238662   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:01.238738   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:01.238902   26315 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 20:00:01.239184   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:01.239227   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:01.241145   26315 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:00:01.241484   26315 kapi.go:59] client config for ha-805293: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key", CAFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0930 20:00:01.242040   26315 cert_rotation.go:140] Starting client certificate rotation controller
	I0930 20:00:01.242321   26315 addons.go:234] Setting addon default-storageclass=true in "ha-805293"
	I0930 20:00:01.242364   26315 host.go:66] Checking if "ha-805293" exists ...
	I0930 20:00:01.242753   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:01.242800   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:01.255454   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34783
	I0930 20:00:01.255998   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:01.256626   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:01.256655   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:01.257008   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:01.257244   26315 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 20:00:01.258602   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38221
	I0930 20:00:01.259101   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:01.259492   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:00:01.259705   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:01.259732   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:01.260119   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:01.260656   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:01.260698   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:01.261796   26315 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 20:00:01.263230   26315 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 20:00:01.263251   26315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 20:00:01.263275   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:00:01.266511   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:01.266953   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:00:01.266979   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:01.267159   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:00:01.267342   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:00:01.267495   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:00:01.267640   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:00:01.276774   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42613
	I0930 20:00:01.277256   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:01.277779   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:01.277808   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:01.278167   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:01.278348   26315 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 20:00:01.279998   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:00:01.280191   26315 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 20:00:01.280204   26315 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 20:00:01.280218   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:00:01.282743   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:01.283181   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:00:01.283205   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:01.283377   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:00:01.283566   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:00:01.283719   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:00:01.283866   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:00:01.308679   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0930 20:00:01.431260   26315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 20:00:01.433924   26315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 20:00:01.558490   26315 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
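
The sed pipeline at 20:00:01.308 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-only gateway (192.168.39.1): it inserts a hosts plugin block immediately before the existing `forward . /etc/resolv.conf` line. A Go sketch of the same Corefile edit as a pure string transformation (the ConfigMap get/replace around it is left to kubectl, as in the log; the sample Corefile is illustrative):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord inserts a CoreDNS hosts block before the forward plugin so
    // host.minikube.internal resolves to the given host IP.
    func injectHostRecord(corefile, hostIP string) string {
    	block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
    	var out strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
    			out.WriteString(block)
    		}
    		out.WriteString(line)
    	}
    	return out.String()
    }

    func main() {
    	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf {\n       max_concurrent 1000\n    }\n    cache 30\n}\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
    }
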
	I0930 20:00:01.621587   26315 main.go:141] libmachine: Making call to close driver server
	I0930 20:00:01.621614   26315 main.go:141] libmachine: (ha-805293) Calling .Close
	I0930 20:00:01.621883   26315 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:00:01.621900   26315 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:00:01.621908   26315 main.go:141] libmachine: Making call to close driver server
	I0930 20:00:01.621931   26315 main.go:141] libmachine: (ha-805293) DBG | Closing plugin on server side
	I0930 20:00:01.621995   26315 main.go:141] libmachine: (ha-805293) Calling .Close
	I0930 20:00:01.622217   26315 main.go:141] libmachine: (ha-805293) DBG | Closing plugin on server side
	I0930 20:00:01.622234   26315 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:00:01.622247   26315 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:00:01.622328   26315 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0930 20:00:01.622377   26315 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0930 20:00:01.622485   26315 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0930 20:00:01.622496   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:01.622504   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:01.622508   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:01.630544   26315 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0930 20:00:01.631089   26315 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0930 20:00:01.631103   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:01.631110   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:01.631115   26315 round_trippers.go:473]     Content-Type: application/json
	I0930 20:00:01.631119   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:01.636731   26315 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 20:00:01.636889   26315 main.go:141] libmachine: Making call to close driver server
	I0930 20:00:01.636905   26315 main.go:141] libmachine: (ha-805293) Calling .Close
	I0930 20:00:01.637222   26315 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:00:01.637249   26315 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:00:01.637227   26315 main.go:141] libmachine: (ha-805293) DBG | Closing plugin on server side
	I0930 20:00:01.910454   26315 main.go:141] libmachine: Making call to close driver server
	I0930 20:00:01.910493   26315 main.go:141] libmachine: (ha-805293) Calling .Close
	I0930 20:00:01.910790   26315 main.go:141] libmachine: (ha-805293) DBG | Closing plugin on server side
	I0930 20:00:01.910900   26315 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:00:01.910916   26315 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:00:01.910928   26315 main.go:141] libmachine: Making call to close driver server
	I0930 20:00:01.910933   26315 main.go:141] libmachine: (ha-805293) Calling .Close
	I0930 20:00:01.911215   26315 main.go:141] libmachine: (ha-805293) DBG | Closing plugin on server side
	I0930 20:00:01.911245   26315 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:00:01.911255   26315 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:00:01.913341   26315 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0930 20:00:01.914640   26315 addons.go:510] duration metric: took 693.870653ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0930 20:00:01.914685   26315 start.go:246] waiting for cluster config update ...
	I0930 20:00:01.914700   26315 start.go:255] writing updated cluster config ...
	I0930 20:00:01.917528   26315 out.go:201] 
	I0930 20:00:01.919324   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:00:01.919441   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:00:01.921983   26315 out.go:177] * Starting "ha-805293-m02" control-plane node in "ha-805293" cluster
	I0930 20:00:01.923837   26315 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 20:00:01.923877   26315 cache.go:56] Caching tarball of preloaded images
	I0930 20:00:01.924007   26315 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 20:00:01.924027   26315 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 20:00:01.924140   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:00:01.924406   26315 start.go:360] acquireMachinesLock for ha-805293-m02: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 20:00:01.924476   26315 start.go:364] duration metric: took 42.723µs to acquireMachinesLock for "ha-805293-m02"
	I0930 20:00:01.924503   26315 start.go:93] Provisioning new machine with config: &{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:00:01.924602   26315 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0930 20:00:01.926254   26315 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 20:00:01.926373   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:01.926422   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:01.942099   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43055
	I0930 20:00:01.942642   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:01.943165   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:01.943189   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:01.943522   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:01.943810   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetMachineName
	I0930 20:00:01.943943   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:01.944136   26315 start.go:159] libmachine.API.Create for "ha-805293" (driver="kvm2")
	I0930 20:00:01.944171   26315 client.go:168] LocalClient.Create starting
	I0930 20:00:01.944215   26315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem
	I0930 20:00:01.944259   26315 main.go:141] libmachine: Decoding PEM data...
	I0930 20:00:01.944280   26315 main.go:141] libmachine: Parsing certificate...
	I0930 20:00:01.944361   26315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem
	I0930 20:00:01.944395   26315 main.go:141] libmachine: Decoding PEM data...
	I0930 20:00:01.944410   26315 main.go:141] libmachine: Parsing certificate...
	I0930 20:00:01.944433   26315 main.go:141] libmachine: Running pre-create checks...
	I0930 20:00:01.944443   26315 main.go:141] libmachine: (ha-805293-m02) Calling .PreCreateCheck
	I0930 20:00:01.944614   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetConfigRaw
	I0930 20:00:01.945016   26315 main.go:141] libmachine: Creating machine...
	I0930 20:00:01.945030   26315 main.go:141] libmachine: (ha-805293-m02) Calling .Create
	I0930 20:00:01.945196   26315 main.go:141] libmachine: (ha-805293-m02) Creating KVM machine...
	I0930 20:00:01.946629   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found existing default KVM network
	I0930 20:00:01.946731   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found existing private KVM network mk-ha-805293
	I0930 20:00:01.946865   26315 main.go:141] libmachine: (ha-805293-m02) Setting up store path in /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02 ...
	I0930 20:00:01.946894   26315 main.go:141] libmachine: (ha-805293-m02) Building disk image from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 20:00:01.946988   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:01.946872   26664 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:00:01.947079   26315 main.go:141] libmachine: (ha-805293-m02) Downloading /home/jenkins/minikube-integration/19736-7672/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 20:00:02.217368   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:02.217234   26664 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa...
	I0930 20:00:02.510082   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:02.509926   26664 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/ha-805293-m02.rawdisk...
	I0930 20:00:02.510127   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Writing magic tar header
	I0930 20:00:02.510145   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Writing SSH key tar header
	I0930 20:00:02.510158   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:02.510035   26664 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02 ...
	I0930 20:00:02.510175   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02
	I0930 20:00:02.510188   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines
	I0930 20:00:02.510199   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:00:02.510217   26315 main.go:141] libmachine: (ha-805293-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02 (perms=drwx------)
	I0930 20:00:02.510229   26315 main.go:141] libmachine: (ha-805293-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines (perms=drwxr-xr-x)
	I0930 20:00:02.510240   26315 main.go:141] libmachine: (ha-805293-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube (perms=drwxr-xr-x)
	I0930 20:00:02.510255   26315 main.go:141] libmachine: (ha-805293-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672 (perms=drwxrwxr-x)
	I0930 20:00:02.510266   26315 main.go:141] libmachine: (ha-805293-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 20:00:02.510281   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672
	I0930 20:00:02.510294   26315 main.go:141] libmachine: (ha-805293-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 20:00:02.510308   26315 main.go:141] libmachine: (ha-805293-m02) Creating domain...
	I0930 20:00:02.510328   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 20:00:02.510352   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home/jenkins
	I0930 20:00:02.510359   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home
	I0930 20:00:02.510364   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Skipping /home - not owner
	I0930 20:00:02.511282   26315 main.go:141] libmachine: (ha-805293-m02) define libvirt domain using xml: 
	I0930 20:00:02.511306   26315 main.go:141] libmachine: (ha-805293-m02) <domain type='kvm'>
	I0930 20:00:02.511317   26315 main.go:141] libmachine: (ha-805293-m02)   <name>ha-805293-m02</name>
	I0930 20:00:02.511328   26315 main.go:141] libmachine: (ha-805293-m02)   <memory unit='MiB'>2200</memory>
	I0930 20:00:02.511338   26315 main.go:141] libmachine: (ha-805293-m02)   <vcpu>2</vcpu>
	I0930 20:00:02.511348   26315 main.go:141] libmachine: (ha-805293-m02)   <features>
	I0930 20:00:02.511357   26315 main.go:141] libmachine: (ha-805293-m02)     <acpi/>
	I0930 20:00:02.511364   26315 main.go:141] libmachine: (ha-805293-m02)     <apic/>
	I0930 20:00:02.511371   26315 main.go:141] libmachine: (ha-805293-m02)     <pae/>
	I0930 20:00:02.511377   26315 main.go:141] libmachine: (ha-805293-m02)     
	I0930 20:00:02.511388   26315 main.go:141] libmachine: (ha-805293-m02)   </features>
	I0930 20:00:02.511395   26315 main.go:141] libmachine: (ha-805293-m02)   <cpu mode='host-passthrough'>
	I0930 20:00:02.511405   26315 main.go:141] libmachine: (ha-805293-m02)   
	I0930 20:00:02.511416   26315 main.go:141] libmachine: (ha-805293-m02)   </cpu>
	I0930 20:00:02.511444   26315 main.go:141] libmachine: (ha-805293-m02)   <os>
	I0930 20:00:02.511468   26315 main.go:141] libmachine: (ha-805293-m02)     <type>hvm</type>
	I0930 20:00:02.511481   26315 main.go:141] libmachine: (ha-805293-m02)     <boot dev='cdrom'/>
	I0930 20:00:02.511494   26315 main.go:141] libmachine: (ha-805293-m02)     <boot dev='hd'/>
	I0930 20:00:02.511505   26315 main.go:141] libmachine: (ha-805293-m02)     <bootmenu enable='no'/>
	I0930 20:00:02.511512   26315 main.go:141] libmachine: (ha-805293-m02)   </os>
	I0930 20:00:02.511517   26315 main.go:141] libmachine: (ha-805293-m02)   <devices>
	I0930 20:00:02.511535   26315 main.go:141] libmachine: (ha-805293-m02)     <disk type='file' device='cdrom'>
	I0930 20:00:02.511552   26315 main.go:141] libmachine: (ha-805293-m02)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/boot2docker.iso'/>
	I0930 20:00:02.511561   26315 main.go:141] libmachine: (ha-805293-m02)       <target dev='hdc' bus='scsi'/>
	I0930 20:00:02.511591   26315 main.go:141] libmachine: (ha-805293-m02)       <readonly/>
	I0930 20:00:02.511613   26315 main.go:141] libmachine: (ha-805293-m02)     </disk>
	I0930 20:00:02.511630   26315 main.go:141] libmachine: (ha-805293-m02)     <disk type='file' device='disk'>
	I0930 20:00:02.511644   26315 main.go:141] libmachine: (ha-805293-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 20:00:02.511661   26315 main.go:141] libmachine: (ha-805293-m02)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/ha-805293-m02.rawdisk'/>
	I0930 20:00:02.511673   26315 main.go:141] libmachine: (ha-805293-m02)       <target dev='hda' bus='virtio'/>
	I0930 20:00:02.511692   26315 main.go:141] libmachine: (ha-805293-m02)     </disk>
	I0930 20:00:02.511711   26315 main.go:141] libmachine: (ha-805293-m02)     <interface type='network'>
	I0930 20:00:02.511729   26315 main.go:141] libmachine: (ha-805293-m02)       <source network='mk-ha-805293'/>
	I0930 20:00:02.511746   26315 main.go:141] libmachine: (ha-805293-m02)       <model type='virtio'/>
	I0930 20:00:02.511758   26315 main.go:141] libmachine: (ha-805293-m02)     </interface>
	I0930 20:00:02.511769   26315 main.go:141] libmachine: (ha-805293-m02)     <interface type='network'>
	I0930 20:00:02.511784   26315 main.go:141] libmachine: (ha-805293-m02)       <source network='default'/>
	I0930 20:00:02.511795   26315 main.go:141] libmachine: (ha-805293-m02)       <model type='virtio'/>
	I0930 20:00:02.511824   26315 main.go:141] libmachine: (ha-805293-m02)     </interface>
	I0930 20:00:02.511843   26315 main.go:141] libmachine: (ha-805293-m02)     <serial type='pty'>
	I0930 20:00:02.511853   26315 main.go:141] libmachine: (ha-805293-m02)       <target port='0'/>
	I0930 20:00:02.511862   26315 main.go:141] libmachine: (ha-805293-m02)     </serial>
	I0930 20:00:02.511870   26315 main.go:141] libmachine: (ha-805293-m02)     <console type='pty'>
	I0930 20:00:02.511881   26315 main.go:141] libmachine: (ha-805293-m02)       <target type='serial' port='0'/>
	I0930 20:00:02.511892   26315 main.go:141] libmachine: (ha-805293-m02)     </console>
	I0930 20:00:02.511901   26315 main.go:141] libmachine: (ha-805293-m02)     <rng model='virtio'>
	I0930 20:00:02.511910   26315 main.go:141] libmachine: (ha-805293-m02)       <backend model='random'>/dev/random</backend>
	I0930 20:00:02.511924   26315 main.go:141] libmachine: (ha-805293-m02)     </rng>
	I0930 20:00:02.511933   26315 main.go:141] libmachine: (ha-805293-m02)     
	I0930 20:00:02.511939   26315 main.go:141] libmachine: (ha-805293-m02)     
	I0930 20:00:02.511949   26315 main.go:141] libmachine: (ha-805293-m02)   </devices>
	I0930 20:00:02.511958   26315 main.go:141] libmachine: (ha-805293-m02) </domain>
	I0930 20:00:02.511969   26315 main.go:141] libmachine: (ha-805293-m02) 
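The block above is the libvirt domain XML the kvm2 driver defines for the second control-plane VM (2200 MiB, 2 vCPUs, the boot2docker ISO attached as a cdrom, the raw disk, and two virtio NICs). As a purely illustrative sketch, assuming nothing about the driver's real types, rendering such a definition with Go's text/template could look like this:

package main

import (
	"os"
	"text/template"
)

// domainConfig holds the handful of values that vary between machines.
// The field names are illustrative; the real kvm2 driver uses its own types.
type domainConfig struct {
	Name     string
	MemMiB   int
	VCPU     int
	ISOPath  string
	DiskPath string
	Network  string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemMiB}}</memory>
  <vcpu>{{.VCPU}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	cfg := domainConfig{
		Name:     "ha-805293-m02",
		MemMiB:   2200,
		VCPU:     2,
		ISOPath:  "/path/to/boot2docker.iso",
		DiskPath: "/path/to/ha-805293-m02.rawdisk",
		Network:  "mk-ha-805293",
	}
	// Print the rendered XML; the driver instead hands it to libvirt
	// right after the "define libvirt domain using xml" line above.
	if err := template.Must(template.New("domain").Parse(domainTmpl)).Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}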
	I0930 20:00:02.519423   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:35:68:69 in network default
	I0930 20:00:02.520096   26315 main.go:141] libmachine: (ha-805293-m02) Ensuring networks are active...
	I0930 20:00:02.520113   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:02.521080   26315 main.go:141] libmachine: (ha-805293-m02) Ensuring network default is active
	I0930 20:00:02.521471   26315 main.go:141] libmachine: (ha-805293-m02) Ensuring network mk-ha-805293 is active
	I0930 20:00:02.521811   26315 main.go:141] libmachine: (ha-805293-m02) Getting domain xml...
	I0930 20:00:02.522473   26315 main.go:141] libmachine: (ha-805293-m02) Creating domain...
	I0930 20:00:03.765540   26315 main.go:141] libmachine: (ha-805293-m02) Waiting to get IP...
	I0930 20:00:03.766353   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:03.766729   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:03.766750   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:03.766699   26664 retry.go:31] will retry after 241.920356ms: waiting for machine to come up
	I0930 20:00:04.010129   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:04.010801   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:04.010826   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:04.010761   26664 retry.go:31] will retry after 344.430245ms: waiting for machine to come up
	I0930 20:00:04.356311   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:04.356795   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:04.356815   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:04.356767   26664 retry.go:31] will retry after 377.488147ms: waiting for machine to come up
	I0930 20:00:04.736359   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:04.736817   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:04.736839   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:04.736768   26664 retry.go:31] will retry after 400.421105ms: waiting for machine to come up
	I0930 20:00:05.138514   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:05.139019   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:05.139050   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:05.138967   26664 retry.go:31] will retry after 547.144087ms: waiting for machine to come up
	I0930 20:00:05.688116   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:05.688838   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:05.688865   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:05.688769   26664 retry.go:31] will retry after 610.482897ms: waiting for machine to come up
	I0930 20:00:06.301403   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:06.301917   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:06.301945   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:06.301866   26664 retry.go:31] will retry after 792.553977ms: waiting for machine to come up
	I0930 20:00:07.096834   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:07.097300   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:07.097331   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:07.097234   26664 retry.go:31] will retry after 1.20008256s: waiting for machine to come up
	I0930 20:00:08.299714   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:08.300169   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:08.300191   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:08.300137   26664 retry.go:31] will retry after 1.678792143s: waiting for machine to come up
	I0930 20:00:09.980216   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:09.980657   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:09.980685   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:09.980618   26664 retry.go:31] will retry after 2.098959289s: waiting for machine to come up
	I0930 20:00:12.080886   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:12.081433   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:12.081474   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:12.081377   26664 retry.go:31] will retry after 2.748866897s: waiting for machine to come up
	I0930 20:00:14.833188   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:14.833722   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:14.833748   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:14.833682   26664 retry.go:31] will retry after 2.379918836s: waiting for machine to come up
	I0930 20:00:17.215678   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:17.216060   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:17.216093   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:17.215999   26664 retry.go:31] will retry after 4.355514313s: waiting for machine to come up
	I0930 20:00:21.576523   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.577032   26315 main.go:141] libmachine: (ha-805293-m02) Found IP for machine: 192.168.39.220
	I0930 20:00:21.577053   26315 main.go:141] libmachine: (ha-805293-m02) Reserving static IP address...
	I0930 20:00:21.577065   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has current primary IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.577388   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find host DHCP lease matching {name: "ha-805293-m02", mac: "52:54:00:fe:f4:56", ip: "192.168.39.220"} in network mk-ha-805293
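Between "Waiting to get IP..." and "Found IP for machine", the driver repeatedly checks the network's DHCP leases for the VM's MAC address, backing off a little longer on every miss (the retry.go lines above). A minimal sketch of that pattern, where lookupLeaseIP is a hypothetical stand-in for the libvirt lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP is a stand-in for querying the network's DHCP leases for a
// MAC address; it is hypothetical and only exists to illustrate the retry.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP retries with a jittered, growing delay until the lookup succeeds
// or the deadline passes, mirroring the "will retry after ..." messages.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay each attempt
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	if _, err := waitForIP("52:54:00:fe:f4:56", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}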
	I0930 20:00:21.655408   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Getting to WaitForSSH function...
	I0930 20:00:21.655444   26315 main.go:141] libmachine: (ha-805293-m02) Reserved static IP address: 192.168.39.220
	I0930 20:00:21.655509   26315 main.go:141] libmachine: (ha-805293-m02) Waiting for SSH to be available...
	I0930 20:00:21.658005   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.658453   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:21.658491   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.658732   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Using SSH client type: external
	I0930 20:00:21.658759   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa (-rw-------)
	I0930 20:00:21.658792   26315 main.go:141] libmachine: (ha-805293-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 20:00:21.658808   26315 main.go:141] libmachine: (ha-805293-m02) DBG | About to run SSH command:
	I0930 20:00:21.658825   26315 main.go:141] libmachine: (ha-805293-m02) DBG | exit 0
	I0930 20:00:21.787681   26315 main.go:141] libmachine: (ha-805293-m02) DBG | SSH cmd err, output: <nil>: 
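WaitForSSH here shells out to the system ssh binary with the generated machine key and strict non-interactive options, and treats a clean `exit 0` as proof the guest is reachable. A small sketch of that probe with os/exec (options abridged from the command logged above; paths are placeholders):

package main

import (
	"fmt"
	"os/exec"
)

// sshReachable runs `ssh ... exit 0` against the VM and reports whether the
// command succeeded, which is how the driver decides SSH is available.
func sshReachable(ip, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit", "0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	fmt.Println(sshReachable("192.168.39.220", "/path/to/id_rsa"))
}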
	I0930 20:00:21.788011   26315 main.go:141] libmachine: (ha-805293-m02) KVM machine creation complete!
	I0930 20:00:21.788252   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetConfigRaw
	I0930 20:00:21.788786   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:21.788970   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:21.789203   26315 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 20:00:21.789220   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetState
	I0930 20:00:21.790562   26315 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 20:00:21.790578   26315 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 20:00:21.790584   26315 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 20:00:21.790592   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:21.792832   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.793247   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:21.793275   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.793444   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:21.793624   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:21.793794   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:21.793936   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:21.794099   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:00:21.794370   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0930 20:00:21.794384   26315 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 20:00:21.906923   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 20:00:21.906949   26315 main.go:141] libmachine: Detecting the provisioner...
	I0930 20:00:21.906961   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:21.910153   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.910565   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:21.910596   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.910764   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:21.910979   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:21.911241   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:21.911375   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:21.911534   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:00:21.911713   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0930 20:00:21.911726   26315 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 20:00:22.024080   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 20:00:22.024153   26315 main.go:141] libmachine: found compatible host: buildroot
	I0930 20:00:22.024160   26315 main.go:141] libmachine: Provisioning with buildroot...
	I0930 20:00:22.024170   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetMachineName
	I0930 20:00:22.024471   26315 buildroot.go:166] provisioning hostname "ha-805293-m02"
	I0930 20:00:22.024504   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetMachineName
	I0930 20:00:22.024708   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.027328   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.027816   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.027846   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.028043   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:22.028244   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.028415   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.028559   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:22.028711   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:00:22.028924   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0930 20:00:22.028951   26315 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-805293-m02 && echo "ha-805293-m02" | sudo tee /etc/hostname
	I0930 20:00:22.153517   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-805293-m02
	
	I0930 20:00:22.153558   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.156342   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.156867   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.156892   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.157066   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:22.157250   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.157398   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.157520   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:22.157658   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:00:22.157834   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0930 20:00:22.157856   26315 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-805293-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-805293-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-805293-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 20:00:22.280453   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 20:00:22.280490   26315 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 20:00:22.280513   26315 buildroot.go:174] setting up certificates
	I0930 20:00:22.280524   26315 provision.go:84] configureAuth start
	I0930 20:00:22.280537   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetMachineName
	I0930 20:00:22.280873   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetIP
	I0930 20:00:22.283731   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.284096   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.284121   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.284311   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.286698   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.287078   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.287108   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.287262   26315 provision.go:143] copyHostCerts
	I0930 20:00:22.287296   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:00:22.287337   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 20:00:22.287351   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:00:22.287424   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 20:00:22.287503   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:00:22.287521   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 20:00:22.287557   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:00:22.287594   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 20:00:22.287648   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:00:22.287664   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 20:00:22.287668   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:00:22.287689   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 20:00:22.287737   26315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.ha-805293-m02 san=[127.0.0.1 192.168.39.220 ha-805293-m02 localhost minikube]
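The server certificate is minted locally and signed by the profile CA, with the loopback address, the node IP and the hostnames from the log as subject alternative names. A compact sketch of the SAN handling with crypto/x509; it self-signs for brevity where minikube signs with its CA key, so treat it as an illustration only:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a throwaway key; the real flow loads the profile CA key pair.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-805293-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log: loopback, the node IP, and the hostnames.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.220")},
		DNSNames:    []string{"ha-805293-m02", "localhost", "minikube"},
	}
	// Self-signed here for brevity; minikube signs with its CA cert and key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}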
	I0930 20:00:22.355076   26315 provision.go:177] copyRemoteCerts
	I0930 20:00:22.355131   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 20:00:22.355153   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.357993   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.358290   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.358317   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.358695   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:22.358872   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.358992   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:22.359090   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa Username:docker}
	I0930 20:00:22.445399   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 20:00:22.445470   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 20:00:22.469429   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 20:00:22.469516   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 20:00:22.492675   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 20:00:22.492763   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 20:00:22.515601   26315 provision.go:87] duration metric: took 235.062596ms to configureAuth
	I0930 20:00:22.515633   26315 buildroot.go:189] setting minikube options for container-runtime
	I0930 20:00:22.515833   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:00:22.515926   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.518627   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.519062   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.519101   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.519248   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:22.519447   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.519617   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.519768   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:22.519918   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:00:22.520077   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0930 20:00:22.520090   26315 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 20:00:22.744066   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 20:00:22.744092   26315 main.go:141] libmachine: Checking connection to Docker...
	I0930 20:00:22.744101   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetURL
	I0930 20:00:22.745446   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Using libvirt version 6000000
	I0930 20:00:22.747635   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.748132   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.748161   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.748303   26315 main.go:141] libmachine: Docker is up and running!
	I0930 20:00:22.748319   26315 main.go:141] libmachine: Reticulating splines...
	I0930 20:00:22.748327   26315 client.go:171] duration metric: took 20.804148382s to LocalClient.Create
	I0930 20:00:22.748348   26315 start.go:167] duration metric: took 20.804213197s to libmachine.API.Create "ha-805293"
	I0930 20:00:22.748357   26315 start.go:293] postStartSetup for "ha-805293-m02" (driver="kvm2")
	I0930 20:00:22.748367   26315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 20:00:22.748386   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:22.748624   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 20:00:22.748654   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.750830   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.751166   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.751190   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.751299   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:22.751468   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.751612   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:22.751720   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa Username:docker}
	I0930 20:00:22.837496   26315 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 20:00:22.841510   26315 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 20:00:22.841546   26315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 20:00:22.841623   26315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 20:00:22.841717   26315 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 20:00:22.841730   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /etc/ssl/certs/148752.pem
	I0930 20:00:22.841843   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 20:00:22.851144   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:00:22.877058   26315 start.go:296] duration metric: took 128.687557ms for postStartSetup
	I0930 20:00:22.877104   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetConfigRaw
	I0930 20:00:22.877761   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetIP
	I0930 20:00:22.880570   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.880908   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.880931   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.881333   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:00:22.881547   26315 start.go:128] duration metric: took 20.956931205s to createHost
	I0930 20:00:22.881569   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.883882   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.884228   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.884246   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.884419   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:22.884601   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.884779   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.884913   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:22.885087   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:00:22.885252   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0930 20:00:22.885264   26315 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 20:00:23.000299   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727726422.960119850
	
	I0930 20:00:23.000326   26315 fix.go:216] guest clock: 1727726422.960119850
	I0930 20:00:23.000338   26315 fix.go:229] Guest: 2024-09-30 20:00:22.96011985 +0000 UTC Remote: 2024-09-30 20:00:22.881558413 +0000 UTC m=+66.452648359 (delta=78.561437ms)
	I0930 20:00:23.000357   26315 fix.go:200] guest clock delta is within tolerance: 78.561437ms
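The guest-clock check parses the `date +%s.%N` output from the VM, compares it with the host timestamp taken around the SSH call, and accepts the machine when the skew is small (78.5ms here). A rough sketch of that comparison; the 2s tolerance below is an assumed bound, not minikube's actual constant:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestClockDelta converts the `date +%s.%N` output from the guest into a
// time.Time and returns how far it is from the host clock.
func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(dateOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, nil
}

func main() {
	host := time.Unix(0, 1727726422881558413) // host timestamp from the log
	delta, err := guestClockDelta("1727726422.960119850", host)
	if err != nil {
		panic(err)
	}
	// A few hundred milliseconds of skew is fine; 2s is an assumed bound.
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < 2*time.Second)
}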
	I0930 20:00:23.000364   26315 start.go:83] releasing machines lock for "ha-805293-m02", held for 21.075876017s
	I0930 20:00:23.000382   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:23.000682   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetIP
	I0930 20:00:23.003439   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:23.003855   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:23.003882   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:23.006309   26315 out.go:177] * Found network options:
	I0930 20:00:23.008016   26315 out.go:177]   - NO_PROXY=192.168.39.3
	W0930 20:00:23.009484   26315 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 20:00:23.009519   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:23.010257   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:23.010450   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:23.010558   26315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 20:00:23.010606   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	W0930 20:00:23.010646   26315 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 20:00:23.010724   26315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 20:00:23.010747   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:23.013581   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:23.013752   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:23.013960   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:23.013983   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:23.014161   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:23.014186   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:23.014187   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:23.014404   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:23.014410   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:23.014563   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:23.014595   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:23.014659   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa Username:docker}
	I0930 20:00:23.014695   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:23.014791   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa Username:docker}
	I0930 20:00:23.259199   26315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 20:00:23.264710   26315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 20:00:23.264772   26315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 20:00:23.281650   26315 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 20:00:23.281678   26315 start.go:495] detecting cgroup driver to use...
	I0930 20:00:23.281745   26315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 20:00:23.300954   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 20:00:23.318197   26315 docker.go:217] disabling cri-docker service (if available) ...
	I0930 20:00:23.318266   26315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 20:00:23.334729   26315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 20:00:23.351325   26315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 20:00:23.494840   26315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 20:00:23.659365   26315 docker.go:233] disabling docker service ...
	I0930 20:00:23.659442   26315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 20:00:23.673200   26315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 20:00:23.686244   26315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 20:00:23.816616   26315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 20:00:23.949421   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 20:00:23.963035   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 20:00:23.981793   26315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 20:00:23.981869   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:23.992506   26315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 20:00:23.992572   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:24.003215   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:24.013791   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:24.024890   26315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 20:00:24.036504   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:24.046845   26315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:24.063744   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:24.074710   26315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 20:00:24.084399   26315 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 20:00:24.084456   26315 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 20:00:24.097779   26315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
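When `sysctl net.bridge.bridge-nf-call-iptables` fails because the bridge netfilter module is not loaded yet (the status 255 above), the flow falls back to `modprobe br_netfilter` and then enables IPv4 forwarding. A minimal sketch of that check-and-fallback, with runCmd as a hypothetical local stand-in for minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// runCmd is a hypothetical stand-in for minikube's ssh_runner: here it just
// executes the command locally through sh.
func runCmd(cmd string) error {
	return exec.Command("sh", "-c", cmd).Run()
}

// ensureBridgeNetfilter mirrors the log above: verify the sysctl exists,
// load br_netfilter if it does not, then enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := runCmd("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
		// Not fatal: the module may simply not be loaded yet.
		if err := runCmd("sudo modprobe br_netfilter"); err != nil {
			return fmt.Errorf("loading br_netfilter: %w", err)
		}
	}
	return runCmd(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println(err)
	}
}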
	I0930 20:00:24.107679   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:00:24.245414   26315 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 20:00:24.332691   26315 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 20:00:24.332763   26315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 20:00:24.337609   26315 start.go:563] Will wait 60s for crictl version
	I0930 20:00:24.337672   26315 ssh_runner.go:195] Run: which crictl
	I0930 20:00:24.341369   26315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 20:00:24.379294   26315 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
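After restarting CRI-O, the flow waits up to 60s for /var/run/crio/crio.sock to appear before asking crictl for the runtime version shown above. A tiny sketch of waiting on a socket path (a local stat loop standing in for the remote stat in the log):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}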
	I0930 20:00:24.379384   26315 ssh_runner.go:195] Run: crio --version
	I0930 20:00:24.407964   26315 ssh_runner.go:195] Run: crio --version
	I0930 20:00:24.438040   26315 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 20:00:24.439799   26315 out.go:177]   - env NO_PROXY=192.168.39.3
	I0930 20:00:24.441127   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetIP
	I0930 20:00:24.443641   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:24.443999   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:24.444023   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:24.444256   26315 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 20:00:24.448441   26315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
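The bash one-liner above strips any stale host.minikube.internal line from /etc/hosts and appends the gateway entry 192.168.39.1. The same idea as a small Go helper, purely illustrative since minikube runs the shell pipeline over SSH instead:

package main

import (
	"fmt"
	"strings"
)

// ensureHostEntry removes any existing line for the given hostname and
// appends a fresh "ip\thostname" entry, mirroring the shell pipeline above.
func ensureHostEntry(hostsFile, ip, hostname string) string {
	var kept []string
	for _, line := range strings.Split(hostsFile, "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return strings.Join(kept, "\n")
}

func main() {
	in := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal"
	fmt.Println(ensureHostEntry(in, "192.168.39.1", "host.minikube.internal"))
}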
	I0930 20:00:24.460479   26315 mustload.go:65] Loading cluster: ha-805293
	I0930 20:00:24.460673   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:00:24.460911   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:24.460946   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:24.475845   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41627
	I0930 20:00:24.476505   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:24.476991   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:24.477013   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:24.477336   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:24.477545   26315 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 20:00:24.479156   26315 host.go:66] Checking if "ha-805293" exists ...
	I0930 20:00:24.479566   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:24.479614   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:24.494163   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38137
	I0930 20:00:24.494690   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:24.495134   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:24.495156   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:24.495462   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:24.495672   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:00:24.495840   26315 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293 for IP: 192.168.39.220
	I0930 20:00:24.495854   26315 certs.go:194] generating shared ca certs ...
	I0930 20:00:24.495872   26315 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:00:24.495990   26315 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 20:00:24.496030   26315 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 20:00:24.496038   26315 certs.go:256] generating profile certs ...
	I0930 20:00:24.496099   26315 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key
	I0930 20:00:24.496121   26315 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.25883032
	I0930 20:00:24.496134   26315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.25883032 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.3 192.168.39.220 192.168.39.254]
	I0930 20:00:24.563341   26315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.25883032 ...
	I0930 20:00:24.563370   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.25883032: {Name:mk8534a0b1f65471035122400012ca9f075cb68b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:00:24.563553   26315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.25883032 ...
	I0930 20:00:24.563580   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.25883032: {Name:mkdff9b5cf02688bad7cef701430e9d45f427c09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:00:24.563669   26315 certs.go:381] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.25883032 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt
	I0930 20:00:24.563804   26315 certs.go:385] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.25883032 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key
	I0930 20:00:24.563922   26315 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key
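
The certs step above signs a fresh apiserver serving certificate whose IP SANs cover the cluster service IP (10.96.0.1), loopback, both node IPs and the control-plane VIP 192.168.39.254, which is what lets clients reach the API server by any of those addresses without TLS name errors. As a rough, self-contained sketch of that kind of issuance (not minikube's crypto.go; the file names ca.crt, ca.key and apiserver.crt and the RSA/PKCS#1 key format are assumptions):

// sancert.go: minimal sketch of signing a serving certificate with IP SANs from an
// existing CA, loosely mirroring the apiserver.crt step logged above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA certificate and key (comparable to minikubeCA).
	caPEM, err := os.ReadFile("ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca.key")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Fresh key pair for the serving certificate itself.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SANs seen in the log: service IP, loopback, node IPs and the HA VIP.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.3"),
			net.ParseIP("192.168.39.220"),
			net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	out, err := os.Create("apiserver.crt") // the matching key would be written alongside
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
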
	I0930 20:00:24.563935   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 20:00:24.563949   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 20:00:24.563961   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 20:00:24.563971   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 20:00:24.563981   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 20:00:24.563992   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 20:00:24.564001   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 20:00:24.564012   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 20:00:24.564058   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 20:00:24.564087   26315 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 20:00:24.564096   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 20:00:24.564116   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 20:00:24.564137   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 20:00:24.564157   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 20:00:24.564196   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:00:24.564221   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem -> /usr/share/ca-certificates/14875.pem
	I0930 20:00:24.564233   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /usr/share/ca-certificates/148752.pem
	I0930 20:00:24.564246   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:00:24.564276   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:00:24.567674   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:24.568209   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:00:24.568244   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:24.568458   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:00:24.568679   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:00:24.568859   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:00:24.569017   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:00:24.647988   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0930 20:00:24.652578   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0930 20:00:24.663570   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0930 20:00:24.667502   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0930 20:00:24.678300   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0930 20:00:24.682636   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0930 20:00:24.692556   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0930 20:00:24.697407   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0930 20:00:24.708600   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0930 20:00:24.716272   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0930 20:00:24.726239   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0930 20:00:24.730151   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0930 20:00:24.740007   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 20:00:24.764135   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 20:00:24.787511   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 20:00:24.811921   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 20:00:24.835050   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0930 20:00:24.858111   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 20:00:24.881164   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 20:00:24.905084   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 20:00:24.930204   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 20:00:24.954976   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 20:00:24.979893   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 20:00:25.004028   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0930 20:00:25.020509   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0930 20:00:25.037112   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0930 20:00:25.053614   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0930 20:00:25.069699   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0930 20:00:25.087062   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0930 20:00:25.103141   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0930 20:00:25.119089   26315 ssh_runner.go:195] Run: openssl version
	I0930 20:00:25.124587   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 20:00:25.135122   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 20:00:25.139645   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 20:00:25.139709   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 20:00:25.145556   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 20:00:25.156636   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 20:00:25.167339   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 20:00:25.171719   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 20:00:25.171780   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 20:00:25.177212   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 20:00:25.188055   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 20:00:25.199114   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:00:25.203444   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:00:25.203514   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:00:25.209227   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
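
The three openssl runs above compute each CA's subject hash and link it as <hash>.0 under /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem) so the guest's system trust store picks the certificates up. A minimal local sketch of that hash-and-link step, assuming the cert path seen in the log and root privileges:

// cahash.go: sketch of the hash-and-symlink step logged above. It shells out to
// openssl just as the log's bash commands do; the path is illustrative.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // assumed to exist

	// `openssl x509 -hash -noout` prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))

	// The trust store expects a symlink named <hash>.0 pointing at the cert.
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); err == nil {
		return // already linked
	}
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
}
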
	I0930 20:00:25.220164   26315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 20:00:25.224532   26315 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 20:00:25.224591   26315 kubeadm.go:934] updating node {m02 192.168.39.220 8443 v1.31.1 crio true true} ...
	I0930 20:00:25.224694   26315 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-805293-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 20:00:25.224719   26315 kube-vip.go:115] generating kube-vip config ...
	I0930 20:00:25.224757   26315 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 20:00:25.242207   26315 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 20:00:25.242306   26315 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0930 20:00:25.242370   26315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 20:00:25.253224   26315 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0930 20:00:25.253326   26315 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0930 20:00:25.264511   26315 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0930 20:00:25.264547   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 20:00:25.264590   26315 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0930 20:00:25.264606   26315 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0930 20:00:25.264613   26315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 20:00:25.269385   26315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0930 20:00:25.269423   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0930 20:00:26.288255   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 20:00:26.288359   26315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 20:00:26.293355   26315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0930 20:00:26.293391   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0930 20:00:26.370842   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 20:00:26.408125   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 20:00:26.408233   26315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 20:00:26.414764   26315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0930 20:00:26.414804   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
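
The ?checksum=file:...sha256 query strings above show the kubectl/kubeadm/kubelet downloads being verified against the published SHA-256 files before the binaries are copied into /var/lib/minikube/binaries/v1.31.1. A minimal sketch of that download-and-verify idea using plain net/http (this is not minikube's download package; the URL is the kubelet one from the log):

// fetchverify.go: sketch of fetching a release binary and checking it against the
// published .sha256 file, mirroring the checksum-qualified URLs in the log above.
// It buffers the whole file in memory, which is acceptable for a sketch only.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet"

	bin, err := fetch(base)
	if err != nil {
		log.Fatal(err)
	}
	sums, err := fetch(base + ".sha256") // the .sha256 file holds the expected digest
	if err != nil {
		log.Fatal(err)
	}

	sum := sha256.Sum256(bin)
	got := hex.EncodeToString(sum[:])
	want := strings.Fields(string(sums))[0]
	if got != want {
		log.Fatalf("checksum mismatch for kubelet: got %s, want %s", got, want)
	}
	if err := os.WriteFile("kubelet", bin, 0o755); err != nil {
		log.Fatal(err)
	}
	log.Println("kubelet verified and written")
}
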
	I0930 20:00:26.848584   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0930 20:00:26.858015   26315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0930 20:00:26.874053   26315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 20:00:26.890616   26315 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 20:00:26.906680   26315 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 20:00:26.910431   26315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 20:00:26.921656   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:00:27.039123   26315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:00:27.056773   26315 host.go:66] Checking if "ha-805293" exists ...
	I0930 20:00:27.057124   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:27.057173   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:27.072237   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34135
	I0930 20:00:27.072852   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:27.073292   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:27.073321   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:27.073651   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:27.073859   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:00:27.073989   26315 start.go:317] joinCluster: &{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:00:27.074091   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0930 20:00:27.074108   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:00:27.076745   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:27.077111   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:00:27.077130   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:27.077207   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:00:27.077370   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:00:27.077633   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:00:27.077784   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:00:27.230308   26315 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:00:27.230355   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cnuzai.6xkseww2aia5hxhb --discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-805293-m02 --control-plane --apiserver-advertise-address=192.168.39.220 --apiserver-bind-port=8443"
	I0930 20:00:50.312960   26315 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cnuzai.6xkseww2aia5hxhb --discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-805293-m02 --control-plane --apiserver-advertise-address=192.168.39.220 --apiserver-bind-port=8443": (23.082567099s)
	I0930 20:00:50.313004   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0930 20:00:50.837990   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-805293-m02 minikube.k8s.io/updated_at=2024_09_30T20_00_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022 minikube.k8s.io/name=ha-805293 minikube.k8s.io/primary=false
	I0930 20:00:50.975697   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-805293-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0930 20:00:51.102316   26315 start.go:319] duration metric: took 24.028319202s to joinCluster
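
Condensed, the join recorded above is: mint a long-lived join token on the existing control plane, run kubeadm join on the new machine with --control-plane and its own advertise address, start the kubelet, then label the node and drop the control-plane NoSchedule taint. A hedged Go sketch that replays those shell steps follows; in the real run the first command executes on the primary node and the rest on m02 over SSH, and the kubeconfig path and label here are simplified placeholders.

// joinsketch.go: condensed replay of the control-plane join recorded above,
// shelled out in one place purely as an illustration.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func run(cmd string) string {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		log.Fatalf("%s: %v\n%s", cmd, err, out)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	// 1. Mint a non-expiring join token and print the full join command.
	join := run("sudo kubeadm token create --print-join-command --ttl=0")

	// 2. Join as an additional control plane, advertising this node's own IP.
	run("sudo " + join + " --control-plane --apiserver-advertise-address=192.168.39.220 --apiserver-bind-port=8443")

	// 3. Ensure the kubelet is enabled and running.
	run("sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet")

	// 4. Tag the node as a secondary minikube node and allow it to schedule pods.
	run("sudo kubectl --kubeconfig=/etc/kubernetes/admin.conf label --overwrite nodes ha-805293-m02 minikube.k8s.io/primary=false")
	run("sudo kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes ha-805293-m02 node-role.kubernetes.io/control-plane:NoSchedule-")
}
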
	I0930 20:00:51.102444   26315 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:00:51.102695   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:00:51.104462   26315 out.go:177] * Verifying Kubernetes components...
	I0930 20:00:51.105980   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:00:51.368169   26315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:00:51.414670   26315 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:00:51.415012   26315 kapi.go:59] client config for ha-805293: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key", CAFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 20:00:51.415098   26315 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.3:8443
	I0930 20:00:51.415444   26315 node_ready.go:35] waiting up to 6m0s for node "ha-805293-m02" to be "Ready" ...
	I0930 20:00:51.415604   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:51.415616   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:51.415627   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:51.415634   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:51.426106   26315 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0930 20:00:51.915725   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:51.915750   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:51.915764   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:51.915771   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:51.920139   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:00:52.416072   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:52.416092   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:52.416100   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:52.416104   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:52.419738   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:52.915687   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:52.915720   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:52.915733   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:52.915739   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:52.920070   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:00:53.415992   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:53.416013   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:53.416021   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:53.416027   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:53.419709   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:53.420257   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:00:53.915641   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:53.915662   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:53.915670   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:53.915675   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:53.918936   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:54.415947   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:54.415969   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:54.415978   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:54.415983   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:54.419470   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:54.916559   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:54.916594   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:54.916604   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:54.916609   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:54.920769   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:00:55.415723   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:55.415749   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:55.415760   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:55.415767   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:55.419960   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:00:55.420655   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:00:55.915703   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:55.915725   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:55.915732   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:55.915737   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:55.918792   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:56.415726   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:56.415759   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:56.415768   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:56.415771   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:56.419845   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:00:56.915720   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:56.915749   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:56.915761   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:56.915768   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:56.919114   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:57.415890   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:57.415920   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:57.415930   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:57.415936   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:57.419326   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:57.916001   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:57.916024   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:57.916032   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:57.916036   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:57.919385   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:57.920066   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:00:58.416036   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:58.416058   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:58.416066   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:58.416071   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:58.444113   26315 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0930 20:00:58.915821   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:58.915851   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:58.915865   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:58.915872   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:58.919943   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:00:59.415861   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:59.415883   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:59.415892   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:59.415896   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:59.419554   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:59.916644   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:59.916665   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:59.916673   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:59.916681   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:59.920228   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:59.920834   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:01:00.415729   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:00.415764   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:00.415772   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:00.415777   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:00.419232   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:00.915725   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:00.915748   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:00.915758   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:00.915764   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:00.920882   26315 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 20:01:01.416215   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:01.416240   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:01.416249   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:01.416252   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:01.419889   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:01.916651   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:01.916673   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:01.916680   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:01.916686   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:01.920422   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:01.920906   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:01:02.416417   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:02.416447   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:02.416458   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:02.416465   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:02.420384   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:02.916614   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:02.916639   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:02.916647   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:02.916651   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:02.920435   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:03.416222   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:03.416246   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:03.416255   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:03.416258   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:03.419787   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:03.915698   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:03.915726   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:03.915735   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:03.915739   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:03.919427   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:04.415764   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:04.415788   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:04.415797   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:04.415801   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:04.419012   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:04.419574   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:01:04.915824   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:04.915846   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:04.915855   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:04.915859   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:04.920091   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:01:05.415756   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:05.415780   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:05.415787   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:05.415791   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:05.421271   26315 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 20:01:05.915718   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:05.915739   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:05.915747   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:05.915751   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:05.919141   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:06.415741   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:06.415762   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:06.415770   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:06.415774   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:06.418886   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:06.419650   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:01:06.916104   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:06.916133   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:06.916144   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:06.916149   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:06.919406   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:07.416605   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:07.416630   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:07.416639   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:07.416646   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:07.419940   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:07.915753   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:07.915780   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:07.915790   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:07.915795   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:07.919449   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:08.416606   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:08.416630   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:08.416638   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:08.416643   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:08.420794   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:01:08.421339   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:01:08.915715   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:08.915738   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:08.915746   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:08.915752   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:08.919389   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:09.416586   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:09.416611   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.416621   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.416628   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.419914   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:09.916640   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:09.916661   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.916669   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.916673   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.919743   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:09.920355   26315 node_ready.go:49] node "ha-805293-m02" has status "Ready":"True"
	I0930 20:01:09.920385   26315 node_ready.go:38] duration metric: took 18.504913608s for node "ha-805293-m02" to be "Ready" ...
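
The polling loop above is a hand-rolled GET of /api/v1/nodes/ha-805293-m02 every ~500ms until the node reports the Ready condition. The same wait can be sketched with client-go roughly as follows (kubeconfig path, node name and the 6m timeout are taken from the log; this is an outline, not minikube's node_ready.go):

// nodeready.go: sketch of waiting for a node's Ready condition with client-go,
// the same check the round_trippers polling above performs by hand.
package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19736-7672/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Re-check every 500ms for up to 6 minutes, matching the logged timeout.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-805293-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "keep polling"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		log.Fatal(err)
	}
	log.Println(`node "ha-805293-m02" is Ready`)
}
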
	I0930 20:01:09.920395   26315 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 20:01:09.920461   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:01:09.920470   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.920477   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.920481   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.924944   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:01:09.930623   26315 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x7zjp" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.930723   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-x7zjp
	I0930 20:01:09.930731   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.930739   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.930743   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.933787   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:09.934467   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:09.934486   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.934497   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.934502   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.936935   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.937372   26315 pod_ready.go:93] pod "coredns-7c65d6cfc9-x7zjp" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:09.937389   26315 pod_ready.go:82] duration metric: took 6.738618ms for pod "coredns-7c65d6cfc9-x7zjp" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.937399   26315 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-z4bkv" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.937452   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-z4bkv
	I0930 20:01:09.937460   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.937467   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.937471   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.939718   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.940345   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:09.940360   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.940367   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.940372   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.942825   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.943347   26315 pod_ready.go:93] pod "coredns-7c65d6cfc9-z4bkv" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:09.943362   26315 pod_ready.go:82] duration metric: took 5.957941ms for pod "coredns-7c65d6cfc9-z4bkv" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.943374   26315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.943449   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-805293
	I0930 20:01:09.943477   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.943493   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.943502   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.946145   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.946815   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:09.946829   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.946837   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.946841   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.949619   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.950200   26315 pod_ready.go:93] pod "etcd-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:09.950222   26315 pod_ready.go:82] duration metric: took 6.836708ms for pod "etcd-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.950233   26315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.950305   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-805293-m02
	I0930 20:01:09.950326   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.950334   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.950340   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.953306   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.953792   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:09.953806   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.953813   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.953817   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.956400   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.956812   26315 pod_ready.go:93] pod "etcd-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:09.956829   26315 pod_ready.go:82] duration metric: took 6.588184ms for pod "etcd-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.956845   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:10.117233   26315 request.go:632] Waited for 160.320722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293
	I0930 20:01:10.117300   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293
	I0930 20:01:10.117306   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:10.117318   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:10.117324   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:10.120940   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:10.317057   26315 request.go:632] Waited for 195.415809ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:10.317127   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:10.317135   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:10.317156   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:10.317180   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:10.320648   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:10.321373   26315 pod_ready.go:93] pod "kube-apiserver-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:10.321392   26315 pod_ready.go:82] duration metric: took 364.537566ms for pod "kube-apiserver-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:10.321402   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:10.517507   26315 request.go:632] Waited for 196.023112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293-m02
	I0930 20:01:10.517576   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293-m02
	I0930 20:01:10.517583   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:10.517594   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:10.517601   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:10.521299   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:10.717299   26315 request.go:632] Waited for 195.382491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:10.717366   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:10.717372   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:10.717379   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:10.717384   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:10.720883   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:10.721468   26315 pod_ready.go:93] pod "kube-apiserver-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:10.721488   26315 pod_ready.go:82] duration metric: took 400.07752ms for pod "kube-apiserver-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:10.721497   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:10.917490   26315 request.go:632] Waited for 195.929177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293
	I0930 20:01:10.917554   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293
	I0930 20:01:10.917574   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:10.917606   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:10.917617   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:10.921610   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:11.116693   26315 request.go:632] Waited for 194.297174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:11.116753   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:11.116759   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:11.116766   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:11.116769   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:11.120537   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:11.121044   26315 pod_ready.go:93] pod "kube-controller-manager-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:11.121062   26315 pod_ready.go:82] duration metric: took 399.55959ms for pod "kube-controller-manager-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:11.121074   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:11.317266   26315 request.go:632] Waited for 196.133826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293-m02
	I0930 20:01:11.317335   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293-m02
	I0930 20:01:11.317342   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:11.317351   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:11.317358   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:11.321265   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:11.517020   26315 request.go:632] Waited for 195.154322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:11.517082   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:11.517089   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:11.517098   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:11.517103   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:11.520779   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:11.521296   26315 pod_ready.go:93] pod "kube-controller-manager-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:11.521319   26315 pod_ready.go:82] duration metric: took 400.238082ms for pod "kube-controller-manager-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:11.521335   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6gnt4" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:11.716800   26315 request.go:632] Waited for 195.390285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gnt4
	I0930 20:01:11.716888   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gnt4
	I0930 20:01:11.716896   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:11.716906   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:11.716911   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:11.720246   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:11.917422   26315 request.go:632] Waited for 196.372605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:11.917500   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:11.917508   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:11.917518   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:11.917526   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:11.921353   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:11.921887   26315 pod_ready.go:93] pod "kube-proxy-6gnt4" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:11.921912   26315 pod_ready.go:82] duration metric: took 400.568991ms for pod "kube-proxy-6gnt4" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:11.921925   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vptrg" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:12.116927   26315 request.go:632] Waited for 194.932043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vptrg
	I0930 20:01:12.117009   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vptrg
	I0930 20:01:12.117015   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:12.117022   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:12.117026   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:12.121372   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:01:12.317480   26315 request.go:632] Waited for 195.395103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:12.317541   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:12.317546   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:12.317553   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:12.317556   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:12.321223   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:12.321777   26315 pod_ready.go:93] pod "kube-proxy-vptrg" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:12.321796   26315 pod_ready.go:82] duration metric: took 399.864157ms for pod "kube-proxy-vptrg" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:12.321806   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:12.516927   26315 request.go:632] Waited for 195.058252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293
	I0930 20:01:12.517009   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293
	I0930 20:01:12.517015   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:12.517022   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:12.517029   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:12.520681   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:12.717635   26315 request.go:632] Waited for 196.390201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:12.717694   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:12.717698   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:12.717706   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:12.717714   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:12.721311   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:12.721886   26315 pod_ready.go:93] pod "kube-scheduler-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:12.721903   26315 pod_ready.go:82] duration metric: took 400.091381ms for pod "kube-scheduler-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:12.721913   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:12.917094   26315 request.go:632] Waited for 195.106579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293-m02
	I0930 20:01:12.917184   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293-m02
	I0930 20:01:12.917193   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:12.917203   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:12.917212   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:12.921090   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:13.117142   26315 request.go:632] Waited for 195.345819ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:13.117216   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:13.117221   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:13.117229   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:13.117232   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:13.120777   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:13.121215   26315 pod_ready.go:93] pod "kube-scheduler-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:13.121232   26315 pod_ready.go:82] duration metric: took 399.313081ms for pod "kube-scheduler-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:13.121242   26315 pod_ready.go:39] duration metric: took 3.200834368s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
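The "Ready" checks logged above reduce to reading each pod's PodReady condition from the API server. A minimal client-go sketch of that check follows; the kubeconfig path is an illustrative assumption, the pod name is one of the pods polled above, and this is just the shape of the call, not minikube's own helper.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Illustrative kubeconfig path; minikube keeps its own under the profile directory.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-ha-805293-m02", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", isPodReady(pod))
}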
	I0930 20:01:13.121266   26315 api_server.go:52] waiting for apiserver process to appear ...
	I0930 20:01:13.121324   26315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 20:01:13.137767   26315 api_server.go:72] duration metric: took 22.035280113s to wait for apiserver process to appear ...
	I0930 20:01:13.137797   26315 api_server.go:88] waiting for apiserver healthz status ...
	I0930 20:01:13.137828   26315 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I0930 20:01:13.141994   26315 api_server.go:279] https://192.168.39.3:8443/healthz returned 200:
	ok
	I0930 20:01:13.142067   26315 round_trippers.go:463] GET https://192.168.39.3:8443/version
	I0930 20:01:13.142074   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:13.142082   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:13.142090   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:13.142859   26315 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0930 20:01:13.142975   26315 api_server.go:141] control plane version: v1.31.1
	I0930 20:01:13.142993   26315 api_server.go:131] duration metric: took 5.190596ms to wait for apiserver health ...
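The healthz and version probes above are plain HTTPS GETs against the control-plane endpoint. A minimal sketch, assuming /healthz answers without client credentials and skipping certificate verification for brevity (the real client trusts the cluster CA); the address is the one logged above.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// TLS verification skipped only for this illustration.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.39.3:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // the run above saw: 200 ok
}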
	I0930 20:01:13.143001   26315 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 20:01:13.317422   26315 request.go:632] Waited for 174.359049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:01:13.317472   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:01:13.317478   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:13.317484   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:13.317488   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:13.321962   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:01:13.326370   26315 system_pods.go:59] 17 kube-system pods found
	I0930 20:01:13.326406   26315 system_pods.go:61] "coredns-7c65d6cfc9-x7zjp" [b5b20ed2-1d94-49b9-ab9e-17e27d1012d0] Running
	I0930 20:01:13.326411   26315 system_pods.go:61] "coredns-7c65d6cfc9-z4bkv" [c6ba0288-138e-4690-a68d-6d6378e28deb] Running
	I0930 20:01:13.326415   26315 system_pods.go:61] "etcd-ha-805293" [399ae7f6-cec9-4e8d-bda2-6c85dbcc5613] Running
	I0930 20:01:13.326420   26315 system_pods.go:61] "etcd-ha-805293-m02" [06ff461f-0ed1-4010-bcf7-1e82e4a589eb] Running
	I0930 20:01:13.326425   26315 system_pods.go:61] "kindnet-lfldt" [62cfaae6-e635-4ba4-a0db-77d008d12706] Running
	I0930 20:01:13.326429   26315 system_pods.go:61] "kindnet-slhtm" [a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88] Running
	I0930 20:01:13.326432   26315 system_pods.go:61] "kube-apiserver-ha-805293" [e975ca94-0069-4dfc-bc42-fa14fff226d5] Running
	I0930 20:01:13.326435   26315 system_pods.go:61] "kube-apiserver-ha-805293-m02" [c0f6d06d-f2d3-4796-ba43-16db58da16f7] Running
	I0930 20:01:13.326438   26315 system_pods.go:61] "kube-controller-manager-ha-805293" [01616da3-61eb-494b-a55c-28acaa308938] Running
	I0930 20:01:13.326442   26315 system_pods.go:61] "kube-controller-manager-ha-805293-m02" [14e035c1-fd94-43ab-aa98-3f20108eba57] Running
	I0930 20:01:13.326445   26315 system_pods.go:61] "kube-proxy-6gnt4" [a90b0c3f-e9c3-4cb9-8773-8253bd72ab51] Running
	I0930 20:01:13.326448   26315 system_pods.go:61] "kube-proxy-vptrg" [324c92ea-b82f-4efa-b63c-4c590bbf214d] Running
	I0930 20:01:13.326451   26315 system_pods.go:61] "kube-scheduler-ha-805293" [fbff9dea-1599-43ab-bb92-df8c5231bb87] Running
	I0930 20:01:13.326454   26315 system_pods.go:61] "kube-scheduler-ha-805293-m02" [9e69f915-83ac-48de-9bd6-3d245a2e82be] Running
	I0930 20:01:13.326457   26315 system_pods.go:61] "kube-vip-ha-805293" [9c629f9e-1b42-4680-9fd8-2dae4cec07f8] Running
	I0930 20:01:13.326459   26315 system_pods.go:61] "kube-vip-ha-805293-m02" [ec99538b-4f84-4078-b64d-23086cbf2c45] Running
	I0930 20:01:13.326462   26315 system_pods.go:61] "storage-provisioner" [1912fdf8-d789-4ba9-99ff-c87ccbf330ec] Running
	I0930 20:01:13.326467   26315 system_pods.go:74] duration metric: took 183.46129ms to wait for pod list to return data ...
	I0930 20:01:13.326477   26315 default_sa.go:34] waiting for default service account to be created ...
	I0930 20:01:13.516843   26315 request.go:632] Waited for 190.295336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/default/serviceaccounts
	I0930 20:01:13.516914   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/default/serviceaccounts
	I0930 20:01:13.516919   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:13.516926   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:13.516929   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:13.520919   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:13.521167   26315 default_sa.go:45] found service account: "default"
	I0930 20:01:13.521184   26315 default_sa.go:55] duration metric: took 194.701824ms for default service account to be created ...
	I0930 20:01:13.521193   26315 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 20:01:13.717380   26315 request.go:632] Waited for 196.119354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:01:13.717451   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:01:13.717458   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:13.717467   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:13.717471   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:13.722690   26315 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 20:01:13.727139   26315 system_pods.go:86] 17 kube-system pods found
	I0930 20:01:13.727168   26315 system_pods.go:89] "coredns-7c65d6cfc9-x7zjp" [b5b20ed2-1d94-49b9-ab9e-17e27d1012d0] Running
	I0930 20:01:13.727174   26315 system_pods.go:89] "coredns-7c65d6cfc9-z4bkv" [c6ba0288-138e-4690-a68d-6d6378e28deb] Running
	I0930 20:01:13.727179   26315 system_pods.go:89] "etcd-ha-805293" [399ae7f6-cec9-4e8d-bda2-6c85dbcc5613] Running
	I0930 20:01:13.727184   26315 system_pods.go:89] "etcd-ha-805293-m02" [06ff461f-0ed1-4010-bcf7-1e82e4a589eb] Running
	I0930 20:01:13.727188   26315 system_pods.go:89] "kindnet-lfldt" [62cfaae6-e635-4ba4-a0db-77d008d12706] Running
	I0930 20:01:13.727193   26315 system_pods.go:89] "kindnet-slhtm" [a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88] Running
	I0930 20:01:13.727198   26315 system_pods.go:89] "kube-apiserver-ha-805293" [e975ca94-0069-4dfc-bc42-fa14fff226d5] Running
	I0930 20:01:13.727204   26315 system_pods.go:89] "kube-apiserver-ha-805293-m02" [c0f6d06d-f2d3-4796-ba43-16db58da16f7] Running
	I0930 20:01:13.727209   26315 system_pods.go:89] "kube-controller-manager-ha-805293" [01616da3-61eb-494b-a55c-28acaa308938] Running
	I0930 20:01:13.727217   26315 system_pods.go:89] "kube-controller-manager-ha-805293-m02" [14e035c1-fd94-43ab-aa98-3f20108eba57] Running
	I0930 20:01:13.727230   26315 system_pods.go:89] "kube-proxy-6gnt4" [a90b0c3f-e9c3-4cb9-8773-8253bd72ab51] Running
	I0930 20:01:13.727235   26315 system_pods.go:89] "kube-proxy-vptrg" [324c92ea-b82f-4efa-b63c-4c590bbf214d] Running
	I0930 20:01:13.727241   26315 system_pods.go:89] "kube-scheduler-ha-805293" [fbff9dea-1599-43ab-bb92-df8c5231bb87] Running
	I0930 20:01:13.727247   26315 system_pods.go:89] "kube-scheduler-ha-805293-m02" [9e69f915-83ac-48de-9bd6-3d245a2e82be] Running
	I0930 20:01:13.727252   26315 system_pods.go:89] "kube-vip-ha-805293" [9c629f9e-1b42-4680-9fd8-2dae4cec07f8] Running
	I0930 20:01:13.727257   26315 system_pods.go:89] "kube-vip-ha-805293-m02" [ec99538b-4f84-4078-b64d-23086cbf2c45] Running
	I0930 20:01:13.727261   26315 system_pods.go:89] "storage-provisioner" [1912fdf8-d789-4ba9-99ff-c87ccbf330ec] Running
	I0930 20:01:13.727270   26315 system_pods.go:126] duration metric: took 206.072644ms to wait for k8s-apps to be running ...
	I0930 20:01:13.727277   26315 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 20:01:13.727327   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 20:01:13.741981   26315 system_svc.go:56] duration metric: took 14.693769ms WaitForService to wait for kubelet
	I0930 20:01:13.742010   26315 kubeadm.go:582] duration metric: took 22.639532003s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 20:01:13.742027   26315 node_conditions.go:102] verifying NodePressure condition ...
	I0930 20:01:13.917345   26315 request.go:632] Waited for 175.232926ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes
	I0930 20:01:13.917397   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes
	I0930 20:01:13.917402   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:13.917410   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:13.917413   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:13.921853   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:01:13.922642   26315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:01:13.922674   26315 node_conditions.go:123] node cpu capacity is 2
	I0930 20:01:13.922690   26315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:01:13.922694   26315 node_conditions.go:123] node cpu capacity is 2
	I0930 20:01:13.922699   26315 node_conditions.go:105] duration metric: took 180.667513ms to run NodePressure ...
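The NodePressure step reads each node's advertised capacity from the Nodes list; the figures above (2 CPUs, 17734596Ki ephemeral storage per node) come straight from node.Status.Capacity. A sketch of the same read with client-go, with the kubeconfig path again an illustrative assumption:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}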
	I0930 20:01:13.922708   26315 start.go:241] waiting for startup goroutines ...
	I0930 20:01:13.922733   26315 start.go:255] writing updated cluster config ...
	I0930 20:01:13.925048   26315 out.go:201] 
	I0930 20:01:13.926843   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:01:13.926954   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:01:13.928893   26315 out.go:177] * Starting "ha-805293-m03" control-plane node in "ha-805293" cluster
	I0930 20:01:13.930308   26315 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 20:01:13.930336   26315 cache.go:56] Caching tarball of preloaded images
	I0930 20:01:13.930467   26315 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 20:01:13.930485   26315 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 20:01:13.930582   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:01:13.930765   26315 start.go:360] acquireMachinesLock for ha-805293-m03: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 20:01:13.930817   26315 start.go:364] duration metric: took 28.082µs to acquireMachinesLock for "ha-805293-m03"
	I0930 20:01:13.930836   26315 start.go:93] Provisioning new machine with config: &{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:01:13.930923   26315 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0930 20:01:13.932766   26315 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 20:01:13.932890   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:01:13.932929   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:01:13.949248   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36881
	I0930 20:01:13.949763   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:01:13.950280   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:01:13.950304   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:01:13.950634   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:01:13.950970   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetMachineName
	I0930 20:01:13.951189   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:13.951448   26315 start.go:159] libmachine.API.Create for "ha-805293" (driver="kvm2")
	I0930 20:01:13.951489   26315 client.go:168] LocalClient.Create starting
	I0930 20:01:13.951565   26315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem
	I0930 20:01:13.951611   26315 main.go:141] libmachine: Decoding PEM data...
	I0930 20:01:13.951631   26315 main.go:141] libmachine: Parsing certificate...
	I0930 20:01:13.951696   26315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem
	I0930 20:01:13.951724   26315 main.go:141] libmachine: Decoding PEM data...
	I0930 20:01:13.951742   26315 main.go:141] libmachine: Parsing certificate...
	I0930 20:01:13.951770   26315 main.go:141] libmachine: Running pre-create checks...
	I0930 20:01:13.951780   26315 main.go:141] libmachine: (ha-805293-m03) Calling .PreCreateCheck
	I0930 20:01:13.951958   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetConfigRaw
	I0930 20:01:13.952389   26315 main.go:141] libmachine: Creating machine...
	I0930 20:01:13.952404   26315 main.go:141] libmachine: (ha-805293-m03) Calling .Create
	I0930 20:01:13.952539   26315 main.go:141] libmachine: (ha-805293-m03) Creating KVM machine...
	I0930 20:01:13.953896   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found existing default KVM network
	I0930 20:01:13.954082   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found existing private KVM network mk-ha-805293
	I0930 20:01:13.954276   26315 main.go:141] libmachine: (ha-805293-m03) Setting up store path in /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03 ...
	I0930 20:01:13.954303   26315 main.go:141] libmachine: (ha-805293-m03) Building disk image from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 20:01:13.954425   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:13.954267   27054 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:01:13.954521   26315 main.go:141] libmachine: (ha-805293-m03) Downloading /home/jenkins/minikube-integration/19736-7672/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 20:01:14.186819   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:14.186689   27054 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa...
	I0930 20:01:14.467265   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:14.467127   27054 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/ha-805293-m03.rawdisk...
	I0930 20:01:14.467311   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Writing magic tar header
	I0930 20:01:14.467327   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Writing SSH key tar header
	I0930 20:01:14.467340   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:14.467280   27054 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03 ...
	I0930 20:01:14.467434   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03
	I0930 20:01:14.467495   26315 main.go:141] libmachine: (ha-805293-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03 (perms=drwx------)
	I0930 20:01:14.467509   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines
	I0930 20:01:14.467520   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:01:14.467545   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672
	I0930 20:01:14.467563   26315 main.go:141] libmachine: (ha-805293-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines (perms=drwxr-xr-x)
	I0930 20:01:14.467577   26315 main.go:141] libmachine: (ha-805293-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube (perms=drwxr-xr-x)
	I0930 20:01:14.467590   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 20:01:14.467603   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home/jenkins
	I0930 20:01:14.467614   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home
	I0930 20:01:14.467622   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Skipping /home - not owner
	I0930 20:01:14.467636   26315 main.go:141] libmachine: (ha-805293-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672 (perms=drwxrwxr-x)
	I0930 20:01:14.467659   26315 main.go:141] libmachine: (ha-805293-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 20:01:14.467677   26315 main.go:141] libmachine: (ha-805293-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 20:01:14.467702   26315 main.go:141] libmachine: (ha-805293-m03) Creating domain...
	I0930 20:01:14.468847   26315 main.go:141] libmachine: (ha-805293-m03) define libvirt domain using xml: 
	I0930 20:01:14.468871   26315 main.go:141] libmachine: (ha-805293-m03) <domain type='kvm'>
	I0930 20:01:14.468881   26315 main.go:141] libmachine: (ha-805293-m03)   <name>ha-805293-m03</name>
	I0930 20:01:14.468899   26315 main.go:141] libmachine: (ha-805293-m03)   <memory unit='MiB'>2200</memory>
	I0930 20:01:14.468932   26315 main.go:141] libmachine: (ha-805293-m03)   <vcpu>2</vcpu>
	I0930 20:01:14.468950   26315 main.go:141] libmachine: (ha-805293-m03)   <features>
	I0930 20:01:14.468968   26315 main.go:141] libmachine: (ha-805293-m03)     <acpi/>
	I0930 20:01:14.468978   26315 main.go:141] libmachine: (ha-805293-m03)     <apic/>
	I0930 20:01:14.469001   26315 main.go:141] libmachine: (ha-805293-m03)     <pae/>
	I0930 20:01:14.469014   26315 main.go:141] libmachine: (ha-805293-m03)     
	I0930 20:01:14.469041   26315 main.go:141] libmachine: (ha-805293-m03)   </features>
	I0930 20:01:14.469062   26315 main.go:141] libmachine: (ha-805293-m03)   <cpu mode='host-passthrough'>
	I0930 20:01:14.469074   26315 main.go:141] libmachine: (ha-805293-m03)   
	I0930 20:01:14.469080   26315 main.go:141] libmachine: (ha-805293-m03)   </cpu>
	I0930 20:01:14.469091   26315 main.go:141] libmachine: (ha-805293-m03)   <os>
	I0930 20:01:14.469107   26315 main.go:141] libmachine: (ha-805293-m03)     <type>hvm</type>
	I0930 20:01:14.469115   26315 main.go:141] libmachine: (ha-805293-m03)     <boot dev='cdrom'/>
	I0930 20:01:14.469124   26315 main.go:141] libmachine: (ha-805293-m03)     <boot dev='hd'/>
	I0930 20:01:14.469143   26315 main.go:141] libmachine: (ha-805293-m03)     <bootmenu enable='no'/>
	I0930 20:01:14.469154   26315 main.go:141] libmachine: (ha-805293-m03)   </os>
	I0930 20:01:14.469164   26315 main.go:141] libmachine: (ha-805293-m03)   <devices>
	I0930 20:01:14.469248   26315 main.go:141] libmachine: (ha-805293-m03)     <disk type='file' device='cdrom'>
	I0930 20:01:14.469284   26315 main.go:141] libmachine: (ha-805293-m03)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/boot2docker.iso'/>
	I0930 20:01:14.469299   26315 main.go:141] libmachine: (ha-805293-m03)       <target dev='hdc' bus='scsi'/>
	I0930 20:01:14.469305   26315 main.go:141] libmachine: (ha-805293-m03)       <readonly/>
	I0930 20:01:14.469314   26315 main.go:141] libmachine: (ha-805293-m03)     </disk>
	I0930 20:01:14.469321   26315 main.go:141] libmachine: (ha-805293-m03)     <disk type='file' device='disk'>
	I0930 20:01:14.469350   26315 main.go:141] libmachine: (ha-805293-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 20:01:14.469366   26315 main.go:141] libmachine: (ha-805293-m03)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/ha-805293-m03.rawdisk'/>
	I0930 20:01:14.469381   26315 main.go:141] libmachine: (ha-805293-m03)       <target dev='hda' bus='virtio'/>
	I0930 20:01:14.469387   26315 main.go:141] libmachine: (ha-805293-m03)     </disk>
	I0930 20:01:14.469400   26315 main.go:141] libmachine: (ha-805293-m03)     <interface type='network'>
	I0930 20:01:14.469410   26315 main.go:141] libmachine: (ha-805293-m03)       <source network='mk-ha-805293'/>
	I0930 20:01:14.469421   26315 main.go:141] libmachine: (ha-805293-m03)       <model type='virtio'/>
	I0930 20:01:14.469427   26315 main.go:141] libmachine: (ha-805293-m03)     </interface>
	I0930 20:01:14.469437   26315 main.go:141] libmachine: (ha-805293-m03)     <interface type='network'>
	I0930 20:01:14.469456   26315 main.go:141] libmachine: (ha-805293-m03)       <source network='default'/>
	I0930 20:01:14.469482   26315 main.go:141] libmachine: (ha-805293-m03)       <model type='virtio'/>
	I0930 20:01:14.469512   26315 main.go:141] libmachine: (ha-805293-m03)     </interface>
	I0930 20:01:14.469521   26315 main.go:141] libmachine: (ha-805293-m03)     <serial type='pty'>
	I0930 20:01:14.469540   26315 main.go:141] libmachine: (ha-805293-m03)       <target port='0'/>
	I0930 20:01:14.469572   26315 main.go:141] libmachine: (ha-805293-m03)     </serial>
	I0930 20:01:14.469589   26315 main.go:141] libmachine: (ha-805293-m03)     <console type='pty'>
	I0930 20:01:14.469603   26315 main.go:141] libmachine: (ha-805293-m03)       <target type='serial' port='0'/>
	I0930 20:01:14.469614   26315 main.go:141] libmachine: (ha-805293-m03)     </console>
	I0930 20:01:14.469623   26315 main.go:141] libmachine: (ha-805293-m03)     <rng model='virtio'>
	I0930 20:01:14.469631   26315 main.go:141] libmachine: (ha-805293-m03)       <backend model='random'>/dev/random</backend>
	I0930 20:01:14.469642   26315 main.go:141] libmachine: (ha-805293-m03)     </rng>
	I0930 20:01:14.469648   26315 main.go:141] libmachine: (ha-805293-m03)     
	I0930 20:01:14.469658   26315 main.go:141] libmachine: (ha-805293-m03)     
	I0930 20:01:14.469664   26315 main.go:141] libmachine: (ha-805293-m03)   </devices>
	I0930 20:01:14.469672   26315 main.go:141] libmachine: (ha-805293-m03) </domain>
	I0930 20:01:14.469677   26315 main.go:141] libmachine: (ha-805293-m03) 
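The XML printed line by line above is what gets handed to libvirt to define the new domain. A rough sketch of the define-and-start call, assuming the libvirt.org/go/libvirt bindings and an abbreviated copy of the XML (the full document is the one logged above, and some hosts may require extra device elements):

package main

import (
	"fmt"

	"libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system") // URI from the machine config above
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Abbreviated domain XML for illustration only.
	domainXML := `<domain type='kvm'>
  <name>ha-805293-m03</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type><boot dev='hd'/></os>
  <devices></devices>
</domain>`

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		panic(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil { // boots the defined domain
		panic(err)
	}
	fmt.Println("domain defined and started")
}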
	I0930 20:01:14.476673   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:7e:5d:5f in network default
	I0930 20:01:14.477269   26315 main.go:141] libmachine: (ha-805293-m03) Ensuring networks are active...
	I0930 20:01:14.477295   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:14.478121   26315 main.go:141] libmachine: (ha-805293-m03) Ensuring network default is active
	I0930 20:01:14.478526   26315 main.go:141] libmachine: (ha-805293-m03) Ensuring network mk-ha-805293 is active
	I0930 20:01:14.478957   26315 main.go:141] libmachine: (ha-805293-m03) Getting domain xml...
	I0930 20:01:14.479718   26315 main.go:141] libmachine: (ha-805293-m03) Creating domain...
	I0930 20:01:15.747292   26315 main.go:141] libmachine: (ha-805293-m03) Waiting to get IP...
	I0930 20:01:15.748220   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:15.748679   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:15.748743   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:15.748666   27054 retry.go:31] will retry after 284.785124ms: waiting for machine to come up
	I0930 20:01:16.035256   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:16.035716   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:16.035831   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:16.035661   27054 retry.go:31] will retry after 335.488124ms: waiting for machine to come up
	I0930 20:01:16.373109   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:16.373683   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:16.373706   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:16.373645   27054 retry.go:31] will retry after 461.768045ms: waiting for machine to come up
	I0930 20:01:16.837400   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:16.837942   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:16.838002   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:16.837899   27054 retry.go:31] will retry after 451.939776ms: waiting for machine to come up
	I0930 20:01:17.291224   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:17.291638   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:17.291662   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:17.291600   27054 retry.go:31] will retry after 601.468058ms: waiting for machine to come up
	I0930 20:01:17.894045   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:17.894474   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:17.894502   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:17.894444   27054 retry.go:31] will retry after 685.014003ms: waiting for machine to come up
	I0930 20:01:18.581469   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:18.581905   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:18.581940   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:18.581886   27054 retry.go:31] will retry after 901.632295ms: waiting for machine to come up
	I0930 20:01:19.485606   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:19.486144   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:19.486174   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:19.486068   27054 retry.go:31] will retry after 1.002316049s: waiting for machine to come up
	I0930 20:01:20.489568   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:20.490064   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:20.490086   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:20.490017   27054 retry.go:31] will retry after 1.384559526s: waiting for machine to come up
	I0930 20:01:21.875542   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:21.875885   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:21.875904   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:21.875821   27054 retry.go:31] will retry after 1.560882287s: waiting for machine to come up
	I0930 20:01:23.438575   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:23.439019   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:23.439051   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:23.438971   27054 retry.go:31] will retry after 1.966635221s: waiting for machine to come up
	I0930 20:01:25.407626   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:25.408136   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:25.408170   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:25.408088   27054 retry.go:31] will retry after 2.861827785s: waiting for machine to come up
	I0930 20:01:28.272997   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:28.273395   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:28.273417   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:28.273357   27054 retry.go:31] will retry after 2.760760648s: waiting for machine to come up
	I0930 20:01:31.035244   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:31.035758   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:31.035806   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:31.035729   27054 retry.go:31] will retry after 3.889423891s: waiting for machine to come up
	I0930 20:01:34.927053   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:34.927650   26315 main.go:141] libmachine: (ha-805293-m03) Found IP for machine: 192.168.39.227
	I0930 20:01:34.927682   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has current primary IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:34.927690   26315 main.go:141] libmachine: (ha-805293-m03) Reserving static IP address...
	I0930 20:01:34.928071   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find host DHCP lease matching {name: "ha-805293-m03", mac: "52:54:00:ce:66:df", ip: "192.168.39.227"} in network mk-ha-805293
	I0930 20:01:35.005095   26315 main.go:141] libmachine: (ha-805293-m03) Reserved static IP address: 192.168.39.227
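The wait-for-IP phase above is a poll of the network's DHCP leases with a delay that grows, with jitter, between attempts. A generic sketch of that retry shape; lookupIP is a stand-in, not minikube's actual lease query, and the MAC address is the one logged above.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt network's DHCP leases.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet") // pretend the lease has not appeared
}

func main() {
	delay := 300 * time.Millisecond
	for attempt := 1; attempt <= 15; attempt++ {
		ip, err := lookupIP("52:54:00:ce:66:df")
		if err == nil {
			fmt.Println("got IP:", ip)
			return
		}
		// Grow the delay and add jitter, roughly matching the intervals logged above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("attempt %d: %v, retrying after %v\n", attempt, err, sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	fmt.Println("gave up waiting for an IP")
}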
	I0930 20:01:35.005128   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Getting to WaitForSSH function...
	I0930 20:01:35.005135   26315 main.go:141] libmachine: (ha-805293-m03) Waiting for SSH to be available...
	I0930 20:01:35.007521   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.008053   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.008080   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.008244   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Using SSH client type: external
	I0930 20:01:35.008262   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa (-rw-------)
	I0930 20:01:35.008294   26315 main.go:141] libmachine: (ha-805293-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 20:01:35.008309   26315 main.go:141] libmachine: (ha-805293-m03) DBG | About to run SSH command:
	I0930 20:01:35.008328   26315 main.go:141] libmachine: (ha-805293-m03) DBG | exit 0
	I0930 20:01:35.131490   26315 main.go:141] libmachine: (ha-805293-m03) DBG | SSH cmd err, output: <nil>: 
	I0930 20:01:35.131786   26315 main.go:141] libmachine: (ha-805293-m03) KVM machine creation complete!
	I0930 20:01:35.132088   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetConfigRaw
	I0930 20:01:35.132882   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:35.133160   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:35.133330   26315 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 20:01:35.133343   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetState
	I0930 20:01:35.134758   26315 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 20:01:35.134778   26315 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 20:01:35.134789   26315 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 20:01:35.134797   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.137025   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.137368   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.137394   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.137501   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:35.137683   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.137839   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.137997   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:35.138162   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:01:35.138394   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0930 20:01:35.138405   26315 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 20:01:35.238733   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
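Both SSH probes above simply run `exit 0` on the new VM, first through the external ssh binary and then through a native client. A minimal sketch of the native variant with golang.org/x/crypto/ssh, using the key path and address from the log and disabling host-key checking as the logged command line does:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", "192.168.39.227:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	fmt.Println("exit 0 ->", session.Run("exit 0")) // nil means SSH is up
}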
	I0930 20:01:35.238763   26315 main.go:141] libmachine: Detecting the provisioner...
	I0930 20:01:35.238775   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.242022   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.242527   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.242562   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.242839   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:35.243050   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.243235   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.243427   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:35.243630   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:01:35.243832   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0930 20:01:35.243850   26315 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 20:01:35.348183   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 20:01:35.348252   26315 main.go:141] libmachine: found compatible host: buildroot
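Provisioner detection above amounts to running `cat /etc/os-release` over SSH and parsing the key=value output (NAME, ID, VERSION_ID). A small sketch of that parse over the exact text captured above:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// Output captured in the log above.
	osRelease := `NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"`

	info := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		info[parts[0]] = strings.Trim(parts[1], `"`)
	}
	fmt.Println(info["ID"], info["VERSION_ID"]) // buildroot 2023.02.9
}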
	I0930 20:01:35.348261   26315 main.go:141] libmachine: Provisioning with buildroot...
	I0930 20:01:35.348268   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetMachineName
	I0930 20:01:35.348498   26315 buildroot.go:166] provisioning hostname "ha-805293-m03"
	I0930 20:01:35.348524   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetMachineName
	I0930 20:01:35.348749   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.351890   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.352398   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.352424   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.352577   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:35.352756   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.352894   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.353007   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:35.353167   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:01:35.353367   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0930 20:01:35.353384   26315 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-805293-m03 && echo "ha-805293-m03" | sudo tee /etc/hostname
	I0930 20:01:35.473967   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-805293-m03
	
	I0930 20:01:35.473997   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.476729   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.477054   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.477085   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.477369   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:35.477567   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.477748   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.477907   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:35.478077   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:01:35.478253   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0930 20:01:35.478270   26315 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-805293-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-805293-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-805293-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 20:01:35.591650   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
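
	The shell run just above makes the node's own hostname resolvable by rewriting the 127.0.1.1 entry only when no matching line already exists. Below is a minimal Go sketch of that same idempotent update; this is not minikube's actual code, and the path and hostname are hardcoded purely for illustration.

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mimics the shell logic above: if no line already maps the
	// hostname, either rewrite an existing "127.0.1.1 ..." line or append one.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		content := string(data)
		if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(content) {
			return nil // already present, nothing to do
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(content) {
			content = loopback.ReplaceAllString(content, "127.0.1.1 "+hostname)
		} else {
			if !strings.HasSuffix(content, "\n") {
				content += "\n"
			}
			content += "127.0.1.1 " + hostname + "\n"
		}
		return os.WriteFile(path, []byte(content), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "ha-805293-m03"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
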
	I0930 20:01:35.591680   26315 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 20:01:35.591697   26315 buildroot.go:174] setting up certificates
	I0930 20:01:35.591707   26315 provision.go:84] configureAuth start
	I0930 20:01:35.591715   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetMachineName
	I0930 20:01:35.591952   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetIP
	I0930 20:01:35.594901   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.595262   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.595286   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.595420   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.598100   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.598602   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.598626   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.598829   26315 provision.go:143] copyHostCerts
	I0930 20:01:35.598868   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:01:35.598917   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 20:01:35.598931   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:01:35.599012   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 20:01:35.599111   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:01:35.599134   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 20:01:35.599141   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:01:35.599179   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 20:01:35.599243   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:01:35.599270   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 20:01:35.599279   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:01:35.599331   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 20:01:35.599408   26315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.ha-805293-m03 san=[127.0.0.1 192.168.39.227 ha-805293-m03 localhost minikube]
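
	The provision step above issues a per-machine server certificate whose SAN list covers the node IP, loopback, and host names. The sketch below shows the general shape of that with Go's crypto/x509; it is self-signed for brevity (minikube signs against the ca.pem shown in the paths above) and is not the project's actual implementation.

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Key for the server certificate.
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}

		// SANs mirroring the log line: IPs plus host names.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-805293-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.227")},
			DNSNames:     []string{"ha-805293-m03", "localhost", "minikube"},
		}

		// Self-signed for illustration: the template is also the parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
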
	I0930 20:01:35.796149   26315 provision.go:177] copyRemoteCerts
	I0930 20:01:35.796206   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 20:01:35.796242   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.798946   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.799340   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.799368   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.799648   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:35.799848   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.800023   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:35.800180   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa Username:docker}
	I0930 20:01:35.882427   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 20:01:35.882508   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 20:01:35.906794   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 20:01:35.906860   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 20:01:35.932049   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 20:01:35.932131   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 20:01:35.957426   26315 provision.go:87] duration metric: took 365.707269ms to configureAuth
	I0930 20:01:35.957459   26315 buildroot.go:189] setting minikube options for container-runtime
	I0930 20:01:35.957679   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:01:35.957795   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.960499   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.960961   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.960996   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.961176   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:35.961403   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.961575   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.961765   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:35.961966   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:01:35.962139   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0930 20:01:35.962153   26315 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 20:01:36.182253   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 20:01:36.182280   26315 main.go:141] libmachine: Checking connection to Docker...
	I0930 20:01:36.182288   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetURL
	I0930 20:01:36.183907   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Using libvirt version 6000000
	I0930 20:01:36.186215   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.186549   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.186590   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.186762   26315 main.go:141] libmachine: Docker is up and running!
	I0930 20:01:36.186776   26315 main.go:141] libmachine: Reticulating splines...
	I0930 20:01:36.186783   26315 client.go:171] duration metric: took 22.235285837s to LocalClient.Create
	I0930 20:01:36.186801   26315 start.go:167] duration metric: took 22.235357522s to libmachine.API.Create "ha-805293"
	I0930 20:01:36.186810   26315 start.go:293] postStartSetup for "ha-805293-m03" (driver="kvm2")
	I0930 20:01:36.186826   26315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 20:01:36.186842   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:36.187054   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 20:01:36.187077   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:36.189228   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.189551   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.189577   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.189754   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:36.189932   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:36.190098   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:36.190211   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa Username:docker}
	I0930 20:01:36.269942   26315 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 20:01:36.274174   26315 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 20:01:36.274204   26315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 20:01:36.274281   26315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 20:01:36.274373   26315 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 20:01:36.274383   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /etc/ssl/certs/148752.pem
	I0930 20:01:36.274490   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 20:01:36.284037   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:01:36.308961   26315 start.go:296] duration metric: took 122.135978ms for postStartSetup
	I0930 20:01:36.309010   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetConfigRaw
	I0930 20:01:36.309613   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetIP
	I0930 20:01:36.312777   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.313257   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.313307   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.313687   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:01:36.313894   26315 start.go:128] duration metric: took 22.382961104s to createHost
	I0930 20:01:36.313917   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:36.316229   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.316599   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.316627   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.316783   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:36.316957   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:36.317109   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:36.317219   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:36.317366   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:01:36.317526   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0930 20:01:36.317537   26315 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 20:01:36.419858   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727726496.392744661
	
	I0930 20:01:36.419877   26315 fix.go:216] guest clock: 1727726496.392744661
	I0930 20:01:36.419884   26315 fix.go:229] Guest: 2024-09-30 20:01:36.392744661 +0000 UTC Remote: 2024-09-30 20:01:36.313905276 +0000 UTC m=+139.884995221 (delta=78.839385ms)
	I0930 20:01:36.419899   26315 fix.go:200] guest clock delta is within tolerance: 78.839385ms
	I0930 20:01:36.419904   26315 start.go:83] releasing machines lock for "ha-805293-m03", held for 22.489079696s
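
	The `date +%s.%N` round trip above is how the host verifies that the guest clock is close enough to its own before continuing. Here is a small illustrative sketch of parsing that output and checking the skew; the sample timestamp comes from the log line above, and the one-second tolerance is an assumption, not minikube's setting.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseEpoch turns "seconds.nanoseconds" (the `date +%s.%N` format) into a time.Time.
	func parseEpoch(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseEpoch("1727726496.392744661") // value taken from the log above
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 1 * time.Second // illustrative threshold only
		fmt.Printf("clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
	}
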
	I0930 20:01:36.419932   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:36.420201   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetIP
	I0930 20:01:36.422678   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.423024   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.423063   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.425360   26315 out.go:177] * Found network options:
	I0930 20:01:36.426711   26315 out.go:177]   - NO_PROXY=192.168.39.3,192.168.39.220
	W0930 20:01:36.427962   26315 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 20:01:36.427990   26315 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 20:01:36.428012   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:36.428657   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:36.428857   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:36.428967   26315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 20:01:36.429007   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	W0930 20:01:36.429092   26315 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 20:01:36.429124   26315 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 20:01:36.429190   26315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 20:01:36.429211   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:36.431941   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.432202   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.432300   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.432322   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.432458   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:36.432598   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:36.432659   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.432683   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.432755   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:36.432845   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:36.432915   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa Username:docker}
	I0930 20:01:36.432995   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:36.433083   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:36.433164   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa Username:docker}
	I0930 20:01:36.661994   26315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 20:01:36.669285   26315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 20:01:36.669354   26315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 20:01:36.686879   26315 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 20:01:36.686911   26315 start.go:495] detecting cgroup driver to use...
	I0930 20:01:36.687008   26315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 20:01:36.703695   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 20:01:36.717831   26315 docker.go:217] disabling cri-docker service (if available) ...
	I0930 20:01:36.717898   26315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 20:01:36.732194   26315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 20:01:36.746205   26315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 20:01:36.873048   26315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 20:01:37.031067   26315 docker.go:233] disabling docker service ...
	I0930 20:01:37.031142   26315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 20:01:37.047034   26315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 20:01:37.059962   26315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 20:01:37.191501   26315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 20:01:37.302357   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 20:01:37.316910   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 20:01:37.336669   26315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 20:01:37.336739   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.347286   26315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 20:01:37.347361   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.357984   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.368059   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.379248   26315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 20:01:37.390460   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.401206   26315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.418758   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.428841   26315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 20:01:37.438255   26315 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 20:01:37.438328   26315 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 20:01:37.451070   26315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 20:01:37.460818   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:01:37.578097   26315 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 20:01:37.670992   26315 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 20:01:37.671072   26315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 20:01:37.675792   26315 start.go:563] Will wait 60s for crictl version
	I0930 20:01:37.675847   26315 ssh_runner.go:195] Run: which crictl
	I0930 20:01:37.679190   26315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 20:01:37.718042   26315 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 20:01:37.718121   26315 ssh_runner.go:195] Run: crio --version
	I0930 20:01:37.745873   26315 ssh_runner.go:195] Run: crio --version
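
	The sed one-liners earlier in this step pin the pause image and switch CRI-O to the cgroupfs cgroup manager inside /etc/crio/crio.conf.d/02-crio.conf. A rough Go equivalent of that in-place rewrite follows, purely as illustration; the file path and keys are copied from the commands above.

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setConfValue replaces any existing `key = ...` line in a CRI-O drop-in with
	// `key = "value"`, mirroring the sed commands in the log.
	func setConfValue(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		updated := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
		return os.WriteFile(path, updated, 0644)
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		for key, value := range map[string]string{
			"pause_image":    "registry.k8s.io/pause:3.10",
			"cgroup_manager": "cgroupfs",
		} {
			if err := setConfValue(conf, key, value); err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
		}
	}

	After edits like these, the log shows crio being restarted and the runner waiting for /var/run/crio/crio.sock before moving on.
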
	I0930 20:01:37.774031   26315 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 20:01:37.775415   26315 out.go:177]   - env NO_PROXY=192.168.39.3
	I0930 20:01:37.776644   26315 out.go:177]   - env NO_PROXY=192.168.39.3,192.168.39.220
	I0930 20:01:37.777763   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetIP
	I0930 20:01:37.780596   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:37.780948   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:37.780970   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:37.781145   26315 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 20:01:37.785213   26315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 20:01:37.797526   26315 mustload.go:65] Loading cluster: ha-805293
	I0930 20:01:37.797767   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:01:37.798120   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:01:37.798167   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:01:37.813162   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46385
	I0930 20:01:37.813567   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:01:37.814037   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:01:37.814052   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:01:37.814397   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:01:37.814604   26315 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 20:01:37.816041   26315 host.go:66] Checking if "ha-805293" exists ...
	I0930 20:01:37.816336   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:01:37.816371   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:01:37.831585   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37645
	I0930 20:01:37.832045   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:01:37.832532   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:01:37.832557   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:01:37.832860   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:01:37.833026   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:01:37.833192   26315 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293 for IP: 192.168.39.227
	I0930 20:01:37.833209   26315 certs.go:194] generating shared ca certs ...
	I0930 20:01:37.833229   26315 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:01:37.833416   26315 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 20:01:37.833471   26315 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 20:01:37.833484   26315 certs.go:256] generating profile certs ...
	I0930 20:01:37.833587   26315 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key
	I0930 20:01:37.833619   26315 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.07a59e55
	I0930 20:01:37.833638   26315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.07a59e55 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.3 192.168.39.220 192.168.39.227 192.168.39.254]
	I0930 20:01:38.116566   26315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.07a59e55 ...
	I0930 20:01:38.116596   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.07a59e55: {Name:mkc0cd033bb8a494a4cf8a08dfd67f55b67932e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:01:38.116763   26315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.07a59e55 ...
	I0930 20:01:38.116776   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.07a59e55: {Name:mk85317566d0a2f89680d96c44f0e865cd88a3f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:01:38.116847   26315 certs.go:381] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.07a59e55 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt
	I0930 20:01:38.116983   26315 certs.go:385] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.07a59e55 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key
	I0930 20:01:38.117102   26315 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key
	I0930 20:01:38.117117   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 20:01:38.117131   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 20:01:38.117145   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 20:01:38.117158   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 20:01:38.117175   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 20:01:38.117187   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 20:01:38.117198   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 20:01:38.131699   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 20:01:38.131811   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 20:01:38.131856   26315 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 20:01:38.131870   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 20:01:38.131902   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 20:01:38.131926   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 20:01:38.131956   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 20:01:38.132010   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:01:38.132045   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:01:38.132066   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem -> /usr/share/ca-certificates/14875.pem
	I0930 20:01:38.132084   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /usr/share/ca-certificates/148752.pem
	I0930 20:01:38.132129   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:01:38.135411   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:01:38.135848   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:01:38.135875   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:01:38.136103   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:01:38.136307   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:01:38.136477   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:01:38.136602   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:01:38.215899   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0930 20:01:38.221340   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0930 20:01:38.232045   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0930 20:01:38.236011   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0930 20:01:38.247009   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0930 20:01:38.250999   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0930 20:01:38.261524   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0930 20:01:38.265766   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0930 20:01:38.275973   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0930 20:01:38.279940   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0930 20:01:38.289617   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0930 20:01:38.293330   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0930 20:01:38.303037   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 20:01:38.328067   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 20:01:38.353124   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 20:01:38.377109   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 20:01:38.402737   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0930 20:01:38.432128   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 20:01:38.459728   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 20:01:38.484047   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 20:01:38.508033   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 20:01:38.530855   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 20:01:38.554688   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 20:01:38.579730   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0930 20:01:38.595907   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0930 20:01:38.611657   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0930 20:01:38.627976   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0930 20:01:38.644290   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0930 20:01:38.662490   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0930 20:01:38.678795   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0930 20:01:38.694165   26315 ssh_runner.go:195] Run: openssl version
	I0930 20:01:38.699696   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 20:01:38.709850   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:01:38.714078   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:01:38.714128   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:01:38.719944   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 20:01:38.730979   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 20:01:38.741564   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 20:01:38.746132   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 20:01:38.746193   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 20:01:38.751872   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 20:01:38.763738   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 20:01:38.775831   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 20:01:38.780819   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 20:01:38.780877   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 20:01:38.786554   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
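
	The openssl and ln commands above hash each CA bundle and link it into /etc/ssl/certs so the system trust store picks it up. The same PEM files can be inspected programmatically with crypto/x509; a short sketch follows, using the minikubeCA.pem path from the log (the choice of file is otherwise arbitrary).

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Any of the CA bundles copied above would do; minikubeCA.pem is used as an example.
		data, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil || block.Type != "CERTIFICATE" {
			panic("no CERTIFICATE block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Printf("subject: %s\nissuer:  %s\nexpires: %s\nis CA:   %v\n",
			cert.Subject, cert.Issuer, cert.NotAfter, cert.IsCA)
	}
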
	I0930 20:01:38.797347   26315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 20:01:38.801341   26315 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 20:01:38.801400   26315 kubeadm.go:934] updating node {m03 192.168.39.227 8443 v1.31.1 crio true true} ...
	I0930 20:01:38.801503   26315 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-805293-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 20:01:38.801529   26315 kube-vip.go:115] generating kube-vip config ...
	I0930 20:01:38.801578   26315 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 20:01:38.819903   26315 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 20:01:38.819976   26315 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
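
	minikube renders the kube-vip static pod above from a template, substituting the VIP address, port, and interface for this cluster. The sketch below shows the general technique with text/template; the template body here is a heavily abbreviated stand-in for illustration, not the manifest minikube actually ships.

	package main

	import (
		"os"
		"text/template"
	)

	// A stripped-down static-pod template; only a few of the env vars from the
	// manifest above are included to keep the example short.
	const kubeVipTmpl = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - name: kube-vip
	    image: {{ .Image }}
	    args: ["manager"]
	    env:
	    - name: port
	      value: "{{ .Port }}"
	    - name: vip_interface
	      value: {{ .Interface }}
	    - name: address
	      value: {{ .VIP }}
	  hostNetwork: true
	`

	func main() {
		t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
		err := t.Execute(os.Stdout, struct {
			Image, Interface, VIP string
			Port                  int
		}{
			Image:     "ghcr.io/kube-vip/kube-vip:v0.8.3",
			Interface: "eth0",
			VIP:       "192.168.39.254",
			Port:      8443,
		})
		if err != nil {
			panic(err)
		}
	}

	The rendered manifest is then copied to /etc/kubernetes/manifests/kube-vip.yaml (see the scp line a little further down), where the kubelet picks it up as a static pod.
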
	I0930 20:01:38.820036   26315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 20:01:38.830324   26315 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0930 20:01:38.830375   26315 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0930 20:01:38.842272   26315 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0930 20:01:38.842334   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 20:01:38.842272   26315 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0930 20:01:38.842272   26315 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0930 20:01:38.842419   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 20:01:38.842439   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 20:01:38.842489   26315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 20:01:38.842540   26315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 20:01:38.861520   26315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0930 20:01:38.861559   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0930 20:01:38.861581   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 20:01:38.861631   26315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0930 20:01:38.861657   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0930 20:01:38.861689   26315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 20:01:38.875651   26315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0930 20:01:38.875695   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
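
	The `checksum=file:` URLs above indicate that each Kubernetes binary is verified against its published .sha256 file before being copied into /var/lib/minikube/binaries. Below is a self-contained sketch of that download-and-verify step; the URL is the kubectl one from the log, and the local output filename is arbitrary.

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"

		bin, err := fetch(base)
		if err != nil {
			panic(err)
		}
		sum, err := fetch(base + ".sha256")
		if err != nil {
			panic(err)
		}

		got := sha256.Sum256(bin)
		want := strings.Fields(strings.TrimSpace(string(sum)))[0] // file may contain "<hash>  <name>"
		if hex.EncodeToString(got[:]) != want {
			panic("checksum mismatch")
		}
		if err := os.WriteFile("kubectl", bin, 0755); err != nil {
			panic(err)
		}
		fmt.Println("kubectl downloaded and verified")
	}
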
	I0930 20:01:39.808722   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0930 20:01:39.819615   26315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0930 20:01:39.836414   26315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 20:01:39.853331   26315 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 20:01:39.869585   26315 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 20:01:39.873243   26315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 20:01:39.884957   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:01:40.006850   26315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:01:40.022775   26315 host.go:66] Checking if "ha-805293" exists ...
	I0930 20:01:40.023225   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:01:40.023284   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:01:40.040829   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I0930 20:01:40.041301   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:01:40.041861   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:01:40.041890   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:01:40.042247   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:01:40.042469   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:01:40.042649   26315 start.go:317] joinCluster: &{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:01:40.042812   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0930 20:01:40.042834   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:01:40.046258   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:01:40.046800   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:01:40.046821   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:01:40.047017   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:01:40.047286   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:01:40.047660   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:01:40.047833   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:01:40.209323   26315 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:01:40.209377   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1eegwc.d3x1pf4onbzzskk3 --discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-805293-m03 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443"
	I0930 20:02:03.693864   26315 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1eegwc.d3x1pf4onbzzskk3 --discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-805293-m03 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443": (23.484455167s)
	I0930 20:02:03.693901   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0930 20:02:04.227863   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-805293-m03 minikube.k8s.io/updated_at=2024_09_30T20_02_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022 minikube.k8s.io/name=ha-805293 minikube.k8s.io/primary=false
	I0930 20:02:04.356839   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-805293-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0930 20:02:04.460804   26315 start.go:319] duration metric: took 24.418151981s to joinCluster
	I0930 20:02:04.460890   26315 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:02:04.461213   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:02:04.462900   26315 out.go:177] * Verifying Kubernetes components...
	I0930 20:02:04.464457   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:02:04.710029   26315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:02:04.776170   26315 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:02:04.776405   26315 kapi.go:59] client config for ha-805293: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key", CAFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 20:02:04.776460   26315 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.3:8443
	I0930 20:02:04.776741   26315 node_ready.go:35] waiting up to 6m0s for node "ha-805293-m03" to be "Ready" ...
	I0930 20:02:04.776826   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:04.776836   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:04.776843   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:04.776849   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:04.780756   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:05.277289   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:05.277316   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:05.277328   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:05.277336   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:05.280839   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:05.777768   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:05.777793   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:05.777802   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:05.777810   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:05.781540   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:06.277679   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:06.277703   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:06.277713   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:06.277719   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:06.281145   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:06.777911   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:06.777937   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:06.777949   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:06.777955   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:06.781669   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:06.782486   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:07.277405   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:07.277428   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:07.277435   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:07.277438   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:07.281074   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:07.776952   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:07.776984   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:07.777005   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:07.777010   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:07.780689   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:08.277555   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:08.277576   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:08.277583   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:08.277587   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:08.283539   26315 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 20:02:08.777360   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:08.777381   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:08.777390   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:08.777394   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:08.780937   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:09.277721   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:09.277758   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:09.277768   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:09.277772   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:09.285233   26315 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 20:02:09.285662   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:09.776955   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:09.776977   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:09.776987   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:09.776992   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:09.781593   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:10.277015   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:10.277033   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:10.277045   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:10.277049   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:10.281851   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:10.777471   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:10.777502   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:10.777513   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:10.777518   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:10.780948   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:11.277959   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:11.277977   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:11.277985   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:11.277989   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:11.401106   26315 round_trippers.go:574] Response Status: 200 OK in 123 milliseconds
	I0930 20:02:11.401822   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:11.777418   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:11.777439   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:11.777447   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:11.777451   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:11.780577   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:12.277563   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:12.277586   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:12.277594   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:12.277600   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:12.280508   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:12.777614   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:12.777635   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:12.777644   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:12.777649   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:12.780589   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:13.277609   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:13.277647   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:13.277658   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:13.277664   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:13.280727   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:13.777657   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:13.777684   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:13.777692   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:13.777699   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:13.781417   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:13.781894   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:14.277640   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:14.277665   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:14.277674   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:14.277678   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:14.281731   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:14.777599   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:14.777622   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:14.777633   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:14.777638   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:14.780768   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:15.277270   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:15.277293   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:15.277302   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:15.277308   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:15.281504   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:15.777339   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:15.777363   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:15.777374   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:15.777380   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:15.780737   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:16.277475   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:16.277500   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:16.277508   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:16.277513   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:16.281323   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:16.281879   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:16.777003   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:16.777026   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:16.777033   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:16.777038   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:16.780794   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:17.277324   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:17.277345   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:17.277353   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:17.277362   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:17.281320   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:17.777286   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:17.777313   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:17.777323   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:17.777329   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:17.781420   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:18.277338   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:18.277361   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:18.277369   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:18.277374   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:18.280798   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:18.777933   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:18.777955   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:18.777963   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:18.777967   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:18.781895   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:18.782295   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:19.277039   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:19.277062   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:19.277070   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:19.277074   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:19.280872   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:19.776906   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:19.776931   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:19.776941   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:19.776945   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:19.789070   26315 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0930 20:02:20.277619   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:20.277645   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:20.277657   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:20.277664   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:20.281050   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:20.777108   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:20.777132   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:20.777140   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:20.777145   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:20.780896   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:21.277715   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:21.277737   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:21.277746   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:21.277750   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:21.281198   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:21.281766   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:21.777774   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:21.777798   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:21.777812   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:21.777818   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:21.781858   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:22.277699   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:22.277726   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.277737   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.277741   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.281520   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:22.777562   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:22.777588   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.777599   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.777606   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.781172   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:22.781900   26315 node_ready.go:49] node "ha-805293-m03" has status "Ready":"True"
	I0930 20:02:22.781919   26315 node_ready.go:38] duration metric: took 18.00516261s for node "ha-805293-m03" to be "Ready" ...
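The node_ready wait above is a plain poll: roughly every 500ms it issues GET /api/v1/nodes/ha-805293-m03 and checks the node's Ready condition, succeeding here after about 18s. The following is a minimal client-go sketch of that kind of check, not a copy of minikube's node_ready helper; the kubeconfig path is a placeholder.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node carries a Ready condition with status True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; substitute the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same 6m budget as the "waiting up to 6m0s" line in the log.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	// Re-fetch the node about every 500ms until it reports Ready or time runs out.
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, "ha-805293-m03", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for node to become Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}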
	I0930 20:02:22.781930   26315 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 20:02:22.782018   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:02:22.782034   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.782045   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.782050   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.788078   26315 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 20:02:22.794707   26315 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x7zjp" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.794792   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-x7zjp
	I0930 20:02:22.794802   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.794843   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.794851   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.798283   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:22.799034   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:22.799049   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.799059   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.799063   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.802512   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:22.803017   26315 pod_ready.go:93] pod "coredns-7c65d6cfc9-x7zjp" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:22.803034   26315 pod_ready.go:82] duration metric: took 8.303758ms for pod "coredns-7c65d6cfc9-x7zjp" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.803043   26315 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-z4bkv" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.803100   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-z4bkv
	I0930 20:02:22.803108   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.803115   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.803120   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.805708   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:22.806288   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:22.806303   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.806309   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.806314   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.808794   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:22.809193   26315 pod_ready.go:93] pod "coredns-7c65d6cfc9-z4bkv" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:22.809210   26315 pod_ready.go:82] duration metric: took 6.159698ms for pod "coredns-7c65d6cfc9-z4bkv" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.809221   26315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.809280   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-805293
	I0930 20:02:22.809291   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.809302   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.809310   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.811844   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:22.812420   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:22.812435   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.812441   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.812443   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.814572   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:22.815425   26315 pod_ready.go:93] pod "etcd-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:22.815446   26315 pod_ready.go:82] duration metric: took 6.21739ms for pod "etcd-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.815467   26315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.815571   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-805293-m02
	I0930 20:02:22.815579   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.815589   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.815596   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.819297   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:22.820054   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:22.820071   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.820078   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.820082   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.822946   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:22.823362   26315 pod_ready.go:93] pod "etcd-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:22.823377   26315 pod_ready.go:82] duration metric: took 7.903457ms for pod "etcd-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.823386   26315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.977860   26315 request.go:632] Waited for 154.412889ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-805293-m03
	I0930 20:02:22.977929   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-805293-m03
	I0930 20:02:22.977936   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.977947   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.977956   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.981875   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:23.177702   26315 request.go:632] Waited for 195.197886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:23.177761   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:23.177766   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:23.177774   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:23.177779   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:23.180898   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:23.181332   26315 pod_ready.go:93] pod "etcd-ha-805293-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:23.181350   26315 pod_ready.go:82] duration metric: took 357.955948ms for pod "etcd-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:23.181366   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:23.377609   26315 request.go:632] Waited for 196.161944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293
	I0930 20:02:23.377673   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293
	I0930 20:02:23.377681   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:23.377691   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:23.377697   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:23.381213   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:23.578424   26315 request.go:632] Waited for 196.368077ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:23.578500   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:23.578506   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:23.578514   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:23.578528   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:23.581799   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:23.582390   26315 pod_ready.go:93] pod "kube-apiserver-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:23.582406   26315 pod_ready.go:82] duration metric: took 401.034594ms for pod "kube-apiserver-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:23.582416   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:23.778543   26315 request.go:632] Waited for 196.052617ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293-m02
	I0930 20:02:23.778624   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293-m02
	I0930 20:02:23.778633   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:23.778643   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:23.778653   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:23.781828   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:23.977855   26315 request.go:632] Waited for 195.382083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:23.977924   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:23.977944   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:23.977959   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:23.977965   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:23.981372   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:23.982066   26315 pod_ready.go:93] pod "kube-apiserver-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:23.982087   26315 pod_ready.go:82] duration metric: took 399.664005ms for pod "kube-apiserver-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:23.982100   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:24.178123   26315 request.go:632] Waited for 195.960731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293-m03
	I0930 20:02:24.178196   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293-m03
	I0930 20:02:24.178203   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:24.178211   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:24.178236   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:24.182112   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:24.378558   26315 request.go:632] Waited for 195.433009ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:24.378638   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:24.378643   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:24.378650   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:24.378656   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:24.382291   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:24.382917   26315 pod_ready.go:93] pod "kube-apiserver-ha-805293-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:24.382938   26315 pod_ready.go:82] duration metric: took 400.829354ms for pod "kube-apiserver-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:24.382948   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:24.577887   26315 request.go:632] Waited for 194.863294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293
	I0930 20:02:24.577956   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293
	I0930 20:02:24.577963   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:24.577971   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:24.577978   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:24.581564   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:24.778150   26315 request.go:632] Waited for 195.36459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:24.778203   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:24.778208   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:24.778216   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:24.778221   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:24.781210   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:24.781808   26315 pod_ready.go:93] pod "kube-controller-manager-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:24.781826   26315 pod_ready.go:82] duration metric: took 398.871488ms for pod "kube-controller-manager-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:24.781839   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:24.977967   26315 request.go:632] Waited for 196.028192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293-m02
	I0930 20:02:24.978039   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293-m02
	I0930 20:02:24.978046   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:24.978055   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:24.978062   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:24.981635   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:25.177628   26315 request.go:632] Waited for 195.118197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:25.177702   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:25.177707   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:25.177715   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:25.177722   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:25.184032   26315 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 20:02:25.185117   26315 pod_ready.go:93] pod "kube-controller-manager-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:25.185151   26315 pod_ready.go:82] duration metric: took 403.303748ms for pod "kube-controller-manager-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:25.185168   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:25.378088   26315 request.go:632] Waited for 192.829504ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293-m03
	I0930 20:02:25.378247   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293-m03
	I0930 20:02:25.378262   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:25.378274   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:25.378284   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:25.382197   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:25.578183   26315 request.go:632] Waited for 195.374549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:25.578237   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:25.578241   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:25.578249   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:25.578273   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:25.581302   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:25.581967   26315 pod_ready.go:93] pod "kube-controller-manager-ha-805293-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:25.581990   26315 pod_ready.go:82] duration metric: took 396.812632ms for pod "kube-controller-manager-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:25.582004   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6gnt4" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:25.778066   26315 request.go:632] Waited for 195.961131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gnt4
	I0930 20:02:25.778120   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gnt4
	I0930 20:02:25.778125   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:25.778132   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:25.778136   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:25.781487   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:25.977671   26315 request.go:632] Waited for 195.30691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:25.977755   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:25.977762   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:25.977769   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:25.977775   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:25.981674   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:25.982338   26315 pod_ready.go:93] pod "kube-proxy-6gnt4" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:25.982360   26315 pod_ready.go:82] duration metric: took 400.349266ms for pod "kube-proxy-6gnt4" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:25.982370   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b9cpp" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:26.178400   26315 request.go:632] Waited for 195.958284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b9cpp
	I0930 20:02:26.178455   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b9cpp
	I0930 20:02:26.178460   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:26.178468   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:26.178474   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:26.181740   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:26.377643   26315 request.go:632] Waited for 195.301602ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:26.377715   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:26.377720   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:26.377730   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:26.377736   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:26.381534   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:26.382336   26315 pod_ready.go:93] pod "kube-proxy-b9cpp" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:26.382356   26315 pod_ready.go:82] duration metric: took 399.97947ms for pod "kube-proxy-b9cpp" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:26.382369   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vptrg" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:26.578135   26315 request.go:632] Waited for 195.696435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vptrg
	I0930 20:02:26.578222   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vptrg
	I0930 20:02:26.578231   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:26.578239   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:26.578246   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:26.581969   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:26.778092   26315 request.go:632] Waited for 195.270119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:26.778175   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:26.778183   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:26.778194   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:26.778204   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:26.781951   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:26.782497   26315 pod_ready.go:93] pod "kube-proxy-vptrg" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:26.782530   26315 pod_ready.go:82] duration metric: took 400.140578ms for pod "kube-proxy-vptrg" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:26.782542   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:26.978290   26315 request.go:632] Waited for 195.637761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293
	I0930 20:02:26.978361   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293
	I0930 20:02:26.978368   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:26.978377   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:26.978381   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:26.982459   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:27.178413   26315 request.go:632] Waited for 195.235139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:27.178464   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:27.178469   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:27.178476   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:27.178479   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:27.182089   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:27.182674   26315 pod_ready.go:93] pod "kube-scheduler-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:27.182695   26315 pod_ready.go:82] duration metric: took 400.147259ms for pod "kube-scheduler-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:27.182706   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:27.377673   26315 request.go:632] Waited for 194.89364ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293-m02
	I0930 20:02:27.377752   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293-m02
	I0930 20:02:27.377758   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:27.377765   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:27.377769   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:27.381356   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:27.578554   26315 request.go:632] Waited for 196.443432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:27.578622   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:27.578630   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:27.578641   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:27.578647   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:27.582325   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:27.582942   26315 pod_ready.go:93] pod "kube-scheduler-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:27.582965   26315 pod_ready.go:82] duration metric: took 400.251961ms for pod "kube-scheduler-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:27.582978   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:27.778055   26315 request.go:632] Waited for 195.008545ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293-m03
	I0930 20:02:27.778129   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293-m03
	I0930 20:02:27.778135   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:27.778142   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:27.778147   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:27.782023   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:27.977660   26315 request.go:632] Waited for 194.950522ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:27.977742   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:27.977752   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:27.977762   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:27.977769   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:27.981329   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:27.981878   26315 pod_ready.go:93] pod "kube-scheduler-ha-805293-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:27.981905   26315 pod_ready.go:82] duration metric: took 398.919132ms for pod "kube-scheduler-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:27.981920   26315 pod_ready.go:39] duration metric: took 5.199971217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 20:02:27.981939   26315 api_server.go:52] waiting for apiserver process to appear ...
	I0930 20:02:27.982009   26315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 20:02:27.999589   26315 api_server.go:72] duration metric: took 23.538667198s to wait for apiserver process to appear ...
	I0930 20:02:27.999616   26315 api_server.go:88] waiting for apiserver healthz status ...
	I0930 20:02:27.999635   26315 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I0930 20:02:28.006690   26315 api_server.go:279] https://192.168.39.3:8443/healthz returned 200:
	ok
	I0930 20:02:28.006768   26315 round_trippers.go:463] GET https://192.168.39.3:8443/version
	I0930 20:02:28.006788   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:28.006799   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:28.006804   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:28.008072   26315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0930 20:02:28.008144   26315 api_server.go:141] control plane version: v1.31.1
	I0930 20:02:28.008163   26315 api_server.go:131] duration metric: took 8.540356ms to wait for apiserver health ...
	I0930 20:02:28.008173   26315 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 20:02:28.178582   26315 request.go:632] Waited for 170.336703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:02:28.178653   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:02:28.178673   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:28.178683   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:28.178688   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:28.186196   26315 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 20:02:28.192615   26315 system_pods.go:59] 24 kube-system pods found
	I0930 20:02:28.192646   26315 system_pods.go:61] "coredns-7c65d6cfc9-x7zjp" [b5b20ed2-1d94-49b9-ab9e-17e27d1012d0] Running
	I0930 20:02:28.192651   26315 system_pods.go:61] "coredns-7c65d6cfc9-z4bkv" [c6ba0288-138e-4690-a68d-6d6378e28deb] Running
	I0930 20:02:28.192656   26315 system_pods.go:61] "etcd-ha-805293" [399ae7f6-cec9-4e8d-bda2-6c85dbcc5613] Running
	I0930 20:02:28.192661   26315 system_pods.go:61] "etcd-ha-805293-m02" [06ff461f-0ed1-4010-bcf7-1e82e4a589eb] Running
	I0930 20:02:28.192665   26315 system_pods.go:61] "etcd-ha-805293-m03" [c87078d8-ee99-4a5f-9258-cf5d7e658388] Running
	I0930 20:02:28.192668   26315 system_pods.go:61] "kindnet-lfldt" [62cfaae6-e635-4ba4-a0db-77d008d12706] Running
	I0930 20:02:28.192671   26315 system_pods.go:61] "kindnet-qrhb8" [852c4080-9210-47bb-a06a-d1b8bcff580d] Running
	I0930 20:02:28.192675   26315 system_pods.go:61] "kindnet-slhtm" [a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88] Running
	I0930 20:02:28.192679   26315 system_pods.go:61] "kube-apiserver-ha-805293" [e975ca94-0069-4dfc-bc42-fa14fff226d5] Running
	I0930 20:02:28.192682   26315 system_pods.go:61] "kube-apiserver-ha-805293-m02" [c0f6d06d-f2d3-4796-ba43-16db58da16f7] Running
	I0930 20:02:28.192687   26315 system_pods.go:61] "kube-apiserver-ha-805293-m03" [6fb5a285-7f35-4eb2-b028-6bd9fcfd21fe] Running
	I0930 20:02:28.192691   26315 system_pods.go:61] "kube-controller-manager-ha-805293" [01616da3-61eb-494b-a55c-28acaa308938] Running
	I0930 20:02:28.192695   26315 system_pods.go:61] "kube-controller-manager-ha-805293-m02" [14e035c1-fd94-43ab-aa98-3f20108eba57] Running
	I0930 20:02:28.192698   26315 system_pods.go:61] "kube-controller-manager-ha-805293-m03" [35d67e4a-f434-49df-8fb9-c6fcc725d8ff] Running
	I0930 20:02:28.192702   26315 system_pods.go:61] "kube-proxy-6gnt4" [a90b0c3f-e9c3-4cb9-8773-8253bd72ab51] Running
	I0930 20:02:28.192706   26315 system_pods.go:61] "kube-proxy-b9cpp" [c828ff6a-6cbb-4a29-84bc-118522687da8] Running
	I0930 20:02:28.192710   26315 system_pods.go:61] "kube-proxy-vptrg" [324c92ea-b82f-4efa-b63c-4c590bbf214d] Running
	I0930 20:02:28.192714   26315 system_pods.go:61] "kube-scheduler-ha-805293" [fbff9dea-1599-43ab-bb92-df8c5231bb87] Running
	I0930 20:02:28.192720   26315 system_pods.go:61] "kube-scheduler-ha-805293-m02" [9e69f915-83ac-48de-9bd6-3d245a2e82be] Running
	I0930 20:02:28.192723   26315 system_pods.go:61] "kube-scheduler-ha-805293-m03" [34e2edf8-ca25-4a7c-a626-ac037b40b905] Running
	I0930 20:02:28.192729   26315 system_pods.go:61] "kube-vip-ha-805293" [9c629f9e-1b42-4680-9fd8-2dae4cec07f8] Running
	I0930 20:02:28.192732   26315 system_pods.go:61] "kube-vip-ha-805293-m02" [ec99538b-4f84-4078-b64d-23086cbf2c45] Running
	I0930 20:02:28.192735   26315 system_pods.go:61] "kube-vip-ha-805293-m03" [fcc5a165-5430-45d3-8ec7-fbdf5adc7e20] Running
	I0930 20:02:28.192738   26315 system_pods.go:61] "storage-provisioner" [1912fdf8-d789-4ba9-99ff-c87ccbf330ec] Running
	I0930 20:02:28.192747   26315 system_pods.go:74] duration metric: took 184.564973ms to wait for pod list to return data ...
	I0930 20:02:28.192756   26315 default_sa.go:34] waiting for default service account to be created ...
	I0930 20:02:28.378324   26315 request.go:632] Waited for 185.488908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/default/serviceaccounts
	I0930 20:02:28.378382   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/default/serviceaccounts
	I0930 20:02:28.378387   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:28.378394   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:28.378398   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:28.382352   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:28.382515   26315 default_sa.go:45] found service account: "default"
	I0930 20:02:28.382532   26315 default_sa.go:55] duration metric: took 189.767008ms for default service account to be created ...
	I0930 20:02:28.382546   26315 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 20:02:28.578010   26315 request.go:632] Waited for 195.370903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:02:28.578070   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:02:28.578076   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:28.578083   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:28.578087   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:28.584177   26315 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 20:02:28.592272   26315 system_pods.go:86] 24 kube-system pods found
	I0930 20:02:28.592310   26315 system_pods.go:89] "coredns-7c65d6cfc9-x7zjp" [b5b20ed2-1d94-49b9-ab9e-17e27d1012d0] Running
	I0930 20:02:28.592319   26315 system_pods.go:89] "coredns-7c65d6cfc9-z4bkv" [c6ba0288-138e-4690-a68d-6d6378e28deb] Running
	I0930 20:02:28.592330   26315 system_pods.go:89] "etcd-ha-805293" [399ae7f6-cec9-4e8d-bda2-6c85dbcc5613] Running
	I0930 20:02:28.592336   26315 system_pods.go:89] "etcd-ha-805293-m02" [06ff461f-0ed1-4010-bcf7-1e82e4a589eb] Running
	I0930 20:02:28.592341   26315 system_pods.go:89] "etcd-ha-805293-m03" [c87078d8-ee99-4a5f-9258-cf5d7e658388] Running
	I0930 20:02:28.592346   26315 system_pods.go:89] "kindnet-lfldt" [62cfaae6-e635-4ba4-a0db-77d008d12706] Running
	I0930 20:02:28.592351   26315 system_pods.go:89] "kindnet-qrhb8" [852c4080-9210-47bb-a06a-d1b8bcff580d] Running
	I0930 20:02:28.592357   26315 system_pods.go:89] "kindnet-slhtm" [a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88] Running
	I0930 20:02:28.592363   26315 system_pods.go:89] "kube-apiserver-ha-805293" [e975ca94-0069-4dfc-bc42-fa14fff226d5] Running
	I0930 20:02:28.592368   26315 system_pods.go:89] "kube-apiserver-ha-805293-m02" [c0f6d06d-f2d3-4796-ba43-16db58da16f7] Running
	I0930 20:02:28.592374   26315 system_pods.go:89] "kube-apiserver-ha-805293-m03" [6fb5a285-7f35-4eb2-b028-6bd9fcfd21fe] Running
	I0930 20:02:28.592381   26315 system_pods.go:89] "kube-controller-manager-ha-805293" [01616da3-61eb-494b-a55c-28acaa308938] Running
	I0930 20:02:28.592388   26315 system_pods.go:89] "kube-controller-manager-ha-805293-m02" [14e035c1-fd94-43ab-aa98-3f20108eba57] Running
	I0930 20:02:28.592397   26315 system_pods.go:89] "kube-controller-manager-ha-805293-m03" [35d67e4a-f434-49df-8fb9-c6fcc725d8ff] Running
	I0930 20:02:28.592404   26315 system_pods.go:89] "kube-proxy-6gnt4" [a90b0c3f-e9c3-4cb9-8773-8253bd72ab51] Running
	I0930 20:02:28.592410   26315 system_pods.go:89] "kube-proxy-b9cpp" [c828ff6a-6cbb-4a29-84bc-118522687da8] Running
	I0930 20:02:28.592416   26315 system_pods.go:89] "kube-proxy-vptrg" [324c92ea-b82f-4efa-b63c-4c590bbf214d] Running
	I0930 20:02:28.592422   26315 system_pods.go:89] "kube-scheduler-ha-805293" [fbff9dea-1599-43ab-bb92-df8c5231bb87] Running
	I0930 20:02:28.592430   26315 system_pods.go:89] "kube-scheduler-ha-805293-m02" [9e69f915-83ac-48de-9bd6-3d245a2e82be] Running
	I0930 20:02:28.592436   26315 system_pods.go:89] "kube-scheduler-ha-805293-m03" [34e2edf8-ca25-4a7c-a626-ac037b40b905] Running
	I0930 20:02:28.592442   26315 system_pods.go:89] "kube-vip-ha-805293" [9c629f9e-1b42-4680-9fd8-2dae4cec07f8] Running
	I0930 20:02:28.592450   26315 system_pods.go:89] "kube-vip-ha-805293-m02" [ec99538b-4f84-4078-b64d-23086cbf2c45] Running
	I0930 20:02:28.592455   26315 system_pods.go:89] "kube-vip-ha-805293-m03" [fcc5a165-5430-45d3-8ec7-fbdf5adc7e20] Running
	I0930 20:02:28.592461   26315 system_pods.go:89] "storage-provisioner" [1912fdf8-d789-4ba9-99ff-c87ccbf330ec] Running
	I0930 20:02:28.592472   26315 system_pods.go:126] duration metric: took 209.917591ms to wait for k8s-apps to be running ...
	I0930 20:02:28.592485   26315 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 20:02:28.592534   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 20:02:28.608637   26315 system_svc.go:56] duration metric: took 16.145321ms WaitForService to wait for kubelet
	I0930 20:02:28.608674   26315 kubeadm.go:582] duration metric: took 24.147753749s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 20:02:28.608696   26315 node_conditions.go:102] verifying NodePressure condition ...
	I0930 20:02:28.778132   26315 request.go:632] Waited for 169.34168ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes
	I0930 20:02:28.778186   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes
	I0930 20:02:28.778191   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:28.778198   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:28.778202   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:28.782435   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:28.783582   26315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:02:28.783605   26315 node_conditions.go:123] node cpu capacity is 2
	I0930 20:02:28.783617   26315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:02:28.783621   26315 node_conditions.go:123] node cpu capacity is 2
	I0930 20:02:28.783625   26315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:02:28.783628   26315 node_conditions.go:123] node cpu capacity is 2
	I0930 20:02:28.783633   26315 node_conditions.go:105] duration metric: took 174.931399ms to run NodePressure ...
	I0930 20:02:28.783649   26315 start.go:241] waiting for startup goroutines ...
	I0930 20:02:28.783678   26315 start.go:255] writing updated cluster config ...
	I0930 20:02:28.783989   26315 ssh_runner.go:195] Run: rm -f paused
	I0930 20:02:28.838018   26315 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 20:02:28.840509   26315 out.go:177] * Done! kubectl is now configured to use "ha-805293" cluster and "default" namespace by default
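	The wait sequence recorded above (per-pod "Ready" checks, the apiserver /healthz probe, the default service-account lookup, and the NodePressure scan) is what leads up to that final "Done!" line. Purely as an illustrative sketch, not part of the test run and not minikube's own implementation, the snippet below shows one way to reproduce the pod-readiness portion of that wait with client-go against the same cluster; the kubeconfig path, the blanket kube-system check (instead of the exact label list in the log), and the 2-second poll interval are assumptions.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig written by "minikube start" (assumed default ~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx := context.Background()
		for {
			// The log above waits per component label (kube-dns, etcd, kube-apiserver, ...);
			// this simplified sketch just checks every pod in kube-system.
			pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
			if err != nil {
				panic(err)
			}
			ready := 0
			for _, p := range pods.Items {
				for _, cond := range p.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						ready++
						break
					}
				}
			}
			fmt.Printf("%d/%d kube-system pods Ready\n", ready, len(pods.Items))
			if len(pods.Items) > 0 && ready == len(pods.Items) {
				return
			}
			time.Sleep(2 * time.Second) // assumed poll interval; the tooling above relies on client-side throttling instead
		}
	}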
	
	
	==> CRI-O <==
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.320963938Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726774320936952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43fd0057-52f5-4091-817c-eebc2c769853 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.321524460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3f09569-2a62-4970-b1bc-5328ed5240c5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.321592129Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3f09569-2a62-4970-b1bc-5328ed5240c5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.321856670Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10ee59c77c769b646a6f94ef88076d89d99a5138229c27ab2ecd6eedc1ea0137,PodSandboxId:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727726553788768842,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b,PodSandboxId:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414310017018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d01ed71d852eed61bb80348ffe7fb51d168d95e1306c1563c1f48e5dbbf8f2c,PodSandboxId:2a39bd6449f5ae769d104fbeb8e59e2f8144520dfc21ce04f986400da9c5cf45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727726414272318094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c,PodSandboxId:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414250119749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-13
8e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa,PodSandboxId:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17277264
02286671649,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088,PodSandboxId:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727726402007379257,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8e1f537ce941dd5174a539d9c52bcdc043499fbf92875cdf6ed4fc819c4dbe,PodSandboxId:1fd2dbf5f5af033b5a3e52b79c474bc1a4f59060eca81c998f7ec1a08b0bd020,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727726392774120477,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ab114a2582827f884939bc3a1a2f15f,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463,PodSandboxId:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727726390313369486,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9fbbe2017dac31afa6b99397b35147479d921bd1c28368d0863e7deba96963,PodSandboxId:6fc84ff2f4f9e09491da5bb8f4fa755e40a60c0bec559ecff99973cd8d2fbbf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727726390327177630,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c,PodSandboxId:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727726390230461135,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994c927aa147aaacb19c3dc9b54178374731ce435295e01ceb9dbb1854a78f78,PodSandboxId:ec25e9867db7c44002a733caaf53a3e32f3ab4c28faa3767e1bca353d80692e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727726390173703617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c3f09569-2a62-4970-b1bc-5328ed5240c5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.360520471Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9cbd13da-18e5-461b-ba5e-5ced76c1b9af name=/runtime.v1.RuntimeService/Version
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.360622430Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9cbd13da-18e5-461b-ba5e-5ced76c1b9af name=/runtime.v1.RuntimeService/Version
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.361955257Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9cfe7c18-344d-4efe-bc18-5dfdbdb3b338 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.362613475Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726774362577155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9cfe7c18-344d-4efe-bc18-5dfdbdb3b338 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.363087985Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ed2e4d9-d41a-421c-96e3-bf73f12705db name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.363155992Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ed2e4d9-d41a-421c-96e3-bf73f12705db name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.363445483Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10ee59c77c769b646a6f94ef88076d89d99a5138229c27ab2ecd6eedc1ea0137,PodSandboxId:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727726553788768842,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b,PodSandboxId:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414310017018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d01ed71d852eed61bb80348ffe7fb51d168d95e1306c1563c1f48e5dbbf8f2c,PodSandboxId:2a39bd6449f5ae769d104fbeb8e59e2f8144520dfc21ce04f986400da9c5cf45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727726414272318094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c,PodSandboxId:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414250119749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-13
8e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa,PodSandboxId:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17277264
02286671649,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088,PodSandboxId:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727726402007379257,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8e1f537ce941dd5174a539d9c52bcdc043499fbf92875cdf6ed4fc819c4dbe,PodSandboxId:1fd2dbf5f5af033b5a3e52b79c474bc1a4f59060eca81c998f7ec1a08b0bd020,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727726392774120477,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ab114a2582827f884939bc3a1a2f15f,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463,PodSandboxId:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727726390313369486,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9fbbe2017dac31afa6b99397b35147479d921bd1c28368d0863e7deba96963,PodSandboxId:6fc84ff2f4f9e09491da5bb8f4fa755e40a60c0bec559ecff99973cd8d2fbbf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727726390327177630,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c,PodSandboxId:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727726390230461135,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994c927aa147aaacb19c3dc9b54178374731ce435295e01ceb9dbb1854a78f78,PodSandboxId:ec25e9867db7c44002a733caaf53a3e32f3ab4c28faa3767e1bca353d80692e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727726390173703617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ed2e4d9-d41a-421c-96e3-bf73f12705db name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.401516484Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a780bd3-89bb-4fbd-8911-03fbb01f6812 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.401626763Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a780bd3-89bb-4fbd-8911-03fbb01f6812 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.402735590Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bc43fde5-fbeb-4c13-a290-c759491ea99a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.403161666Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726774403138458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc43fde5-fbeb-4c13-a290-c759491ea99a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.403939478Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=872a9eb3-fae8-4ca2-97ef-dc3ce921959d name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.403994587Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=872a9eb3-fae8-4ca2-97ef-dc3ce921959d name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.404242000Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10ee59c77c769b646a6f94ef88076d89d99a5138229c27ab2ecd6eedc1ea0137,PodSandboxId:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727726553788768842,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b,PodSandboxId:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414310017018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d01ed71d852eed61bb80348ffe7fb51d168d95e1306c1563c1f48e5dbbf8f2c,PodSandboxId:2a39bd6449f5ae769d104fbeb8e59e2f8144520dfc21ce04f986400da9c5cf45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727726414272318094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c,PodSandboxId:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414250119749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-13
8e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa,PodSandboxId:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17277264
02286671649,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088,PodSandboxId:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727726402007379257,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8e1f537ce941dd5174a539d9c52bcdc043499fbf92875cdf6ed4fc819c4dbe,PodSandboxId:1fd2dbf5f5af033b5a3e52b79c474bc1a4f59060eca81c998f7ec1a08b0bd020,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727726392774120477,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ab114a2582827f884939bc3a1a2f15f,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463,PodSandboxId:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727726390313369486,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9fbbe2017dac31afa6b99397b35147479d921bd1c28368d0863e7deba96963,PodSandboxId:6fc84ff2f4f9e09491da5bb8f4fa755e40a60c0bec559ecff99973cd8d2fbbf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727726390327177630,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c,PodSandboxId:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727726390230461135,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994c927aa147aaacb19c3dc9b54178374731ce435295e01ceb9dbb1854a78f78,PodSandboxId:ec25e9867db7c44002a733caaf53a3e32f3ab4c28faa3767e1bca353d80692e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727726390173703617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=872a9eb3-fae8-4ca2-97ef-dc3ce921959d name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.447221395Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=47f14ecc-15d5-48ff-bc37-e3a31f3d3707 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.447439599Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=47f14ecc-15d5-48ff-bc37-e3a31f3d3707 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.448820673Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=023790e4-8421-4b9f-9867-fdc4bf310ad5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.449327450Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726774449261342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=023790e4-8421-4b9f-9867-fdc4bf310ad5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.451363937Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22e2df4a-a459-4612-8b6b-7ec19c5b005b name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.451442819Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22e2df4a-a459-4612-8b6b-7ec19c5b005b name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:14 ha-805293 crio[655]: time="2024-09-30 20:06:14.451686862Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10ee59c77c769b646a6f94ef88076d89d99a5138229c27ab2ecd6eedc1ea0137,PodSandboxId:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727726553788768842,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b,PodSandboxId:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414310017018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d01ed71d852eed61bb80348ffe7fb51d168d95e1306c1563c1f48e5dbbf8f2c,PodSandboxId:2a39bd6449f5ae769d104fbeb8e59e2f8144520dfc21ce04f986400da9c5cf45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727726414272318094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c,PodSandboxId:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414250119749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-13
8e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa,PodSandboxId:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17277264
02286671649,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088,PodSandboxId:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727726402007379257,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8e1f537ce941dd5174a539d9c52bcdc043499fbf92875cdf6ed4fc819c4dbe,PodSandboxId:1fd2dbf5f5af033b5a3e52b79c474bc1a4f59060eca81c998f7ec1a08b0bd020,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727726392774120477,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ab114a2582827f884939bc3a1a2f15f,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463,PodSandboxId:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727726390313369486,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9fbbe2017dac31afa6b99397b35147479d921bd1c28368d0863e7deba96963,PodSandboxId:6fc84ff2f4f9e09491da5bb8f4fa755e40a60c0bec559ecff99973cd8d2fbbf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727726390327177630,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c,PodSandboxId:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727726390230461135,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994c927aa147aaacb19c3dc9b54178374731ce435295e01ceb9dbb1854a78f78,PodSandboxId:ec25e9867db7c44002a733caaf53a3e32f3ab4c28faa3767e1bca353d80692e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727726390173703617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22e2df4a-a459-4612-8b6b-7ec19c5b005b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	10ee59c77c769       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   a8d4349f6e0b0       busybox-7dff88458-r27jf
	8c540e4668f99       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   f95d30afc0491       coredns-7c65d6cfc9-x7zjp
	6d01ed71d852e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   2a39bd6449f5a       storage-provisioner
	beba42a2bf035       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   626fdaeb1b142       coredns-7c65d6cfc9-z4bkv
	e28b6781ed449       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   36a3293339cae       kindnet-slhtm
	cd73b6dc43348       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   27a0913ae182a       kube-proxy-6gnt4
	5e8e1f537ce94       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   1fd2dbf5f5af0       kube-vip-ha-805293
	0e9fbbe2017da       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   6fc84ff2f4f9e       kube-controller-manager-ha-805293
	9b8d5baa6998a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   73733467afdd9       kube-scheduler-ha-805293
	219dff1c43cd4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   bff718c807eb7       etcd-ha-805293
	994c927aa147a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   ec25e9867db7c       kube-apiserver-ha-805293
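	The table above shows every control-plane component on the primary (etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kube-vip) plus the CNI, kube-proxy, CoreDNS and the busybox test pod still Running with restart count 0. A minimal sketch for reproducing this listing directly on the node, assuming the minikube profile is named ha-805293 and that crictl is present in the VM:
	
	    minikube -p ha-805293 ssh -- sudo crictl ps -a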
	
	
	==> coredns [8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b] <==
	[INFO] 10.244.0.4:54656 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002122445s
	[INFO] 10.244.1.2:43325 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000298961s
	[INFO] 10.244.1.2:50368 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000261008s
	[INFO] 10.244.1.2:34858 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000270623s
	[INFO] 10.244.1.2:59975 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000192447s
	[INFO] 10.244.2.2:37486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233576s
	[INFO] 10.244.2.2:40647 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002177996s
	[INFO] 10.244.2.2:39989 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000196915s
	[INFO] 10.244.2.2:42105 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001612348s
	[INFO] 10.244.2.2:42498 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180331s
	[INFO] 10.244.2.2:34873 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000262642s
	[INFO] 10.244.0.4:55282 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002337707s
	[INFO] 10.244.0.4:52721 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082276s
	[INFO] 10.244.0.4:33773 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001975703s
	[INFO] 10.244.0.4:44087 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095899s
	[INFO] 10.244.1.2:44456 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189431s
	[INFO] 10.244.1.2:52532 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112979s
	[INFO] 10.244.1.2:39707 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095712s
	[INFO] 10.244.2.2:42900 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101241s
	[INFO] 10.244.0.4:56608 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134276s
	[INFO] 10.244.1.2:35939 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00031266s
	[INFO] 10.244.1.2:48131 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000196792s
	[INFO] 10.244.2.2:40732 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000154649s
	[INFO] 10.244.0.4:51180 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000206094s
	[INFO] 10.244.0.4:36921 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000118718s
	
	
	==> coredns [beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c] <==
	[INFO] 10.244.0.4:43879 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000219235s
	[INFO] 10.244.1.2:54557 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005324153s
	[INFO] 10.244.1.2:59221 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00021778s
	[INFO] 10.244.1.2:56069 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0044481s
	[INFO] 10.244.1.2:50386 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00023413s
	[INFO] 10.244.2.2:46506 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103313s
	[INFO] 10.244.2.2:41909 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000177677s
	[INFO] 10.244.0.4:57981 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180642s
	[INFO] 10.244.0.4:42071 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100781s
	[INFO] 10.244.0.4:53066 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079995s
	[INFO] 10.244.0.4:54192 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095317s
	[INFO] 10.244.1.2:42705 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147435s
	[INFO] 10.244.2.2:42448 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014108s
	[INFO] 10.244.2.2:58687 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152745s
	[INFO] 10.244.2.2:59433 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159734s
	[INFO] 10.244.0.4:34822 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086009s
	[INFO] 10.244.0.4:46188 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067594s
	[INFO] 10.244.0.4:33829 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130532s
	[INFO] 10.244.1.2:56575 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000557946s
	[INFO] 10.244.1.2:41726 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145733s
	[INFO] 10.244.2.2:56116 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108892s
	[INFO] 10.244.2.2:58958 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000075413s
	[INFO] 10.244.2.2:42001 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077659s
	[INFO] 10.244.0.4:53905 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091303s
	[INFO] 10.244.0.4:41906 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000098967s
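	Both CoreDNS replicas above are answering queries for cluster-internal names (kubernetes.default.svc.cluster.local, host.minikube.internal) from clients on all three pod CIDRs (10.244.0.x, 10.244.1.x, 10.244.2.x). A minimal sketch for generating a comparable query, assuming the gcr.io/k8s-minikube/busybox image already used in this run and a hypothetical pod name dns-probe:
	
	    kubectl --context ha-805293 run --rm -it dns-probe \
	      --image=gcr.io/k8s-minikube/busybox --restart=Never \
	      -- nslookup kubernetes.default.svc.cluster.local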
	
	
	==> describe nodes <==
	Name:               ha-805293
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-805293
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=ha-805293
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T19_59_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 19:59:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-805293
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:06:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:03:01 +0000   Mon, 30 Sep 2024 19:59:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:03:01 +0000   Mon, 30 Sep 2024 19:59:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:03:01 +0000   Mon, 30 Sep 2024 19:59:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:03:01 +0000   Mon, 30 Sep 2024 20:00:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    ha-805293
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 866f17ca2f8945bb8c8d7336ea64bab7
	  System UUID:                866f17ca-2f89-45bb-8c8d-7336ea64bab7
	  Boot ID:                    688ba3e5-bec7-403a-8a14-d517107abdf5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-r27jf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 coredns-7c65d6cfc9-x7zjp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m14s
	  kube-system                 coredns-7c65d6cfc9-z4bkv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m14s
	  kube-system                 etcd-ha-805293                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m18s
	  kube-system                 kindnet-slhtm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m14s
	  kube-system                 kube-apiserver-ha-805293             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-controller-manager-ha-805293    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-proxy-6gnt4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-scheduler-ha-805293             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-vip-ha-805293                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m12s  kube-proxy       
	  Normal  Starting                 6m18s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m18s  kubelet          Node ha-805293 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m18s  kubelet          Node ha-805293 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m18s  kubelet          Node ha-805293 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m14s  node-controller  Node ha-805293 event: Registered Node ha-805293 in Controller
	  Normal  NodeReady                6m1s   kubelet          Node ha-805293 status is now: NodeReady
	  Normal  RegisteredNode           5m18s  node-controller  Node ha-805293 event: Registered Node ha-805293 in Controller
	  Normal  RegisteredNode           4m4s   node-controller  Node ha-805293 event: Registered Node ha-805293 in Controller
	
	
	Name:               ha-805293-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-805293-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=ha-805293
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T20_00_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:00:48 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-805293-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:03:41 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 30 Sep 2024 20:02:51 +0000   Mon, 30 Sep 2024 20:04:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 30 Sep 2024 20:02:51 +0000   Mon, 30 Sep 2024 20:04:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 30 Sep 2024 20:02:51 +0000   Mon, 30 Sep 2024 20:04:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 30 Sep 2024 20:02:51 +0000   Mon, 30 Sep 2024 20:04:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    ha-805293-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d0700264de549a1be3f1020308847ab
	  System UUID:                4d070026-4de5-49a1-be3f-1020308847ab
	  Boot ID:                    6a7fa1c9-5f0b-4080-a967-4e6a9eb2c122
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lshpm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 etcd-ha-805293-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m24s
	  kube-system                 kindnet-lfldt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m26s
	  kube-system                 kube-apiserver-ha-805293-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-controller-manager-ha-805293-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-proxy-vptrg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-scheduler-ha-805293-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-vip-ha-805293-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m22s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m26s (x8 over 5m27s)  kubelet          Node ha-805293-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m26s (x8 over 5m27s)  kubelet          Node ha-805293-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m26s (x7 over 5m27s)  kubelet          Node ha-805293-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-805293-m02 event: Registered Node ha-805293-m02 in Controller
	  Normal  RegisteredNode           5m18s                  node-controller  Node ha-805293-m02 event: Registered Node ha-805293-m02 in Controller
	  Normal  RegisteredNode           4m4s                   node-controller  Node ha-805293-m02 event: Registered Node ha-805293-m02 in Controller
	  Normal  NodeNotReady             109s                   node-controller  Node ha-805293-m02 status is now: NodeNotReady
	
	
	Name:               ha-805293-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-805293-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=ha-805293
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T20_02_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:02:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-805293-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:06:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:03:02 +0000   Mon, 30 Sep 2024 20:02:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:03:02 +0000   Mon, 30 Sep 2024 20:02:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:03:02 +0000   Mon, 30 Sep 2024 20:02:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:03:02 +0000   Mon, 30 Sep 2024 20:02:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-805293-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d290a9661d284f5abbb0966111b1ff62
	  System UUID:                d290a966-1d28-4f5a-bbb0-966111b1ff62
	  Boot ID:                    4480564e-4012-421d-8e2a-ef45c5701e0e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nfncv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 etcd-ha-805293-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m11s
	  kube-system                 kindnet-qrhb8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m13s
	  kube-system                 kube-apiserver-ha-805293-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-controller-manager-ha-805293-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-proxy-b9cpp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-scheduler-ha-805293-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-vip-ha-805293-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m13s (x8 over 4m13s)  kubelet          Node ha-805293-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m13s (x8 over 4m13s)  kubelet          Node ha-805293-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m13s (x7 over 4m13s)  kubelet          Node ha-805293-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-805293-m03 event: Registered Node ha-805293-m03 in Controller
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-805293-m03 event: Registered Node ha-805293-m03 in Controller
	  Normal  RegisteredNode           4m4s                   node-controller  Node ha-805293-m03 event: Registered Node ha-805293-m03 in Controller
	
	
	Name:               ha-805293-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-805293-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=ha-805293
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T20_03_07_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:03:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-805293-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:06:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:03:37 +0000   Mon, 30 Sep 2024 20:03:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:03:37 +0000   Mon, 30 Sep 2024 20:03:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:03:37 +0000   Mon, 30 Sep 2024 20:03:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:03:37 +0000   Mon, 30 Sep 2024 20:03:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    ha-805293-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 66e464978dbd400d9e13327c67f50978
	  System UUID:                66e46497-8dbd-400d-9e13-327c67f50978
	  Boot ID:                    e58b57f2-9a1b-47d7-b35d-6de7e20bd5ad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pk4z9       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m7s
	  kube-system                 kube-proxy-7hn94    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  3m7s (x2 over 3m8s)  kubelet          Node ha-805293-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m7s (x2 over 3m8s)  kubelet          Node ha-805293-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m7s (x2 over 3m8s)  kubelet          Node ha-805293-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m4s                 node-controller  Node ha-805293-m04 event: Registered Node ha-805293-m04 in Controller
	  Normal  RegisteredNode           3m4s                 node-controller  Node ha-805293-m04 event: Registered Node ha-805293-m04 in Controller
	  Normal  RegisteredNode           3m3s                 node-controller  Node ha-805293-m04 event: Registered Node ha-805293-m04 in Controller
	  Normal  NodeReady                2m46s                kubelet          Node ha-805293-m04 status is now: NodeReady
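	
	The node descriptions above show ha-805293-m02 carrying the node.kubernetes.io/unreachable taints and reporting NodeNotReady ("Kubelet stopped posting node status"), while ha-805293, ha-805293-m03 and ha-805293-m04 remain Ready, consistent with a stopped secondary control-plane node. A minimal sketch for reproducing this view, assuming the kubectl context carries the minikube profile name ha-805293:
	
	    kubectl --context ha-805293 get nodes -o wide
	    kubectl --context ha-805293 describe node ha-805293-m02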
	
	
	==> dmesg <==
	[Sep30 19:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051498] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038050] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.756373] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.910183] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.882465] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.789974] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.062566] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063093] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.202518] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.124623] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.268552] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +3.977529] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +4.564932] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.062130] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.342874] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.088317] kauditd_printk_skb: 79 callbacks suppressed
	[Sep30 20:00] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.197664] kauditd_printk_skb: 38 callbacks suppressed
	[ +40.392588] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c] <==
	{"level":"warn","ts":"2024-09-30T20:06:14.718524Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:14.732877Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:14.742009Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:14.742352Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:14.749586Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:14.750376Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:14.754664Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:14.760922Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:14.771074Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:14.779547Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:14.786097Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:14.789917Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:14.794159Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:14.800622Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:14.807232Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:14.815531Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:14.820094Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:14.823847Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:14.831749Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:14.839348Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:14.842411Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:14.847554Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:14.861403Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:14.865964Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:14.881759Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:06:14 up 6 min,  0 users,  load average: 0.33, 0.27, 0.13
	Linux ha-805293 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa] <==
	I0930 20:05:43.361802       1 main.go:322] Node ha-805293-m03 has CIDR [10.244.2.0/24] 
	I0930 20:05:53.361412       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:05:53.361456       1 main.go:299] handling current node
	I0930 20:05:53.361477       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:05:53.361484       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:05:53.361668       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0930 20:05:53.361697       1 main.go:322] Node ha-805293-m03 has CIDR [10.244.2.0/24] 
	I0930 20:05:53.361813       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:05:53.361841       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	I0930 20:06:03.353152       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:06:03.353232       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:06:03.353604       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0930 20:06:03.353656       1 main.go:322] Node ha-805293-m03 has CIDR [10.244.2.0/24] 
	I0930 20:06:03.353788       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:06:03.353817       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	I0930 20:06:03.353915       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:06:03.353945       1 main.go:299] handling current node
	I0930 20:06:13.352401       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:06:13.352462       1 main.go:299] handling current node
	I0930 20:06:13.352487       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:06:13.352493       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:06:13.352648       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0930 20:06:13.352669       1 main.go:322] Node ha-805293-m03 has CIDR [10.244.2.0/24] 
	I0930 20:06:13.352727       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:06:13.352744       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [994c927aa147aaacb19c3dc9b54178374731ce435295e01ceb9dbb1854a78f78] <==
	I0930 19:59:55.232483       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0930 19:59:55.241927       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.3]
	I0930 19:59:55.242751       1 controller.go:615] quota admission added evaluator for: endpoints
	I0930 19:59:55.248161       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0930 19:59:56.585015       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0930 19:59:56.606454       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0930 19:59:56.717747       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0930 20:00:00.619178       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0930 20:00:00.866886       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0930 20:02:35.103260       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54756: use of closed network connection
	E0930 20:02:35.310204       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54774: use of closed network connection
	E0930 20:02:35.528451       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54798: use of closed network connection
	E0930 20:02:35.718056       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54824: use of closed network connection
	E0930 20:02:35.905602       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54834: use of closed network connection
	E0930 20:02:36.095718       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54846: use of closed network connection
	E0930 20:02:36.292842       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54870: use of closed network connection
	E0930 20:02:36.507445       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54880: use of closed network connection
	E0930 20:02:36.711017       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54890: use of closed network connection
	E0930 20:02:37.027891       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54906: use of closed network connection
	E0930 20:02:37.211934       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54928: use of closed network connection
	E0930 20:02:37.400557       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54946: use of closed network connection
	E0930 20:02:37.592034       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54964: use of closed network connection
	E0930 20:02:37.769244       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54968: use of closed network connection
	E0930 20:02:37.945689       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54986: use of closed network connection
	W0930 20:04:05.250494       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.227 192.168.39.3]
	
	
	==> kube-controller-manager [0e9fbbe2017dac31afa6b99397b35147479d921bd1c28368d0863e7deba96963] <==
	I0930 20:03:07.394951       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-805293-m04" podCIDRs=["10.244.3.0/24"]
	I0930 20:03:07.395481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:07.396749       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:07.436135       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:07.684943       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:08.073414       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:10.185795       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-805293-m04"
	I0930 20:03:10.251142       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:10.326069       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:10.383451       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:11.395780       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:11.488119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:17.639978       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:28.022240       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:28.023330       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-805293-m04"
	I0930 20:03:28.045054       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:30.206023       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:37.957274       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:04:25.230773       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-805293-m04"
	I0930 20:04:25.230955       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m02"
	I0930 20:04:25.255656       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m02"
	I0930 20:04:25.398159       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m02"
	I0930 20:04:25.408524       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="30.658854ms"
	I0930 20:04:25.408627       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.436µs"
	I0930 20:04:30.476044       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m02"
	
	
	==> kube-proxy [cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 20:00:02.260002       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 20:00:02.292313       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.3"]
	E0930 20:00:02.293761       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 20:00:02.331058       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 20:00:02.331111       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 20:00:02.331136       1 server_linux.go:169] "Using iptables Proxier"
	I0930 20:00:02.334264       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 20:00:02.334706       1 server.go:483] "Version info" version="v1.31.1"
	I0930 20:00:02.334732       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:00:02.338075       1 config.go:199] "Starting service config controller"
	I0930 20:00:02.338115       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 20:00:02.338141       1 config.go:105] "Starting endpoint slice config controller"
	I0930 20:00:02.338146       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 20:00:02.340129       1 config.go:328] "Starting node config controller"
	I0930 20:00:02.340159       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 20:00:02.438958       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 20:00:02.439119       1 shared_informer.go:320] Caches are synced for service config
	I0930 20:00:02.440633       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463] <==
	W0930 19:59:54.471920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0930 19:59:54.472044       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.522920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0930 19:59:54.524738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.525008       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 19:59:54.525097       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0930 19:59:54.570077       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0930 19:59:54.570416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.573175       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0930 19:59:54.573222       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.611352       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0930 19:59:54.611460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.614509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0930 19:59:54.614660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.659257       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0930 19:59:54.659351       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.769876       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0930 19:59:54.770087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0930 19:59:56.900381       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0930 20:02:01.539050       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-h6pvg\": pod kube-proxy-h6pvg is already assigned to node \"ha-805293-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-h6pvg" node="ha-805293-m03"
	E0930 20:02:01.539424       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9860392c-eca6-4200-9b6e-f0a6f51b523b(kube-system/kube-proxy-h6pvg) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-h6pvg"
	E0930 20:02:01.539482       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-h6pvg\": pod kube-proxy-h6pvg is already assigned to node \"ha-805293-m03\"" pod="kube-system/kube-proxy-h6pvg"
	I0930 20:02:01.539558       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-h6pvg" node="ha-805293-m03"
	E0930 20:02:29.833811       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lshpm\": pod busybox-7dff88458-lshpm is already assigned to node \"ha-805293-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-lshpm" node="ha-805293-m02"
	E0930 20:02:29.833910       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lshpm\": pod busybox-7dff88458-lshpm is already assigned to node \"ha-805293-m02\"" pod="default/busybox-7dff88458-lshpm"
	
	
	==> kubelet <==
	Sep 30 20:04:56 ha-805293 kubelet[1307]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 20:04:56 ha-805293 kubelet[1307]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 20:04:56 ha-805293 kubelet[1307]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 20:04:56 ha-805293 kubelet[1307]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 20:04:56 ha-805293 kubelet[1307]: E0930 20:04:56.831137    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726696830908263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:04:56 ha-805293 kubelet[1307]: E0930 20:04:56.831174    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726696830908263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:06 ha-805293 kubelet[1307]: E0930 20:05:06.833436    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726706832581949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:06 ha-805293 kubelet[1307]: E0930 20:05:06.834135    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726706832581949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:16 ha-805293 kubelet[1307]: E0930 20:05:16.840697    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726716835840638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:16 ha-805293 kubelet[1307]: E0930 20:05:16.841087    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726716835840638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:26 ha-805293 kubelet[1307]: E0930 20:05:26.843795    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726726842473695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:26 ha-805293 kubelet[1307]: E0930 20:05:26.843820    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726726842473695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:36 ha-805293 kubelet[1307]: E0930 20:05:36.846940    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726736846123824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:36 ha-805293 kubelet[1307]: E0930 20:05:36.847349    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726736846123824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:46 ha-805293 kubelet[1307]: E0930 20:05:46.849818    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726746849247125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:46 ha-805293 kubelet[1307]: E0930 20:05:46.850141    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726746849247125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:56 ha-805293 kubelet[1307]: E0930 20:05:56.740673    1307 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 20:05:56 ha-805293 kubelet[1307]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 20:05:56 ha-805293 kubelet[1307]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 20:05:56 ha-805293 kubelet[1307]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 20:05:56 ha-805293 kubelet[1307]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 20:05:56 ha-805293 kubelet[1307]: E0930 20:05:56.852143    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726756851671468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:56 ha-805293 kubelet[1307]: E0930 20:05:56.852175    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726756851671468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:06:06 ha-805293 kubelet[1307]: E0930 20:06:06.854020    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726766853679089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:06:06 ha-805293 kubelet[1307]: E0930 20:06:06.854344    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726766853679089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-805293 -n ha-805293
helpers_test.go:261: (dbg) Run:  kubectl --context ha-805293 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-805293 status -v=7 --alsologtostderr: (3.981900117s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-805293 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-805293 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-805293 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-805293 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-805293 -n ha-805293
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-805293 logs -n 25: (1.364821584s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m03:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293:/home/docker/cp-test_ha-805293-m03_ha-805293.txt                       |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293 sudo cat                                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m03_ha-805293.txt                                 |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m03:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m02:/home/docker/cp-test_ha-805293-m03_ha-805293-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293-m02 sudo cat                                          | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m03_ha-805293-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m03:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04:/home/docker/cp-test_ha-805293-m03_ha-805293-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293-m04 sudo cat                                          | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m03_ha-805293-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-805293 cp testdata/cp-test.txt                                                | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3144947660/001/cp-test_ha-805293-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293:/home/docker/cp-test_ha-805293-m04_ha-805293.txt                       |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293 sudo cat                                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m04_ha-805293.txt                                 |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m02:/home/docker/cp-test_ha-805293-m04_ha-805293-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293-m02 sudo cat                                          | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m04_ha-805293-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03:/home/docker/cp-test_ha-805293-m04_ha-805293-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293-m03 sudo cat                                          | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m04_ha-805293-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-805293 node stop m02 -v=7                                                     | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-805293 node start m02 -v=7                                                    | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 19:59:16
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 19:59:16.465113   26315 out.go:345] Setting OutFile to fd 1 ...
	I0930 19:59:16.465408   26315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 19:59:16.465418   26315 out.go:358] Setting ErrFile to fd 2...
	I0930 19:59:16.465423   26315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 19:59:16.465672   26315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 19:59:16.466270   26315 out.go:352] Setting JSON to false
	I0930 19:59:16.467246   26315 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2499,"bootTime":1727723857,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 19:59:16.467349   26315 start.go:139] virtualization: kvm guest
	I0930 19:59:16.469778   26315 out.go:177] * [ha-805293] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 19:59:16.471083   26315 notify.go:220] Checking for updates...
	I0930 19:59:16.471129   26315 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 19:59:16.472574   26315 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 19:59:16.474040   26315 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 19:59:16.475378   26315 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:59:16.476781   26315 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 19:59:16.478196   26315 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 19:59:16.479555   26315 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 19:59:16.514287   26315 out.go:177] * Using the kvm2 driver based on user configuration
	I0930 19:59:16.515592   26315 start.go:297] selected driver: kvm2
	I0930 19:59:16.515604   26315 start.go:901] validating driver "kvm2" against <nil>
	I0930 19:59:16.515615   26315 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 19:59:16.516299   26315 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 19:59:16.516372   26315 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 19:59:16.531012   26315 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 19:59:16.531063   26315 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 19:59:16.531292   26315 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 19:59:16.531318   26315 cni.go:84] Creating CNI manager for ""
	I0930 19:59:16.531357   26315 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0930 19:59:16.531370   26315 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0930 19:59:16.531430   26315 start.go:340] cluster config:
	{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 19:59:16.531545   26315 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 19:59:16.533673   26315 out.go:177] * Starting "ha-805293" primary control-plane node in "ha-805293" cluster
	I0930 19:59:16.534957   26315 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 19:59:16.535009   26315 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 19:59:16.535023   26315 cache.go:56] Caching tarball of preloaded images
	I0930 19:59:16.535111   26315 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 19:59:16.535121   26315 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 19:59:16.535489   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 19:59:16.535515   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json: {Name:mk695bb0575a50d6b6d53e3d2c18bb8666421806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:16.535704   26315 start.go:360] acquireMachinesLock for ha-805293: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 19:59:16.535734   26315 start.go:364] duration metric: took 15.84µs to acquireMachinesLock for "ha-805293"
	I0930 19:59:16.535751   26315 start.go:93] Provisioning new machine with config: &{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 19:59:16.535821   26315 start.go:125] createHost starting for "" (driver="kvm2")
	I0930 19:59:16.537498   26315 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 19:59:16.537633   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:59:16.537678   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:59:16.552377   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44379
	I0930 19:59:16.552824   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:59:16.553523   26315 main.go:141] libmachine: Using API Version  1
	I0930 19:59:16.553548   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:59:16.553949   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:59:16.554153   26315 main.go:141] libmachine: (ha-805293) Calling .GetMachineName
	I0930 19:59:16.554354   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:16.554484   26315 start.go:159] libmachine.API.Create for "ha-805293" (driver="kvm2")
	I0930 19:59:16.554517   26315 client.go:168] LocalClient.Create starting
	I0930 19:59:16.554565   26315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem
	I0930 19:59:16.554602   26315 main.go:141] libmachine: Decoding PEM data...
	I0930 19:59:16.554620   26315 main.go:141] libmachine: Parsing certificate...
	I0930 19:59:16.554688   26315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem
	I0930 19:59:16.554716   26315 main.go:141] libmachine: Decoding PEM data...
	I0930 19:59:16.554736   26315 main.go:141] libmachine: Parsing certificate...
	I0930 19:59:16.554758   26315 main.go:141] libmachine: Running pre-create checks...
	I0930 19:59:16.554770   26315 main.go:141] libmachine: (ha-805293) Calling .PreCreateCheck
	I0930 19:59:16.555128   26315 main.go:141] libmachine: (ha-805293) Calling .GetConfigRaw
	I0930 19:59:16.555744   26315 main.go:141] libmachine: Creating machine...
	I0930 19:59:16.555765   26315 main.go:141] libmachine: (ha-805293) Calling .Create
	I0930 19:59:16.555931   26315 main.go:141] libmachine: (ha-805293) Creating KVM machine...
	I0930 19:59:16.557277   26315 main.go:141] libmachine: (ha-805293) DBG | found existing default KVM network
	I0930 19:59:16.557963   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:16.557842   26338 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231e0}
	I0930 19:59:16.558012   26315 main.go:141] libmachine: (ha-805293) DBG | created network xml: 
	I0930 19:59:16.558024   26315 main.go:141] libmachine: (ha-805293) DBG | <network>
	I0930 19:59:16.558032   26315 main.go:141] libmachine: (ha-805293) DBG |   <name>mk-ha-805293</name>
	I0930 19:59:16.558037   26315 main.go:141] libmachine: (ha-805293) DBG |   <dns enable='no'/>
	I0930 19:59:16.558041   26315 main.go:141] libmachine: (ha-805293) DBG |   
	I0930 19:59:16.558052   26315 main.go:141] libmachine: (ha-805293) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0930 19:59:16.558057   26315 main.go:141] libmachine: (ha-805293) DBG |     <dhcp>
	I0930 19:59:16.558063   26315 main.go:141] libmachine: (ha-805293) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0930 19:59:16.558073   26315 main.go:141] libmachine: (ha-805293) DBG |     </dhcp>
	I0930 19:59:16.558087   26315 main.go:141] libmachine: (ha-805293) DBG |   </ip>
	I0930 19:59:16.558111   26315 main.go:141] libmachine: (ha-805293) DBG |   
	I0930 19:59:16.558145   26315 main.go:141] libmachine: (ha-805293) DBG | </network>
	I0930 19:59:16.558156   26315 main.go:141] libmachine: (ha-805293) DBG | 
	I0930 19:59:16.563671   26315 main.go:141] libmachine: (ha-805293) DBG | trying to create private KVM network mk-ha-805293 192.168.39.0/24...
	I0930 19:59:16.628841   26315 main.go:141] libmachine: (ha-805293) DBG | private KVM network mk-ha-805293 192.168.39.0/24 created
	I0930 19:59:16.628870   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:16.628827   26338 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:59:16.628892   26315 main.go:141] libmachine: (ha-805293) Setting up store path in /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293 ...
	I0930 19:59:16.628909   26315 main.go:141] libmachine: (ha-805293) Building disk image from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 19:59:16.629064   26315 main.go:141] libmachine: (ha-805293) Downloading /home/jenkins/minikube-integration/19736-7672/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 19:59:16.879937   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:16.879799   26338 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa...
	I0930 19:59:17.039302   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:17.039101   26338 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/ha-805293.rawdisk...
	I0930 19:59:17.039341   26315 main.go:141] libmachine: (ha-805293) DBG | Writing magic tar header
	I0930 19:59:17.039359   26315 main.go:141] libmachine: (ha-805293) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293 (perms=drwx------)
	I0930 19:59:17.039382   26315 main.go:141] libmachine: (ha-805293) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines (perms=drwxr-xr-x)
	I0930 19:59:17.039389   26315 main.go:141] libmachine: (ha-805293) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube (perms=drwxr-xr-x)
	I0930 19:59:17.039398   26315 main.go:141] libmachine: (ha-805293) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672 (perms=drwxrwxr-x)
	I0930 19:59:17.039404   26315 main.go:141] libmachine: (ha-805293) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 19:59:17.039415   26315 main.go:141] libmachine: (ha-805293) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 19:59:17.039420   26315 main.go:141] libmachine: (ha-805293) Creating domain...
	I0930 19:59:17.039450   26315 main.go:141] libmachine: (ha-805293) DBG | Writing SSH key tar header
	I0930 19:59:17.039468   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:17.039218   26338 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293 ...
	I0930 19:59:17.039478   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293
	I0930 19:59:17.039485   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines
	I0930 19:59:17.039546   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:59:17.039570   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672
	I0930 19:59:17.039613   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 19:59:17.039667   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home/jenkins
	I0930 19:59:17.039707   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home
	I0930 19:59:17.039720   26315 main.go:141] libmachine: (ha-805293) DBG | Skipping /home - not owner
	I0930 19:59:17.040595   26315 main.go:141] libmachine: (ha-805293) define libvirt domain using xml: 
	I0930 19:59:17.040607   26315 main.go:141] libmachine: (ha-805293) <domain type='kvm'>
	I0930 19:59:17.040612   26315 main.go:141] libmachine: (ha-805293)   <name>ha-805293</name>
	I0930 19:59:17.040617   26315 main.go:141] libmachine: (ha-805293)   <memory unit='MiB'>2200</memory>
	I0930 19:59:17.040621   26315 main.go:141] libmachine: (ha-805293)   <vcpu>2</vcpu>
	I0930 19:59:17.040625   26315 main.go:141] libmachine: (ha-805293)   <features>
	I0930 19:59:17.040630   26315 main.go:141] libmachine: (ha-805293)     <acpi/>
	I0930 19:59:17.040633   26315 main.go:141] libmachine: (ha-805293)     <apic/>
	I0930 19:59:17.040638   26315 main.go:141] libmachine: (ha-805293)     <pae/>
	I0930 19:59:17.040642   26315 main.go:141] libmachine: (ha-805293)     
	I0930 19:59:17.040649   26315 main.go:141] libmachine: (ha-805293)   </features>
	I0930 19:59:17.040654   26315 main.go:141] libmachine: (ha-805293)   <cpu mode='host-passthrough'>
	I0930 19:59:17.040661   26315 main.go:141] libmachine: (ha-805293)   
	I0930 19:59:17.040664   26315 main.go:141] libmachine: (ha-805293)   </cpu>
	I0930 19:59:17.040671   26315 main.go:141] libmachine: (ha-805293)   <os>
	I0930 19:59:17.040675   26315 main.go:141] libmachine: (ha-805293)     <type>hvm</type>
	I0930 19:59:17.040680   26315 main.go:141] libmachine: (ha-805293)     <boot dev='cdrom'/>
	I0930 19:59:17.040692   26315 main.go:141] libmachine: (ha-805293)     <boot dev='hd'/>
	I0930 19:59:17.040703   26315 main.go:141] libmachine: (ha-805293)     <bootmenu enable='no'/>
	I0930 19:59:17.040714   26315 main.go:141] libmachine: (ha-805293)   </os>
	I0930 19:59:17.040724   26315 main.go:141] libmachine: (ha-805293)   <devices>
	I0930 19:59:17.040732   26315 main.go:141] libmachine: (ha-805293)     <disk type='file' device='cdrom'>
	I0930 19:59:17.040739   26315 main.go:141] libmachine: (ha-805293)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/boot2docker.iso'/>
	I0930 19:59:17.040757   26315 main.go:141] libmachine: (ha-805293)       <target dev='hdc' bus='scsi'/>
	I0930 19:59:17.040766   26315 main.go:141] libmachine: (ha-805293)       <readonly/>
	I0930 19:59:17.040770   26315 main.go:141] libmachine: (ha-805293)     </disk>
	I0930 19:59:17.040776   26315 main.go:141] libmachine: (ha-805293)     <disk type='file' device='disk'>
	I0930 19:59:17.040783   26315 main.go:141] libmachine: (ha-805293)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 19:59:17.040791   26315 main.go:141] libmachine: (ha-805293)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/ha-805293.rawdisk'/>
	I0930 19:59:17.040797   26315 main.go:141] libmachine: (ha-805293)       <target dev='hda' bus='virtio'/>
	I0930 19:59:17.040802   26315 main.go:141] libmachine: (ha-805293)     </disk>
	I0930 19:59:17.040808   26315 main.go:141] libmachine: (ha-805293)     <interface type='network'>
	I0930 19:59:17.040814   26315 main.go:141] libmachine: (ha-805293)       <source network='mk-ha-805293'/>
	I0930 19:59:17.040822   26315 main.go:141] libmachine: (ha-805293)       <model type='virtio'/>
	I0930 19:59:17.040829   26315 main.go:141] libmachine: (ha-805293)     </interface>
	I0930 19:59:17.040833   26315 main.go:141] libmachine: (ha-805293)     <interface type='network'>
	I0930 19:59:17.040840   26315 main.go:141] libmachine: (ha-805293)       <source network='default'/>
	I0930 19:59:17.040844   26315 main.go:141] libmachine: (ha-805293)       <model type='virtio'/>
	I0930 19:59:17.040850   26315 main.go:141] libmachine: (ha-805293)     </interface>
	I0930 19:59:17.040855   26315 main.go:141] libmachine: (ha-805293)     <serial type='pty'>
	I0930 19:59:17.040860   26315 main.go:141] libmachine: (ha-805293)       <target port='0'/>
	I0930 19:59:17.040865   26315 main.go:141] libmachine: (ha-805293)     </serial>
	I0930 19:59:17.040871   26315 main.go:141] libmachine: (ha-805293)     <console type='pty'>
	I0930 19:59:17.040877   26315 main.go:141] libmachine: (ha-805293)       <target type='serial' port='0'/>
	I0930 19:59:17.040882   26315 main.go:141] libmachine: (ha-805293)     </console>
	I0930 19:59:17.040888   26315 main.go:141] libmachine: (ha-805293)     <rng model='virtio'>
	I0930 19:59:17.040894   26315 main.go:141] libmachine: (ha-805293)       <backend model='random'>/dev/random</backend>
	I0930 19:59:17.040901   26315 main.go:141] libmachine: (ha-805293)     </rng>
	I0930 19:59:17.040907   26315 main.go:141] libmachine: (ha-805293)     
	I0930 19:59:17.040917   26315 main.go:141] libmachine: (ha-805293)     
	I0930 19:59:17.040925   26315 main.go:141] libmachine: (ha-805293)   </devices>
	I0930 19:59:17.040928   26315 main.go:141] libmachine: (ha-805293) </domain>
	I0930 19:59:17.040937   26315 main.go:141] libmachine: (ha-805293) 
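The domain XML printed above is then defined and booted through libvirt. A minimal sketch of that define-and-start flow, assuming the libvirt.org/go/libvirt bindings and a local domain.xml file containing the definition; illustrative only, not the driver's exact code.

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// domain.xml would hold the <domain type='kvm'> definition from the log.
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatalf("read xml: %v", err)
	}

	// Define the persistent domain, then start it (the "Creating domain..." step).
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start domain: %v", err)
	}
	log.Println("domain defined and started; now waiting for an IP lease")
}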
	I0930 19:59:17.045576   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:16:26:46 in network default
	I0930 19:59:17.046091   26315 main.go:141] libmachine: (ha-805293) Ensuring networks are active...
	I0930 19:59:17.046110   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:17.046918   26315 main.go:141] libmachine: (ha-805293) Ensuring network default is active
	I0930 19:59:17.047170   26315 main.go:141] libmachine: (ha-805293) Ensuring network mk-ha-805293 is active
	I0930 19:59:17.048069   26315 main.go:141] libmachine: (ha-805293) Getting domain xml...
	I0930 19:59:17.048925   26315 main.go:141] libmachine: (ha-805293) Creating domain...
	I0930 19:59:18.262935   26315 main.go:141] libmachine: (ha-805293) Waiting to get IP...
	I0930 19:59:18.263713   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:18.264097   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:18.264150   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:18.264077   26338 retry.go:31] will retry after 272.130038ms: waiting for machine to come up
	I0930 19:59:18.537624   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:18.538207   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:18.538236   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:18.538152   26338 retry.go:31] will retry after 384.976128ms: waiting for machine to come up
	I0930 19:59:18.924813   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:18.925224   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:18.925244   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:18.925193   26338 retry.go:31] will retry after 439.036671ms: waiting for machine to come up
	I0930 19:59:19.365792   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:19.366237   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:19.366268   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:19.366201   26338 retry.go:31] will retry after 523.251996ms: waiting for machine to come up
	I0930 19:59:19.890884   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:19.891377   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:19.891399   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:19.891276   26338 retry.go:31] will retry after 505.591634ms: waiting for machine to come up
	I0930 19:59:20.398064   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:20.398495   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:20.398518   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:20.398434   26338 retry.go:31] will retry after 840.243199ms: waiting for machine to come up
	I0930 19:59:21.240528   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:21.240974   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:21.241011   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:21.240928   26338 retry.go:31] will retry after 727.422374ms: waiting for machine to come up
	I0930 19:59:21.970399   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:21.970994   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:21.971027   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:21.970937   26338 retry.go:31] will retry after 1.250553906s: waiting for machine to come up
	I0930 19:59:23.223257   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:23.223588   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:23.223617   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:23.223524   26338 retry.go:31] will retry after 1.498180761s: waiting for machine to come up
	I0930 19:59:24.724089   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:24.724526   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:24.724547   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:24.724490   26338 retry.go:31] will retry after 1.710980244s: waiting for machine to come up
	I0930 19:59:26.437365   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:26.437733   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:26.437791   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:26.437707   26338 retry.go:31] will retry after 1.996131833s: waiting for machine to come up
	I0930 19:59:28.435394   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:28.435899   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:28.435920   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:28.435854   26338 retry.go:31] will retry after 2.313700889s: waiting for machine to come up
	I0930 19:59:30.752853   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:30.753113   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:30.753140   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:30.753096   26338 retry.go:31] will retry after 2.892875975s: waiting for machine to come up
	I0930 19:59:33.648697   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:33.649006   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:33.649067   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:33.648958   26338 retry.go:31] will retry after 4.162794884s: waiting for machine to come up
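Illustrative Go sketch of the "will retry after ..." pattern in the retry loop above: poll a condition with a growing, jittered delay until it succeeds or the attempts run out. Not minikube's actual retry helper, just the same idea.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func pollWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		// Grow the delay and add jitter, like the increasing waits in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return errors.New("machine did not come up in time")
}

func main() {
	start := time.Now()
	// Pretend the "machine" gets its IP two seconds after we start polling.
	err := pollWithBackoff(10, 250*time.Millisecond, func() error {
		if time.Since(start) < 2*time.Second {
			return errors.New("no IP address yet")
		}
		return nil
	})
	fmt.Println("result:", err)
}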
	I0930 19:59:37.813324   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:37.813940   26315 main.go:141] libmachine: (ha-805293) Found IP for machine: 192.168.39.3
	I0930 19:59:37.813967   26315 main.go:141] libmachine: (ha-805293) Reserving static IP address...
	I0930 19:59:37.813980   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has current primary IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:37.814363   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find host DHCP lease matching {name: "ha-805293", mac: "52:54:00:a8:b8:c7", ip: "192.168.39.3"} in network mk-ha-805293
	I0930 19:59:37.894677   26315 main.go:141] libmachine: (ha-805293) DBG | Getting to WaitForSSH function...
	I0930 19:59:37.894706   26315 main.go:141] libmachine: (ha-805293) Reserved static IP address: 192.168.39.3
	I0930 19:59:37.894719   26315 main.go:141] libmachine: (ha-805293) Waiting for SSH to be available...
	I0930 19:59:37.897595   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:37.897922   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:37.897956   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:37.898087   26315 main.go:141] libmachine: (ha-805293) DBG | Using SSH client type: external
	I0930 19:59:37.898106   26315 main.go:141] libmachine: (ha-805293) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa (-rw-------)
	I0930 19:59:37.898139   26315 main.go:141] libmachine: (ha-805293) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 19:59:37.898155   26315 main.go:141] libmachine: (ha-805293) DBG | About to run SSH command:
	I0930 19:59:37.898169   26315 main.go:141] libmachine: (ha-805293) DBG | exit 0
	I0930 19:59:38.031893   26315 main.go:141] libmachine: (ha-805293) DBG | SSH cmd err, output: <nil>: 
	I0930 19:59:38.032180   26315 main.go:141] libmachine: (ha-805293) KVM machine creation complete!
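Standalone sketch of the external-ssh liveness probe above: run `exit 0` through the ssh client and treat a zero exit status as "SSH is available". The key path below is a placeholder; the options mirror the ones in the log.

package main

import (
	"log"
	"os/exec"
)

func sshAlive(keyPath, addr string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@"+addr,
		"exit 0",
	)
	return cmd.Run() == nil
}

func main() {
	if sshAlive("/path/to/machines/ha-805293/id_rsa", "192.168.39.3") {
		log.Println("SSH is available")
	} else {
		log.Println("SSH not ready yet")
	}
}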
	I0930 19:59:38.032650   26315 main.go:141] libmachine: (ha-805293) Calling .GetConfigRaw
	I0930 19:59:38.033332   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:38.033535   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:38.033703   26315 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 19:59:38.033722   26315 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 19:59:38.035148   26315 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 19:59:38.035166   26315 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 19:59:38.035171   26315 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 19:59:38.035176   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.037430   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.037779   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.037807   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.037886   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:38.038058   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.038172   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.038292   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:38.038466   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 19:59:38.038732   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 19:59:38.038742   26315 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 19:59:38.150707   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 19:59:38.150736   26315 main.go:141] libmachine: Detecting the provisioner...
	I0930 19:59:38.150744   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.153577   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.153985   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.154015   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.154165   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:38.154420   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.154616   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.154796   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:38.154961   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 19:59:38.155144   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 19:59:38.155155   26315 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 19:59:38.268071   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 19:59:38.268223   26315 main.go:141] libmachine: found compatible host: buildroot
	I0930 19:59:38.268235   26315 main.go:141] libmachine: Provisioning with buildroot...
	I0930 19:59:38.268248   26315 main.go:141] libmachine: (ha-805293) Calling .GetMachineName
	I0930 19:59:38.268485   26315 buildroot.go:166] provisioning hostname "ha-805293"
	I0930 19:59:38.268519   26315 main.go:141] libmachine: (ha-805293) Calling .GetMachineName
	I0930 19:59:38.268699   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.271029   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.271351   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.271376   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.271551   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:38.271727   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.271905   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.272048   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:38.272215   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 19:59:38.272420   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 19:59:38.272431   26315 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-805293 && echo "ha-805293" | sudo tee /etc/hostname
	I0930 19:59:38.397989   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-805293
	
	I0930 19:59:38.398019   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.401388   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.401792   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.401818   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.402043   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:38.402262   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.402446   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.402640   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:38.402835   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 19:59:38.403014   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 19:59:38.403030   26315 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-805293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-805293/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-805293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 19:59:38.523981   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 19:59:38.524025   26315 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 19:59:38.524082   26315 buildroot.go:174] setting up certificates
	I0930 19:59:38.524097   26315 provision.go:84] configureAuth start
	I0930 19:59:38.524111   26315 main.go:141] libmachine: (ha-805293) Calling .GetMachineName
	I0930 19:59:38.524383   26315 main.go:141] libmachine: (ha-805293) Calling .GetIP
	I0930 19:59:38.527277   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.527630   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.527658   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.527836   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.530619   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.530940   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.530964   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.531100   26315 provision.go:143] copyHostCerts
	I0930 19:59:38.531123   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 19:59:38.531167   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 19:59:38.531177   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 19:59:38.531239   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 19:59:38.531347   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 19:59:38.531367   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 19:59:38.531371   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 19:59:38.531397   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 19:59:38.531451   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 19:59:38.531467   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 19:59:38.531473   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 19:59:38.531511   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 19:59:38.531604   26315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.ha-805293 san=[127.0.0.1 192.168.39.3 ha-805293 localhost minikube]
	I0930 19:59:38.676763   26315 provision.go:177] copyRemoteCerts
	I0930 19:59:38.676824   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 19:59:38.676847   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.679571   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.680006   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.680032   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.680205   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:38.680392   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.680556   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:38.680720   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 19:59:38.765532   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 19:59:38.765609   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 19:59:38.789748   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 19:59:38.789818   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0930 19:59:38.811783   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 19:59:38.811868   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 19:59:38.834125   26315 provision.go:87] duration metric: took 310.01212ms to configureAuth
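The server certificate generated during configureAuth carries both IP and DNS SANs (127.0.0.1, 192.168.39.3, ha-805293, localhost, minikube). A rough sketch of issuing such a certificate with Go's crypto/x509; it self-signs for brevity, whereas minikube signs with its own CA key, so this is illustrative only.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-805293"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision.go line above.
		DNSNames:    []string{"ha-805293", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.3")},
	}

	// Self-signed for illustration; minikube uses its CA cert/key as the parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	if err := os.WriteFile("server.pem", pemBytes, 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote server.pem")
}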
	I0930 19:59:38.834160   26315 buildroot.go:189] setting minikube options for container-runtime
	I0930 19:59:38.834431   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 19:59:38.834524   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.837303   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.837631   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.837775   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.838052   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:38.838232   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.838399   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.838530   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:38.838676   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 19:59:38.838897   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 19:59:38.838918   26315 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 19:59:39.069352   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 19:59:39.069381   26315 main.go:141] libmachine: Checking connection to Docker...
	I0930 19:59:39.069395   26315 main.go:141] libmachine: (ha-805293) Calling .GetURL
	I0930 19:59:39.070641   26315 main.go:141] libmachine: (ha-805293) DBG | Using libvirt version 6000000
	I0930 19:59:39.073164   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.073482   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.073521   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.073664   26315 main.go:141] libmachine: Docker is up and running!
	I0930 19:59:39.073675   26315 main.go:141] libmachine: Reticulating splines...
	I0930 19:59:39.073688   26315 client.go:171] duration metric: took 22.519163927s to LocalClient.Create
	I0930 19:59:39.073710   26315 start.go:167] duration metric: took 22.519226404s to libmachine.API.Create "ha-805293"
	I0930 19:59:39.073725   26315 start.go:293] postStartSetup for "ha-805293" (driver="kvm2")
	I0930 19:59:39.073739   26315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 19:59:39.073759   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:39.073979   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 19:59:39.074068   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:39.076481   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.076820   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.076872   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.076969   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:39.077131   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:39.077256   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:39.077345   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 19:59:39.162144   26315 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 19:59:39.166524   26315 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 19:59:39.166551   26315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 19:59:39.166625   26315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 19:59:39.166691   26315 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 19:59:39.166701   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /etc/ssl/certs/148752.pem
	I0930 19:59:39.166826   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 19:59:39.175862   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 19:59:39.198495   26315 start.go:296] duration metric: took 124.748363ms for postStartSetup
	I0930 19:59:39.198552   26315 main.go:141] libmachine: (ha-805293) Calling .GetConfigRaw
	I0930 19:59:39.199175   26315 main.go:141] libmachine: (ha-805293) Calling .GetIP
	I0930 19:59:39.202045   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.202447   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.202472   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.202702   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 19:59:39.202915   26315 start.go:128] duration metric: took 22.667085053s to createHost
	I0930 19:59:39.202950   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:39.205157   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.205495   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.205516   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.205668   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:39.205846   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:39.205981   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:39.206111   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:39.206270   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 19:59:39.206542   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 19:59:39.206565   26315 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 19:59:39.320050   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727726379.295271539
	
	I0930 19:59:39.320076   26315 fix.go:216] guest clock: 1727726379.295271539
	I0930 19:59:39.320086   26315 fix.go:229] Guest: 2024-09-30 19:59:39.295271539 +0000 UTC Remote: 2024-09-30 19:59:39.202937168 +0000 UTC m=+22.774027114 (delta=92.334371ms)
	I0930 19:59:39.320118   26315 fix.go:200] guest clock delta is within tolerance: 92.334371ms
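Sketch of the guest-clock check above: parse the guest's `date +%s.%N` output, compare it with the host-side timestamp, and accept the machine if the delta is within a tolerance. The timestamps below are taken from this log; the tolerance value is an assumption for illustration.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpochNanos turns "1727726379.295271539" into a time.Time.
func parseEpochNanos(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	secs, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nanos int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/truncate to nanoseconds
		if nanos, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(secs, nanos), nil
}

func main() {
	const tolerance = time.Second // assumed tolerance, for illustration

	guest, err := parseEpochNanos("1727726379.295271539")
	if err != nil {
		panic(err)
	}
	remote := time.Unix(1727726379, 202937168)

	delta := guest.Sub(remote) // 92.334371ms for the values above
	if delta > -tolerance && delta < tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}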
	I0930 19:59:39.320128   26315 start.go:83] releasing machines lock for "ha-805293", held for 22.784384982s
	I0930 19:59:39.320156   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:39.320464   26315 main.go:141] libmachine: (ha-805293) Calling .GetIP
	I0930 19:59:39.323340   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.323749   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.323763   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.323980   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:39.324511   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:39.324710   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:39.324873   26315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 19:59:39.324922   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:39.324933   26315 ssh_runner.go:195] Run: cat /version.json
	I0930 19:59:39.324953   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:39.327479   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.327790   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.327833   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.327954   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.327975   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:39.328205   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:39.328371   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.328394   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.328435   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:39.328560   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:39.328620   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 19:59:39.328752   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:39.328910   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:39.329053   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 19:59:39.449869   26315 ssh_runner.go:195] Run: systemctl --version
	I0930 19:59:39.457140   26315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 19:59:39.620534   26315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 19:59:39.626812   26315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 19:59:39.626884   26315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 19:59:39.643150   26315 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 19:59:39.643182   26315 start.go:495] detecting cgroup driver to use...
	I0930 19:59:39.643259   26315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 19:59:39.659582   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 19:59:39.673481   26315 docker.go:217] disabling cri-docker service (if available) ...
	I0930 19:59:39.673546   26315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 19:59:39.687166   26315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 19:59:39.700766   26315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 19:59:39.817845   26315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 19:59:39.989160   26315 docker.go:233] disabling docker service ...
	I0930 19:59:39.989251   26315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 19:59:40.003138   26315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 19:59:40.016004   26315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 19:59:40.149065   26315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 19:59:40.264254   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 19:59:40.278167   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 19:59:40.296364   26315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 19:59:40.296421   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.306661   26315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 19:59:40.306731   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.317138   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.327466   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.337951   26315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 19:59:40.348585   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.358684   26315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.375315   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.385587   26315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 19:59:40.394996   26315 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 19:59:40.395092   26315 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 19:59:40.408121   26315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 19:59:40.417783   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 19:59:40.532464   26315 ssh_runner.go:195] Run: sudo systemctl restart crio
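Rough local sketch of the CRI-O drop-in edits performed above (pause image, cgroupfs cgroup manager, conmon in the pod cgroup), using Go's regexp in place of the remote sed calls; the path and exact rewrites are illustrative.

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	conf := string(data)

	// Point CRI-O at the pause image used in the log.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Drop any existing conmon_cgroup line, then set cgroupfs + pod cgroup.
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		log.Fatal(err)
	}
	// CRI-O still needs a restart (systemctl restart crio) to pick this up.
}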
	I0930 19:59:40.627203   26315 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 19:59:40.627277   26315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 19:59:40.632142   26315 start.go:563] Will wait 60s for crictl version
	I0930 19:59:40.632198   26315 ssh_runner.go:195] Run: which crictl
	I0930 19:59:40.635892   26315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 19:59:40.673372   26315 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 19:59:40.673453   26315 ssh_runner.go:195] Run: crio --version
	I0930 19:59:40.701810   26315 ssh_runner.go:195] Run: crio --version
	I0930 19:59:40.733603   26315 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 19:59:40.734810   26315 main.go:141] libmachine: (ha-805293) Calling .GetIP
	I0930 19:59:40.737789   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:40.738162   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:40.738188   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:40.738414   26315 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 19:59:40.742812   26315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 19:59:40.755762   26315 kubeadm.go:883] updating cluster {Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl

usterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 19:59:40.755880   26315 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 19:59:40.755941   26315 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 19:59:40.795843   26315 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 19:59:40.795919   26315 ssh_runner.go:195] Run: which lz4
	I0930 19:59:40.799847   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0930 19:59:40.799948   26315 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 19:59:40.803954   26315 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 19:59:40.803978   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 19:59:42.086885   26315 crio.go:462] duration metric: took 1.286971524s to copy over tarball
	I0930 19:59:42.086956   26315 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 19:59:44.140911   26315 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.053919148s)
	I0930 19:59:44.140946   26315 crio.go:469] duration metric: took 2.054033393s to extract the tarball
	I0930 19:59:44.140956   26315 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 19:59:44.176934   26315 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 19:59:44.223432   26315 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 19:59:44.223453   26315 cache_images.go:84] Images are preloaded, skipping loading
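Rough sketch of the preload step above: unpack the lz4-compressed image tarball under /var and remove it afterwards. Shown here as local exec calls; minikube drives the equivalent commands over SSH inside the guest.

package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
}

func main() {
	const tarball = "/preloaded.tar.lz4" // copied into the guest via scp in the log
	run("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	run("sudo", "rm", "-f", tarball)
	// `sudo crictl images --output json` should now list the preloaded images.
}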
	I0930 19:59:44.223463   26315 kubeadm.go:934] updating node { 192.168.39.3 8443 v1.31.1 crio true true} ...
	I0930 19:59:44.223618   26315 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-805293 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 19:59:44.223687   26315 ssh_runner.go:195] Run: crio config
	I0930 19:59:44.267892   26315 cni.go:84] Creating CNI manager for ""
	I0930 19:59:44.267913   26315 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0930 19:59:44.267927   26315 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 19:59:44.267969   26315 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-805293 NodeName:ha-805293 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 19:59:44.268143   26315 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-805293"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
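The kubeadm options struct logged at 19:59:44.267969 is rendered into the YAML documents above before being shipped to the node as /var/tmp/minikube/kubeadm.yaml.new. A minimal text/template sketch of that kind of rendering, using values visible in the log; the template here is illustrative and far smaller than minikube's real one:

    package main

    import (
        "os"
        "text/template"
    )

    // opts holds a few of the kubeadm options shown in the log line above.
    type opts struct {
        APIServerPort     int
        KubernetesVersion string
        PodSubnet         string
        ServiceCIDR       string
    }

    const clusterConfig = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      dnsDomain: cluster.local
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
        tmpl := template.Must(template.New("cc").Parse(clusterConfig))
        // Values taken from the kubeadm options logged above.
        _ = tmpl.Execute(os.Stdout, opts{
            APIServerPort:     8443,
            KubernetesVersion: "v1.31.1",
            PodSubnet:         "10.244.0.0/16",
            ServiceCIDR:       "10.96.0.0/12",
        })
    }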
	
	I0930 19:59:44.268174   26315 kube-vip.go:115] generating kube-vip config ...
	I0930 19:59:44.268226   26315 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 19:59:44.290057   26315 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 19:59:44.290186   26315 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
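The static pod above runs kube-vip with leader election (vip_leaderelection / plndr-cp-lock), so 192.168.39.254 floats to whichever control-plane node currently holds the lease, and lb_enable / lb_port front the API server on 8443. A minimal check that the VIP actually answers on the API port (a sketch, not part of minikube):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // VIP and port come from the kube-vip config above.
        conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
        if err != nil {
            fmt.Println("VIP not reachable yet:", err)
            return
        }
        defer conn.Close()
        fmt.Println("kube-vip is answering on", conn.RemoteAddr())
    }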
	I0930 19:59:44.290252   26315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 19:59:44.300619   26315 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 19:59:44.300694   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0930 19:59:44.312702   26315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0930 19:59:44.329980   26315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 19:59:44.347106   26315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0930 19:59:44.363429   26315 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0930 19:59:44.379706   26315 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 19:59:44.383786   26315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 19:59:44.396392   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 19:59:44.511834   26315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 19:59:44.528890   26315 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293 for IP: 192.168.39.3
	I0930 19:59:44.528918   26315 certs.go:194] generating shared ca certs ...
	I0930 19:59:44.528990   26315 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:44.529203   26315 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 19:59:44.529261   26315 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 19:59:44.529273   26315 certs.go:256] generating profile certs ...
	I0930 19:59:44.529338   26315 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key
	I0930 19:59:44.529377   26315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.crt with IP's: []
	I0930 19:59:44.693203   26315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.crt ...
	I0930 19:59:44.693232   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.crt: {Name:mk4ee04dd06bd91d73f7f1298e33968b422b097c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:44.693403   26315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key ...
	I0930 19:59:44.693413   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key: {Name:mk2b8ad6c09983ddb0203e6dca1df4008d2fe717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:44.693487   26315 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.1b433d78
	I0930 19:59:44.693501   26315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.1b433d78 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.3 192.168.39.254]
	I0930 19:59:44.767682   26315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.1b433d78 ...
	I0930 19:59:44.767709   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.1b433d78: {Name:mkf1b16d36ab45268d051f89cfe928869656e760 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:44.767864   26315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.1b433d78 ...
	I0930 19:59:44.767875   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.1b433d78: {Name:mk53eca62135b4c1b261b7c937012d89f293e976 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:44.767944   26315 certs.go:381] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.1b433d78 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt
	I0930 19:59:44.768026   26315 certs.go:385] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.1b433d78 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key
	I0930 19:59:44.768082   26315 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key
	I0930 19:59:44.768096   26315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt with IP's: []
	I0930 19:59:45.223535   26315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt ...
	I0930 19:59:45.223567   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt: {Name:mke738cc3ccc573243158c6f5e5f022828f32c28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:45.223723   26315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key ...
	I0930 19:59:45.223733   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key: {Name:mkbfe8ac8fc7a409b1152c27d19ceb3cdc436834 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
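The apiserver certificate generated at 19:59:44.693501 is signed for the service IP (10.96.0.1), localhost, 10.0.0.1, the node IP (192.168.39.3) and the HA VIP (192.168.39.254). A quick way to confirm which SANs ended up in the issued cert, using the profile path from the log (a standard-library sketch, not minikube code):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Path taken from the log; adjust for your own profile.
        data, err := os.ReadFile("/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        fmt.Println("DNS SANs:", cert.DNSNames)
        fmt.Println("IP SANs: ", cert.IPAddresses) // expect 10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.3 192.168.39.254
    }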
	I0930 19:59:45.223814   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 19:59:45.223831   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 19:59:45.223844   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 19:59:45.223854   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 19:59:45.223865   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 19:59:45.223889   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 19:59:45.223908   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 19:59:45.223920   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 19:59:45.223964   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 19:59:45.224006   26315 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 19:59:45.224013   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 19:59:45.224036   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 19:59:45.224057   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 19:59:45.224083   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 19:59:45.224119   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 19:59:45.224143   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem -> /usr/share/ca-certificates/14875.pem
	I0930 19:59:45.224156   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /usr/share/ca-certificates/148752.pem
	I0930 19:59:45.224168   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:59:45.224809   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 19:59:45.251773   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 19:59:45.283221   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 19:59:45.307169   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 19:59:45.340795   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0930 19:59:45.364921   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 19:59:45.388786   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 19:59:45.412412   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 19:59:45.437530   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 19:59:45.462538   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 19:59:45.486247   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 19:59:45.510070   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 19:59:45.527040   26315 ssh_runner.go:195] Run: openssl version
	I0930 19:59:45.532953   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 19:59:45.544314   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 19:59:45.548732   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 19:59:45.548808   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 19:59:45.554737   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 19:59:45.565237   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 19:59:45.576275   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 19:59:45.580833   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 19:59:45.580899   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 19:59:45.586723   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 19:59:45.597151   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 19:59:45.607829   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:59:45.612479   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:59:45.612538   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:59:45.618560   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
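Each CA certificate copied to /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and then symlinked as `<hash>.0` under /etc/ssl/certs, which is how OpenSSL-based clients discover trust anchors. A sketch of the same two steps driven from Go (the paths are the ones in the log; this mirrors the shell commands, it is not minikube's implementation):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCert hashes certPath with openssl and creates /etc/ssl/certs/<hash>.0
    // pointing at it, the same pattern the log shows for each CA file.
    func linkCert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace an existing link, like `ln -fs`
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println("link failed:", err)
        }
    }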
	I0930 19:59:45.629886   26315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 19:59:45.634469   26315 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 19:59:45.634548   26315 kubeadm.go:392] StartCluster: {Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 19:59:45.634646   26315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 19:59:45.634717   26315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 19:59:45.672608   26315 cri.go:89] found id: ""
	I0930 19:59:45.672680   26315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 19:59:45.682253   26315 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 19:59:45.695746   26315 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 19:59:45.707747   26315 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 19:59:45.707771   26315 kubeadm.go:157] found existing configuration files:
	
	I0930 19:59:45.707824   26315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 19:59:45.717218   26315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 19:59:45.717271   26315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 19:59:45.727134   26315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 19:59:45.736453   26315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 19:59:45.736514   26315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 19:59:45.746137   26315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 19:59:45.755226   26315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 19:59:45.755300   26315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 19:59:45.765188   26315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 19:59:45.774772   26315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 19:59:45.774830   26315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
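The four blocks above run the same check for each kubeconfig under /etc/kubernetes: if the file does not mention https://control-plane.minikube.internal:8443 it is treated as stale and removed before `kubeadm init` runs. Written out as a loop (a pure-Go sketch instead of the `grep`/`rm` the log shells out to):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing file or wrong endpoint: drop it so kubeadm regenerates it.
                _ = os.Remove(f)
                fmt.Println("removed stale config:", f)
            }
        }
    }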
	I0930 19:59:45.784513   26315 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 19:59:45.891942   26315 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 19:59:45.891997   26315 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 19:59:45.998241   26315 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 19:59:45.998404   26315 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 19:59:45.998552   26315 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 19:59:46.014075   26315 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 19:59:46.112806   26315 out.go:235]   - Generating certificates and keys ...
	I0930 19:59:46.112955   26315 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 19:59:46.113026   26315 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 19:59:46.210951   26315 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0930 19:59:46.354582   26315 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0930 19:59:46.555785   26315 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0930 19:59:46.646311   26315 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0930 19:59:46.770735   26315 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0930 19:59:46.770873   26315 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-805293 localhost] and IPs [192.168.39.3 127.0.0.1 ::1]
	I0930 19:59:47.044600   26315 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0930 19:59:47.044796   26315 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-805293 localhost] and IPs [192.168.39.3 127.0.0.1 ::1]
	I0930 19:59:47.135575   26315 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0930 19:59:47.309550   26315 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0930 19:59:47.407346   26315 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0930 19:59:47.407491   26315 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 19:59:47.782301   26315 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 19:59:47.938840   26315 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 19:59:48.153368   26315 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 19:59:48.373848   26315 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 19:59:48.924719   26315 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 19:59:48.925435   26315 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 19:59:48.929527   26315 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 19:59:48.931731   26315 out.go:235]   - Booting up control plane ...
	I0930 19:59:48.931901   26315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 19:59:48.931984   26315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 19:59:48.932610   26315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 19:59:48.952672   26315 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 19:59:48.959981   26315 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 19:59:48.960193   26315 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 19:59:49.095726   26315 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 19:59:49.095850   26315 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 19:59:49.596721   26315 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.116798ms
	I0930 19:59:49.596826   26315 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 19:59:55.702855   26315 kubeadm.go:310] [api-check] The API server is healthy after 6.110016436s
	I0930 19:59:55.715163   26315 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 19:59:55.739975   26315 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 19:59:56.278812   26315 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 19:59:56.279051   26315 kubeadm.go:310] [mark-control-plane] Marking the node ha-805293 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 19:59:56.293005   26315 kubeadm.go:310] [bootstrap-token] Using token: p0s0d4.yc45k5nzuh1mipkz
	I0930 19:59:56.294535   26315 out.go:235]   - Configuring RBAC rules ...
	I0930 19:59:56.294681   26315 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 19:59:56.299474   26315 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 19:59:56.308838   26315 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 19:59:56.312908   26315 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 19:59:56.320143   26315 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 19:59:56.328834   26315 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 19:59:56.351618   26315 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 19:59:56.617778   26315 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 19:59:57.116458   26315 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 19:59:57.116486   26315 kubeadm.go:310] 
	I0930 19:59:57.116560   26315 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 19:59:57.116570   26315 kubeadm.go:310] 
	I0930 19:59:57.116674   26315 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 19:59:57.116685   26315 kubeadm.go:310] 
	I0930 19:59:57.116719   26315 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 19:59:57.116823   26315 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 19:59:57.116882   26315 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 19:59:57.116886   26315 kubeadm.go:310] 
	I0930 19:59:57.116955   26315 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 19:59:57.116980   26315 kubeadm.go:310] 
	I0930 19:59:57.117053   26315 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 19:59:57.117064   26315 kubeadm.go:310] 
	I0930 19:59:57.117137   26315 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 19:59:57.117202   26315 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 19:59:57.117263   26315 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 19:59:57.117268   26315 kubeadm.go:310] 
	I0930 19:59:57.117377   26315 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 19:59:57.117490   26315 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 19:59:57.117501   26315 kubeadm.go:310] 
	I0930 19:59:57.117607   26315 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token p0s0d4.yc45k5nzuh1mipkz \
	I0930 19:59:57.117749   26315 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a \
	I0930 19:59:57.117783   26315 kubeadm.go:310] 	--control-plane 
	I0930 19:59:57.117789   26315 kubeadm.go:310] 
	I0930 19:59:57.117912   26315 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 19:59:57.117922   26315 kubeadm.go:310] 
	I0930 19:59:57.117993   26315 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token p0s0d4.yc45k5nzuh1mipkz \
	I0930 19:59:57.118080   26315 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a 
	I0930 19:59:57.119219   26315 kubeadm.go:310] W0930 19:59:45.871969     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 19:59:57.119559   26315 kubeadm.go:310] W0930 19:59:45.872918     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 19:59:57.119653   26315 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
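During init, kubeadm polled two local endpoints until they reported healthy: the kubelet at http://127.0.0.1:10248/healthz (healthy after ~501ms above) and then the API server (healthy after ~6.1s). A minimal poll of a healthz endpoint looks roughly like this (a sketch, not kubeadm's own wait logic):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthy polls url until it returns 200 OK or the deadline passes.
    func waitHealthy(url string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("kubelet is healthy")
    }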
	I0930 19:59:57.119676   26315 cni.go:84] Creating CNI manager for ""
	I0930 19:59:57.119684   26315 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0930 19:59:57.121508   26315 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0930 19:59:57.122778   26315 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0930 19:59:57.129018   26315 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0930 19:59:57.129033   26315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0930 19:59:57.148058   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0930 19:59:57.490355   26315 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 19:59:57.490415   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:59:57.490422   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-805293 minikube.k8s.io/updated_at=2024_09_30T19_59_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022 minikube.k8s.io/name=ha-805293 minikube.k8s.io/primary=true
	I0930 19:59:57.530433   26315 ops.go:34] apiserver oom_adj: -16
	I0930 19:59:57.632942   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:59:58.133232   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:59:58.633968   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:59:59.133876   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:59:59.633715   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 20:00:00.134062   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 20:00:00.633798   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 20:00:01.133378   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 20:00:01.219465   26315 kubeadm.go:1113] duration metric: took 3.729111543s to wait for elevateKubeSystemPrivileges
	I0930 20:00:01.219521   26315 kubeadm.go:394] duration metric: took 15.584976844s to StartCluster
	I0930 20:00:01.219559   26315 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:00:01.219656   26315 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:00:01.220437   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:00:01.220719   26315 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:00:01.220739   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0930 20:00:01.220750   26315 start.go:241] waiting for startup goroutines ...
	I0930 20:00:01.220771   26315 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 20:00:01.220861   26315 addons.go:69] Setting storage-provisioner=true in profile "ha-805293"
	I0930 20:00:01.220890   26315 addons.go:234] Setting addon storage-provisioner=true in "ha-805293"
	I0930 20:00:01.220907   26315 addons.go:69] Setting default-storageclass=true in profile "ha-805293"
	I0930 20:00:01.220929   26315 host.go:66] Checking if "ha-805293" exists ...
	I0930 20:00:01.220943   26315 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-805293"
	I0930 20:00:01.220958   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:00:01.221373   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:01.221421   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:01.221455   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:01.221495   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:01.237192   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38991
	I0930 20:00:01.237232   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44093
	I0930 20:00:01.237724   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:01.237776   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:01.238255   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:01.238280   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:01.238371   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:01.238394   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:01.238662   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:01.238738   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:01.238902   26315 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 20:00:01.239184   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:01.239227   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:01.241145   26315 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:00:01.241484   26315 kapi.go:59] client config for ha-805293: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key", CAFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0930 20:00:01.242040   26315 cert_rotation.go:140] Starting client certificate rotation controller
	I0930 20:00:01.242321   26315 addons.go:234] Setting addon default-storageclass=true in "ha-805293"
	I0930 20:00:01.242364   26315 host.go:66] Checking if "ha-805293" exists ...
	I0930 20:00:01.242753   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:01.242800   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:01.255454   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34783
	I0930 20:00:01.255998   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:01.256626   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:01.256655   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:01.257008   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:01.257244   26315 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 20:00:01.258602   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38221
	I0930 20:00:01.259101   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:01.259492   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:00:01.259705   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:01.259732   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:01.260119   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:01.260656   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:01.260698   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:01.261796   26315 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 20:00:01.263230   26315 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 20:00:01.263251   26315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 20:00:01.263275   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:00:01.266511   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:01.266953   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:00:01.266979   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:01.267159   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:00:01.267342   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:00:01.267495   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:00:01.267640   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:00:01.276774   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42613
	I0930 20:00:01.277256   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:01.277779   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:01.277808   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:01.278167   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:01.278348   26315 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 20:00:01.279998   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:00:01.280191   26315 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 20:00:01.280204   26315 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 20:00:01.280218   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:00:01.282743   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:01.283181   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:00:01.283205   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:01.283377   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:00:01.283566   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:00:01.283719   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:00:01.283866   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:00:01.308679   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0930 20:00:01.431260   26315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 20:00:01.433924   26315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 20:00:01.558490   26315 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
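The pipeline run at 20:00:01.308 rewrites the CoreDNS Corefile so that a `hosts` stanza resolving host.minikube.internal to the host-only gateway 192.168.39.1 sits ahead of the `forward . /etc/resolv.conf` plugin, which is what this line confirms. A sketch of the same string transformation, assuming a plain Corefile that contains that forward line (the log's sed operates on the ConfigMap YAML, so its indentation differs):

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHosts inserts a hosts{} stanza before the forward plugin so that
    // host.minikube.internal resolves to the given IP inside the cluster.
    func injectHosts(corefile, hostIP string) string {
        stanza := fmt.Sprintf("    hosts {\n       %s host.minikube.internal\n       fallthrough\n    }\n", hostIP)
        return strings.Replace(corefile, "    forward . /etc/resolv.conf", stanza+"    forward . /etc/resolv.conf", 1)
    }

    func main() {
        corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf {\n       max_concurrent 1000\n    }\n    cache 30\n}\n"
        fmt.Print(injectHosts(corefile, "192.168.39.1"))
    }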
	I0930 20:00:01.621587   26315 main.go:141] libmachine: Making call to close driver server
	I0930 20:00:01.621614   26315 main.go:141] libmachine: (ha-805293) Calling .Close
	I0930 20:00:01.621883   26315 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:00:01.621900   26315 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:00:01.621908   26315 main.go:141] libmachine: Making call to close driver server
	I0930 20:00:01.621931   26315 main.go:141] libmachine: (ha-805293) DBG | Closing plugin on server side
	I0930 20:00:01.621995   26315 main.go:141] libmachine: (ha-805293) Calling .Close
	I0930 20:00:01.622217   26315 main.go:141] libmachine: (ha-805293) DBG | Closing plugin on server side
	I0930 20:00:01.622234   26315 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:00:01.622247   26315 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:00:01.622328   26315 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0930 20:00:01.622377   26315 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0930 20:00:01.622485   26315 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0930 20:00:01.622496   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:01.622504   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:01.622508   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:01.630544   26315 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0930 20:00:01.631089   26315 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0930 20:00:01.631103   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:01.631110   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:01.631115   26315 round_trippers.go:473]     Content-Type: application/json
	I0930 20:00:01.631119   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:01.636731   26315 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 20:00:01.636889   26315 main.go:141] libmachine: Making call to close driver server
	I0930 20:00:01.636905   26315 main.go:141] libmachine: (ha-805293) Calling .Close
	I0930 20:00:01.637222   26315 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:00:01.637249   26315 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:00:01.637227   26315 main.go:141] libmachine: (ha-805293) DBG | Closing plugin on server side
	I0930 20:00:01.910454   26315 main.go:141] libmachine: Making call to close driver server
	I0930 20:00:01.910493   26315 main.go:141] libmachine: (ha-805293) Calling .Close
	I0930 20:00:01.910790   26315 main.go:141] libmachine: (ha-805293) DBG | Closing plugin on server side
	I0930 20:00:01.910900   26315 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:00:01.910916   26315 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:00:01.910928   26315 main.go:141] libmachine: Making call to close driver server
	I0930 20:00:01.910933   26315 main.go:141] libmachine: (ha-805293) Calling .Close
	I0930 20:00:01.911215   26315 main.go:141] libmachine: (ha-805293) DBG | Closing plugin on server side
	I0930 20:00:01.911245   26315 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:00:01.911255   26315 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:00:01.913341   26315 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0930 20:00:01.914640   26315 addons.go:510] duration metric: took 693.870653ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0930 20:00:01.914685   26315 start.go:246] waiting for cluster config update ...
	I0930 20:00:01.914700   26315 start.go:255] writing updated cluster config ...
	I0930 20:00:01.917528   26315 out.go:201] 
	I0930 20:00:01.919324   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:00:01.919441   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:00:01.921983   26315 out.go:177] * Starting "ha-805293-m02" control-plane node in "ha-805293" cluster
	I0930 20:00:01.923837   26315 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 20:00:01.923877   26315 cache.go:56] Caching tarball of preloaded images
	I0930 20:00:01.924007   26315 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 20:00:01.924027   26315 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 20:00:01.924140   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:00:01.924406   26315 start.go:360] acquireMachinesLock for ha-805293-m02: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 20:00:01.924476   26315 start.go:364] duration metric: took 42.723µs to acquireMachinesLock for "ha-805293-m02"
	I0930 20:00:01.924503   26315 start.go:93] Provisioning new machine with config: &{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
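Editor's note: the log above shows m02 being provisioned under a machines lock (acquireMachinesLock with Delay:500ms and Timeout:13m0s). As a rough illustration only, here is a minimal Go sketch of that poll-with-timeout locking pattern; the lock-file path, helper name and constants are hypothetical, and minikube's real lock implementation differs.

// lockfile_sketch.go: a minimal sketch of a named machine lock with the
// Delay/Timeout semantics visible in the log above (poll every Delay,
// give up after Timeout). Names and paths are illustrative.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

func acquireMachinesLock(name string, delay, timeout time.Duration) (release func(), err error) {
	path := filepath.Join(os.TempDir(), name+".lock")
	deadline := time.Now().Add(timeout)
	for {
		// O_CREATE|O_EXCL fails if the lock file already exists.
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out after %s waiting for %q", timeout, name)
		}
		time.Sleep(delay) // retry after the configured delay
	}
}

func main() {
	release, err := acquireMachinesLock("mk-example-ha-805293-m02", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Println("lock:", err)
		return
	}
	defer release()
	fmt.Println("lock held; provisioning would start here")
}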
	I0930 20:00:01.924602   26315 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0930 20:00:01.926254   26315 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 20:00:01.926373   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:01.926422   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:01.942099   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43055
	I0930 20:00:01.942642   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:01.943165   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:01.943189   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:01.943522   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:01.943810   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetMachineName
	I0930 20:00:01.943943   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:01.944136   26315 start.go:159] libmachine.API.Create for "ha-805293" (driver="kvm2")
	I0930 20:00:01.944171   26315 client.go:168] LocalClient.Create starting
	I0930 20:00:01.944215   26315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem
	I0930 20:00:01.944259   26315 main.go:141] libmachine: Decoding PEM data...
	I0930 20:00:01.944280   26315 main.go:141] libmachine: Parsing certificate...
	I0930 20:00:01.944361   26315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem
	I0930 20:00:01.944395   26315 main.go:141] libmachine: Decoding PEM data...
	I0930 20:00:01.944410   26315 main.go:141] libmachine: Parsing certificate...
	I0930 20:00:01.944433   26315 main.go:141] libmachine: Running pre-create checks...
	I0930 20:00:01.944443   26315 main.go:141] libmachine: (ha-805293-m02) Calling .PreCreateCheck
	I0930 20:00:01.944614   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetConfigRaw
	I0930 20:00:01.945016   26315 main.go:141] libmachine: Creating machine...
	I0930 20:00:01.945030   26315 main.go:141] libmachine: (ha-805293-m02) Calling .Create
	I0930 20:00:01.945196   26315 main.go:141] libmachine: (ha-805293-m02) Creating KVM machine...
	I0930 20:00:01.946629   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found existing default KVM network
	I0930 20:00:01.946731   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found existing private KVM network mk-ha-805293
	I0930 20:00:01.946865   26315 main.go:141] libmachine: (ha-805293-m02) Setting up store path in /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02 ...
	I0930 20:00:01.946894   26315 main.go:141] libmachine: (ha-805293-m02) Building disk image from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 20:00:01.946988   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:01.946872   26664 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:00:01.947079   26315 main.go:141] libmachine: (ha-805293-m02) Downloading /home/jenkins/minikube-integration/19736-7672/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 20:00:02.217368   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:02.217234   26664 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa...
	I0930 20:00:02.510082   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:02.509926   26664 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/ha-805293-m02.rawdisk...
	I0930 20:00:02.510127   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Writing magic tar header
	I0930 20:00:02.510145   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Writing SSH key tar header
	I0930 20:00:02.510158   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:02.510035   26664 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02 ...
	I0930 20:00:02.510175   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02
	I0930 20:00:02.510188   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines
	I0930 20:00:02.510199   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:00:02.510217   26315 main.go:141] libmachine: (ha-805293-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02 (perms=drwx------)
	I0930 20:00:02.510229   26315 main.go:141] libmachine: (ha-805293-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines (perms=drwxr-xr-x)
	I0930 20:00:02.510240   26315 main.go:141] libmachine: (ha-805293-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube (perms=drwxr-xr-x)
	I0930 20:00:02.510255   26315 main.go:141] libmachine: (ha-805293-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672 (perms=drwxrwxr-x)
	I0930 20:00:02.510266   26315 main.go:141] libmachine: (ha-805293-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 20:00:02.510281   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672
	I0930 20:00:02.510294   26315 main.go:141] libmachine: (ha-805293-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 20:00:02.510308   26315 main.go:141] libmachine: (ha-805293-m02) Creating domain...
	I0930 20:00:02.510328   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 20:00:02.510352   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home/jenkins
	I0930 20:00:02.510359   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home
	I0930 20:00:02.510364   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Skipping /home - not owner
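Editor's note: the permission fix-up above walks from the machine directory toward /home, sets the executable bit on directories the CI user owns, and skips /home because the user does not own it. A small Go sketch of that idea, assuming a Linux host (the syscall.Stat_t ownership check); the paths and helper name are illustrative, not minikube's code.

// A sketch of the parent-directory permission fix-up logged above: walk up
// from the machine directory and, for every directory the current user owns,
// make sure it is at least traversable; directories owned by someone else
// (e.g. /home) are skipped.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

func fixPermissions(start, stop string) error {
	uid := os.Getuid()
	for dir := start; ; dir = filepath.Dir(dir) {
		fi, err := os.Stat(dir)
		if err != nil {
			return err
		}
		st, ok := fi.Sys().(*syscall.Stat_t)
		if !ok || int(st.Uid) != uid {
			fmt.Println("Skipping", dir, "- not owner")
		} else if err := os.Chmod(dir, fi.Mode().Perm()|0o100); err != nil {
			return err // add the owner execute bit so the dir can be traversed
		}
		if dir == stop || dir == filepath.Dir(dir) {
			return nil
		}
	}
}

func main() {
	_ = fixPermissions("/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02", "/home")
}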
	I0930 20:00:02.511282   26315 main.go:141] libmachine: (ha-805293-m02) define libvirt domain using xml: 
	I0930 20:00:02.511306   26315 main.go:141] libmachine: (ha-805293-m02) <domain type='kvm'>
	I0930 20:00:02.511317   26315 main.go:141] libmachine: (ha-805293-m02)   <name>ha-805293-m02</name>
	I0930 20:00:02.511328   26315 main.go:141] libmachine: (ha-805293-m02)   <memory unit='MiB'>2200</memory>
	I0930 20:00:02.511338   26315 main.go:141] libmachine: (ha-805293-m02)   <vcpu>2</vcpu>
	I0930 20:00:02.511348   26315 main.go:141] libmachine: (ha-805293-m02)   <features>
	I0930 20:00:02.511357   26315 main.go:141] libmachine: (ha-805293-m02)     <acpi/>
	I0930 20:00:02.511364   26315 main.go:141] libmachine: (ha-805293-m02)     <apic/>
	I0930 20:00:02.511371   26315 main.go:141] libmachine: (ha-805293-m02)     <pae/>
	I0930 20:00:02.511377   26315 main.go:141] libmachine: (ha-805293-m02)     
	I0930 20:00:02.511388   26315 main.go:141] libmachine: (ha-805293-m02)   </features>
	I0930 20:00:02.511395   26315 main.go:141] libmachine: (ha-805293-m02)   <cpu mode='host-passthrough'>
	I0930 20:00:02.511405   26315 main.go:141] libmachine: (ha-805293-m02)   
	I0930 20:00:02.511416   26315 main.go:141] libmachine: (ha-805293-m02)   </cpu>
	I0930 20:00:02.511444   26315 main.go:141] libmachine: (ha-805293-m02)   <os>
	I0930 20:00:02.511468   26315 main.go:141] libmachine: (ha-805293-m02)     <type>hvm</type>
	I0930 20:00:02.511481   26315 main.go:141] libmachine: (ha-805293-m02)     <boot dev='cdrom'/>
	I0930 20:00:02.511494   26315 main.go:141] libmachine: (ha-805293-m02)     <boot dev='hd'/>
	I0930 20:00:02.511505   26315 main.go:141] libmachine: (ha-805293-m02)     <bootmenu enable='no'/>
	I0930 20:00:02.511512   26315 main.go:141] libmachine: (ha-805293-m02)   </os>
	I0930 20:00:02.511517   26315 main.go:141] libmachine: (ha-805293-m02)   <devices>
	I0930 20:00:02.511535   26315 main.go:141] libmachine: (ha-805293-m02)     <disk type='file' device='cdrom'>
	I0930 20:00:02.511552   26315 main.go:141] libmachine: (ha-805293-m02)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/boot2docker.iso'/>
	I0930 20:00:02.511561   26315 main.go:141] libmachine: (ha-805293-m02)       <target dev='hdc' bus='scsi'/>
	I0930 20:00:02.511591   26315 main.go:141] libmachine: (ha-805293-m02)       <readonly/>
	I0930 20:00:02.511613   26315 main.go:141] libmachine: (ha-805293-m02)     </disk>
	I0930 20:00:02.511630   26315 main.go:141] libmachine: (ha-805293-m02)     <disk type='file' device='disk'>
	I0930 20:00:02.511644   26315 main.go:141] libmachine: (ha-805293-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 20:00:02.511661   26315 main.go:141] libmachine: (ha-805293-m02)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/ha-805293-m02.rawdisk'/>
	I0930 20:00:02.511673   26315 main.go:141] libmachine: (ha-805293-m02)       <target dev='hda' bus='virtio'/>
	I0930 20:00:02.511692   26315 main.go:141] libmachine: (ha-805293-m02)     </disk>
	I0930 20:00:02.511711   26315 main.go:141] libmachine: (ha-805293-m02)     <interface type='network'>
	I0930 20:00:02.511729   26315 main.go:141] libmachine: (ha-805293-m02)       <source network='mk-ha-805293'/>
	I0930 20:00:02.511746   26315 main.go:141] libmachine: (ha-805293-m02)       <model type='virtio'/>
	I0930 20:00:02.511758   26315 main.go:141] libmachine: (ha-805293-m02)     </interface>
	I0930 20:00:02.511769   26315 main.go:141] libmachine: (ha-805293-m02)     <interface type='network'>
	I0930 20:00:02.511784   26315 main.go:141] libmachine: (ha-805293-m02)       <source network='default'/>
	I0930 20:00:02.511795   26315 main.go:141] libmachine: (ha-805293-m02)       <model type='virtio'/>
	I0930 20:00:02.511824   26315 main.go:141] libmachine: (ha-805293-m02)     </interface>
	I0930 20:00:02.511843   26315 main.go:141] libmachine: (ha-805293-m02)     <serial type='pty'>
	I0930 20:00:02.511853   26315 main.go:141] libmachine: (ha-805293-m02)       <target port='0'/>
	I0930 20:00:02.511862   26315 main.go:141] libmachine: (ha-805293-m02)     </serial>
	I0930 20:00:02.511870   26315 main.go:141] libmachine: (ha-805293-m02)     <console type='pty'>
	I0930 20:00:02.511881   26315 main.go:141] libmachine: (ha-805293-m02)       <target type='serial' port='0'/>
	I0930 20:00:02.511892   26315 main.go:141] libmachine: (ha-805293-m02)     </console>
	I0930 20:00:02.511901   26315 main.go:141] libmachine: (ha-805293-m02)     <rng model='virtio'>
	I0930 20:00:02.511910   26315 main.go:141] libmachine: (ha-805293-m02)       <backend model='random'>/dev/random</backend>
	I0930 20:00:02.511924   26315 main.go:141] libmachine: (ha-805293-m02)     </rng>
	I0930 20:00:02.511933   26315 main.go:141] libmachine: (ha-805293-m02)     
	I0930 20:00:02.511939   26315 main.go:141] libmachine: (ha-805293-m02)     
	I0930 20:00:02.511949   26315 main.go:141] libmachine: (ha-805293-m02)   </devices>
	I0930 20:00:02.511958   26315 main.go:141] libmachine: (ha-805293-m02) </domain>
	I0930 20:00:02.511969   26315 main.go:141] libmachine: (ha-805293-m02) 
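Editor's note: the driver has just printed the libvirt domain XML it is about to define. Below is a minimal sketch of how such a definition could be rendered with Go's text/template; the trimmed template, the struct, and the file paths are assumptions for illustration, not the kvm2 driver's actual template.

// Render a simplified libvirt domain definition like the one printed above.
package main

import (
	"os"
	"text/template"
)

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.Disk}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.PrivateNet}}'/><model type='virtio'/></interface>
    <interface type='network'><source network='default'/><model type='virtio'/></interface>
  </devices>
</domain>
`

type domain struct {
	Name, ISO, Disk, PrivateNet string
	MemoryMiB, CPUs             int
}

func main() {
	d := domain{Name: "ha-805293-m02", MemoryMiB: 2200, CPUs: 2,
		ISO: "/path/to/boot2docker.iso", Disk: "/path/to/ha-805293-m02.rawdisk",
		PrivateNet: "mk-ha-805293"}
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	_ = tmpl.Execute(os.Stdout, d) // the XML would then be handed to libvirt to define the domain
}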
	I0930 20:00:02.519423   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:35:68:69 in network default
	I0930 20:00:02.520096   26315 main.go:141] libmachine: (ha-805293-m02) Ensuring networks are active...
	I0930 20:00:02.520113   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:02.521080   26315 main.go:141] libmachine: (ha-805293-m02) Ensuring network default is active
	I0930 20:00:02.521471   26315 main.go:141] libmachine: (ha-805293-m02) Ensuring network mk-ha-805293 is active
	I0930 20:00:02.521811   26315 main.go:141] libmachine: (ha-805293-m02) Getting domain xml...
	I0930 20:00:02.522473   26315 main.go:141] libmachine: (ha-805293-m02) Creating domain...
	I0930 20:00:03.765540   26315 main.go:141] libmachine: (ha-805293-m02) Waiting to get IP...
	I0930 20:00:03.766353   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:03.766729   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:03.766750   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:03.766699   26664 retry.go:31] will retry after 241.920356ms: waiting for machine to come up
	I0930 20:00:04.010129   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:04.010801   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:04.010826   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:04.010761   26664 retry.go:31] will retry after 344.430245ms: waiting for machine to come up
	I0930 20:00:04.356311   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:04.356795   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:04.356815   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:04.356767   26664 retry.go:31] will retry after 377.488147ms: waiting for machine to come up
	I0930 20:00:04.736359   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:04.736817   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:04.736839   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:04.736768   26664 retry.go:31] will retry after 400.421105ms: waiting for machine to come up
	I0930 20:00:05.138514   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:05.139019   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:05.139050   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:05.138967   26664 retry.go:31] will retry after 547.144087ms: waiting for machine to come up
	I0930 20:00:05.688116   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:05.688838   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:05.688865   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:05.688769   26664 retry.go:31] will retry after 610.482897ms: waiting for machine to come up
	I0930 20:00:06.301403   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:06.301917   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:06.301945   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:06.301866   26664 retry.go:31] will retry after 792.553977ms: waiting for machine to come up
	I0930 20:00:07.096834   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:07.097300   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:07.097331   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:07.097234   26664 retry.go:31] will retry after 1.20008256s: waiting for machine to come up
	I0930 20:00:08.299714   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:08.300169   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:08.300191   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:08.300137   26664 retry.go:31] will retry after 1.678792143s: waiting for machine to come up
	I0930 20:00:09.980216   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:09.980657   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:09.980685   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:09.980618   26664 retry.go:31] will retry after 2.098959289s: waiting for machine to come up
	I0930 20:00:12.080886   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:12.081433   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:12.081474   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:12.081377   26664 retry.go:31] will retry after 2.748866897s: waiting for machine to come up
	I0930 20:00:14.833188   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:14.833722   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:14.833748   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:14.833682   26664 retry.go:31] will retry after 2.379918836s: waiting for machine to come up
	I0930 20:00:17.215678   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:17.216060   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:17.216093   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:17.215999   26664 retry.go:31] will retry after 4.355514313s: waiting for machine to come up
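Editor's note: the retry.go lines above poll for the new VM's IP address with a growing wait between attempts. A self-contained Go sketch of that wait-for-IP loop follows; the backoff constants and the lookupIP stand-in are illustrative, not minikube's actual values.

// Poll a lookup function with a growing, jittered backoff until it succeeds
// or a deadline passes, mirroring the "will retry after ..." messages above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out waiting for machine to come up: %w", err)
		}
		// grow the interval, add jitter, and stop doubling once it is a few seconds long
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 4*time.Second {
			backoff *= 2
		}
	}
}

func main() {
	start := time.Now()
	ip, err := waitForIP(func() (string, error) {
		if time.Since(start) < 3*time.Second {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.39.220", nil
	}, time.Minute)
	fmt.Println(ip, err)
}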
	I0930 20:00:21.576523   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.577032   26315 main.go:141] libmachine: (ha-805293-m02) Found IP for machine: 192.168.39.220
	I0930 20:00:21.577053   26315 main.go:141] libmachine: (ha-805293-m02) Reserving static IP address...
	I0930 20:00:21.577065   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has current primary IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.577388   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find host DHCP lease matching {name: "ha-805293-m02", mac: "52:54:00:fe:f4:56", ip: "192.168.39.220"} in network mk-ha-805293
	I0930 20:00:21.655408   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Getting to WaitForSSH function...
	I0930 20:00:21.655444   26315 main.go:141] libmachine: (ha-805293-m02) Reserved static IP address: 192.168.39.220
	I0930 20:00:21.655509   26315 main.go:141] libmachine: (ha-805293-m02) Waiting for SSH to be available...
	I0930 20:00:21.658005   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.658453   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:21.658491   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.658732   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Using SSH client type: external
	I0930 20:00:21.658759   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa (-rw-------)
	I0930 20:00:21.658792   26315 main.go:141] libmachine: (ha-805293-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 20:00:21.658808   26315 main.go:141] libmachine: (ha-805293-m02) DBG | About to run SSH command:
	I0930 20:00:21.658825   26315 main.go:141] libmachine: (ha-805293-m02) DBG | exit 0
	I0930 20:00:21.787681   26315 main.go:141] libmachine: (ha-805293-m02) DBG | SSH cmd err, output: <nil>: 
	I0930 20:00:21.788011   26315 main.go:141] libmachine: (ha-805293-m02) KVM machine creation complete!
	I0930 20:00:21.788252   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetConfigRaw
	I0930 20:00:21.788786   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:21.788970   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:21.789203   26315 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 20:00:21.789220   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetState
	I0930 20:00:21.790562   26315 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 20:00:21.790578   26315 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 20:00:21.790584   26315 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 20:00:21.790592   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:21.792832   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.793247   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:21.793275   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.793444   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:21.793624   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:21.793794   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:21.793936   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:21.794099   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:00:21.794370   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0930 20:00:21.794384   26315 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 20:00:21.906923   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
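Editor's note: WaitForSSH above simply runs `exit 0` over SSH until it succeeds. Here is a hedged sketch of that probe using golang.org/x/crypto/ssh (an assumption about the client library; the log also shows an external /usr/bin/ssh being used). The address, user and key path are taken from the log.

// Probe sshd on the new VM by running "exit 0" with the machine's private key.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func sshAvailable(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0") // success means sshd is up and the key is accepted
}

func main() {
	err := sshAvailable("192.168.39.220:22", "docker",
		"/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa")
	fmt.Println("ssh available:", err == nil, err)
}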
	I0930 20:00:21.906949   26315 main.go:141] libmachine: Detecting the provisioner...
	I0930 20:00:21.906961   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:21.910153   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.910565   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:21.910596   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.910764   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:21.910979   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:21.911241   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:21.911375   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:21.911534   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:00:21.911713   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0930 20:00:21.911726   26315 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 20:00:22.024080   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 20:00:22.024153   26315 main.go:141] libmachine: found compatible host: buildroot
	I0930 20:00:22.024160   26315 main.go:141] libmachine: Provisioning with buildroot...
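Editor's note: provisioner detection above is just `cat /etc/os-release` plus a lookup of the ID field. A short Go sketch of parsing that output; the buildroot comparison mirrors the log, and the sample input is copied from it.

// Parse /etc/os-release style KEY=VALUE output into a map and check the ID.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parseOSRelease(contents string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out
}

func main() {
	osr := parseOSRelease("NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n")
	if osr["ID"] == "buildroot" {
		fmt.Println("found compatible host:", osr["ID"], osr["VERSION_ID"])
	}
}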
	I0930 20:00:22.024170   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetMachineName
	I0930 20:00:22.024471   26315 buildroot.go:166] provisioning hostname "ha-805293-m02"
	I0930 20:00:22.024504   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetMachineName
	I0930 20:00:22.024708   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.027328   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.027816   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.027846   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.028043   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:22.028244   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.028415   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.028559   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:22.028711   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:00:22.028924   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0930 20:00:22.028951   26315 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-805293-m02 && echo "ha-805293-m02" | sudo tee /etc/hostname
	I0930 20:00:22.153517   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-805293-m02
	
	I0930 20:00:22.153558   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.156342   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.156867   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.156892   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.157066   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:22.157250   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.157398   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.157520   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:22.157658   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:00:22.157834   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0930 20:00:22.157856   26315 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-805293-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-805293-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-805293-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 20:00:22.280453   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
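Editor's note: the shell snippet above rewrites or appends a 127.0.1.1 entry so the new hostname resolves locally. Below is a Go sketch of the same decision applied to an in-memory hosts file; the real flow edits /etc/hosts on the guest over SSH with sudo.

// If no line already names the host, rewrite an existing 127.0.1.1 line or append one.
package main

import (
	"fmt"
	"strings"
)

func ensureHostsEntry(hosts, hostname string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		fields := strings.Fields(l)
		if len(fields) < 2 {
			continue
		}
		for _, f := range fields[1:] { // any line already naming the host wins
			if f == hostname {
				return hosts
			}
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			return strings.Join(lines, "\n")
		}
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 minikube\n", "ha-805293-m02"))
}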
	I0930 20:00:22.280490   26315 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 20:00:22.280513   26315 buildroot.go:174] setting up certificates
	I0930 20:00:22.280524   26315 provision.go:84] configureAuth start
	I0930 20:00:22.280537   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetMachineName
	I0930 20:00:22.280873   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetIP
	I0930 20:00:22.283731   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.284096   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.284121   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.284311   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.286698   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.287078   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.287108   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.287262   26315 provision.go:143] copyHostCerts
	I0930 20:00:22.287296   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:00:22.287337   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 20:00:22.287351   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:00:22.287424   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 20:00:22.287503   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:00:22.287521   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 20:00:22.287557   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:00:22.287594   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 20:00:22.287648   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:00:22.287664   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 20:00:22.287668   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:00:22.287689   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 20:00:22.287737   26315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.ha-805293-m02 san=[127.0.0.1 192.168.39.220 ha-805293-m02 localhost minikube]
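Editor's note: provision.go is generating a CA-signed server certificate whose SANs are listed in the log line above. A sketch with crypto/x509 showing how such a certificate could be built; the in-memory throwaway CA is an assumption so the example runs on its own, whereas minikube signs with ca.pem/ca-key.pem from .minikube/certs.

// Build a server certificate with the SAN list from the log and sign it with a CA key.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-805293-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-805293-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.220")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		fmt.Println(err)
		return
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}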
	I0930 20:00:22.355076   26315 provision.go:177] copyRemoteCerts
	I0930 20:00:22.355131   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 20:00:22.355153   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.357993   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.358290   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.358317   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.358695   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:22.358872   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.358992   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:22.359090   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa Username:docker}
	I0930 20:00:22.445399   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 20:00:22.445470   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 20:00:22.469429   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 20:00:22.469516   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 20:00:22.492675   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 20:00:22.492763   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 20:00:22.515601   26315 provision.go:87] duration metric: took 235.062596ms to configureAuth
	I0930 20:00:22.515633   26315 buildroot.go:189] setting minikube options for container-runtime
	I0930 20:00:22.515833   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:00:22.515926   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.518627   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.519062   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.519101   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.519248   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:22.519447   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.519617   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.519768   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:22.519918   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:00:22.520077   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0930 20:00:22.520090   26315 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 20:00:22.744066   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 20:00:22.744092   26315 main.go:141] libmachine: Checking connection to Docker...
	I0930 20:00:22.744101   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetURL
	I0930 20:00:22.745446   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Using libvirt version 6000000
	I0930 20:00:22.747635   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.748132   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.748161   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.748303   26315 main.go:141] libmachine: Docker is up and running!
	I0930 20:00:22.748319   26315 main.go:141] libmachine: Reticulating splines...
	I0930 20:00:22.748327   26315 client.go:171] duration metric: took 20.804148382s to LocalClient.Create
	I0930 20:00:22.748348   26315 start.go:167] duration metric: took 20.804213197s to libmachine.API.Create "ha-805293"
	I0930 20:00:22.748357   26315 start.go:293] postStartSetup for "ha-805293-m02" (driver="kvm2")
	I0930 20:00:22.748367   26315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 20:00:22.748386   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:22.748624   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 20:00:22.748654   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.750830   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.751166   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.751190   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.751299   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:22.751468   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.751612   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:22.751720   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa Username:docker}
	I0930 20:00:22.837496   26315 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 20:00:22.841510   26315 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 20:00:22.841546   26315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 20:00:22.841623   26315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 20:00:22.841717   26315 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 20:00:22.841730   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /etc/ssl/certs/148752.pem
	I0930 20:00:22.841843   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 20:00:22.851144   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:00:22.877058   26315 start.go:296] duration metric: took 128.687557ms for postStartSetup
	I0930 20:00:22.877104   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetConfigRaw
	I0930 20:00:22.877761   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetIP
	I0930 20:00:22.880570   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.880908   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.880931   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.881333   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:00:22.881547   26315 start.go:128] duration metric: took 20.956931205s to createHost
	I0930 20:00:22.881569   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.883882   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.884228   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.884246   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.884419   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:22.884601   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.884779   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.884913   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:22.885087   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:00:22.885252   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0930 20:00:22.885264   26315 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 20:00:23.000299   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727726422.960119850
	
	I0930 20:00:23.000326   26315 fix.go:216] guest clock: 1727726422.960119850
	I0930 20:00:23.000338   26315 fix.go:229] Guest: 2024-09-30 20:00:22.96011985 +0000 UTC Remote: 2024-09-30 20:00:22.881558413 +0000 UTC m=+66.452648359 (delta=78.561437ms)
	I0930 20:00:23.000357   26315 fix.go:200] guest clock delta is within tolerance: 78.561437ms
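Editor's note: fix.go above compares the guest clock (read with `date +%s.%N`) against the host clock and accepts the ~78ms delta because it is within tolerance. A tiny Go sketch of that check; the 2s tolerance here is an assumed value for illustration.

// Compare guest and host timestamps and report whether the skew is acceptable.
package main

import (
	"fmt"
	"time"
)

func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1727726422, 960119850) // parsed from the guest's `date +%s.%N` output above
	host := time.Unix(1727726422, 881558413)  // host-side reference time from the same log line
	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%s within tolerance=%v\n", delta, ok)
}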
	I0930 20:00:23.000364   26315 start.go:83] releasing machines lock for "ha-805293-m02", held for 21.075876017s
	I0930 20:00:23.000382   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:23.000682   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetIP
	I0930 20:00:23.003439   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:23.003855   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:23.003882   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:23.006309   26315 out.go:177] * Found network options:
	I0930 20:00:23.008016   26315 out.go:177]   - NO_PROXY=192.168.39.3
	W0930 20:00:23.009484   26315 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 20:00:23.009519   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:23.010257   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:23.010450   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:23.010558   26315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 20:00:23.010606   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	W0930 20:00:23.010646   26315 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 20:00:23.010724   26315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 20:00:23.010747   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:23.013581   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:23.013752   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:23.013960   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:23.013983   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:23.014161   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:23.014186   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:23.014187   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:23.014404   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:23.014410   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:23.014563   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:23.014595   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:23.014659   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa Username:docker}
	I0930 20:00:23.014695   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:23.014791   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa Username:docker}
	I0930 20:00:23.259199   26315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 20:00:23.264710   26315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 20:00:23.264772   26315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 20:00:23.281650   26315 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 20:00:23.281678   26315 start.go:495] detecting cgroup driver to use...
	I0930 20:00:23.281745   26315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 20:00:23.300954   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 20:00:23.318197   26315 docker.go:217] disabling cri-docker service (if available) ...
	I0930 20:00:23.318266   26315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 20:00:23.334729   26315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 20:00:23.351325   26315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 20:00:23.494840   26315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 20:00:23.659365   26315 docker.go:233] disabling docker service ...
	I0930 20:00:23.659442   26315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 20:00:23.673200   26315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 20:00:23.686244   26315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 20:00:23.816616   26315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 20:00:23.949421   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
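Note: the `systemctl is-active --quiet <unit>` probes above carry no output; success or failure is read purely from the exit status. A minimal, self-contained Go sketch of that pattern (hypothetical helper, not minikube's own code) could look like this:

package main

import (
	"fmt"
	"os/exec"
)

// serviceActive reports whether a systemd unit is currently active.
// It mirrors `systemctl is-active --quiet <unit>`: exit status 0 means
// active, any non-zero status means inactive or unknown.
func serviceActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		fmt.Printf("%s active: %v\n", unit, serviceActive(unit))
	}
}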
	I0930 20:00:23.963035   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 20:00:23.981793   26315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 20:00:23.981869   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:23.992506   26315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 20:00:23.992572   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:24.003215   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:24.013791   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:24.024890   26315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 20:00:24.036504   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:24.046845   26315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:24.063744   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:24.074710   26315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 20:00:24.084399   26315 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 20:00:24.084456   26315 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 20:00:24.097779   26315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 20:00:24.107679   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:00:24.245414   26315 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 20:00:24.332691   26315 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 20:00:24.332763   26315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 20:00:24.337609   26315 start.go:563] Will wait 60s for crictl version
	I0930 20:00:24.337672   26315 ssh_runner.go:195] Run: which crictl
	I0930 20:00:24.341369   26315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 20:00:24.379294   26315 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 20:00:24.379384   26315 ssh_runner.go:195] Run: crio --version
	I0930 20:00:24.407964   26315 ssh_runner.go:195] Run: crio --version
	I0930 20:00:24.438040   26315 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 20:00:24.439799   26315 out.go:177]   - env NO_PROXY=192.168.39.3
	I0930 20:00:24.441127   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetIP
	I0930 20:00:24.443641   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:24.443999   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:24.444023   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:24.444256   26315 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 20:00:24.448441   26315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
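Note: the one-liner above updates /etc/hosts idempotently: it filters out any existing `host.minikube.internal` line and appends the current mapping. A rough Go equivalent, assuming a local file path and hard-coded values purely for illustration, is sketched below:

package main

import (
	"log"
	"os"
	"strings"
)

// upsertHostsEntry drops any line ending in "<TAB>hostname" and appends a
// fresh "ip<TAB>hostname" entry, mirroring the shell pipeline
// `{ grep -v $'\thostname$' /etc/hosts; echo "ip\thostname"; } > tmp; sudo cp tmp /etc/hosts`.
func upsertHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // stale entry: drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Writing /etc/hosts needs root; point at a scratch copy when experimenting.
	if err := upsertHostsEntry("hosts.copy", "192.168.39.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}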
	I0930 20:00:24.460479   26315 mustload.go:65] Loading cluster: ha-805293
	I0930 20:00:24.460673   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:00:24.460911   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:24.460946   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:24.475845   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41627
	I0930 20:00:24.476505   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:24.476991   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:24.477013   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:24.477336   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:24.477545   26315 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 20:00:24.479156   26315 host.go:66] Checking if "ha-805293" exists ...
	I0930 20:00:24.479566   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:24.479614   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:24.494163   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38137
	I0930 20:00:24.494690   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:24.495134   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:24.495156   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:24.495462   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:24.495672   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:00:24.495840   26315 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293 for IP: 192.168.39.220
	I0930 20:00:24.495854   26315 certs.go:194] generating shared ca certs ...
	I0930 20:00:24.495872   26315 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:00:24.495990   26315 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 20:00:24.496030   26315 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 20:00:24.496038   26315 certs.go:256] generating profile certs ...
	I0930 20:00:24.496099   26315 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key
	I0930 20:00:24.496121   26315 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.25883032
	I0930 20:00:24.496134   26315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.25883032 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.3 192.168.39.220 192.168.39.254]
	I0930 20:00:24.563341   26315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.25883032 ...
	I0930 20:00:24.563370   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.25883032: {Name:mk8534a0b1f65471035122400012ca9f075cb68b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:00:24.563553   26315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.25883032 ...
	I0930 20:00:24.563580   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.25883032: {Name:mkdff9b5cf02688bad7cef701430e9d45f427c09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:00:24.563669   26315 certs.go:381] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.25883032 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt
	I0930 20:00:24.563804   26315 certs.go:385] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.25883032 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key
	I0930 20:00:24.563922   26315 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key
	I0930 20:00:24.563935   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 20:00:24.563949   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 20:00:24.563961   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 20:00:24.563971   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 20:00:24.563981   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 20:00:24.563992   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 20:00:24.564001   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 20:00:24.564012   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
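Note: the apiserver certificate generated above is a CA-signed serving cert whose IP SANs cover the in-cluster service VIP (10.96.0.1), localhost, both control-plane node IPs, and the kube-vip address 192.168.39.254. A compressed Go sketch of issuing such a cert with the standard library is shown below; the CA key pair is generated inline here purely for illustration (minikube reuses its existing ca.key), and error handling is partly elided for brevity:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key pair (stand-in for .minikube/ca.key); errors elided.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the IP SANs seen in the log above.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.3"), net.ParseIP("192.168.39.220"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}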
	I0930 20:00:24.564058   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 20:00:24.564087   26315 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 20:00:24.564096   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 20:00:24.564116   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 20:00:24.564137   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 20:00:24.564157   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 20:00:24.564196   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:00:24.564221   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem -> /usr/share/ca-certificates/14875.pem
	I0930 20:00:24.564233   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /usr/share/ca-certificates/148752.pem
	I0930 20:00:24.564246   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:00:24.564276   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:00:24.567674   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:24.568209   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:00:24.568244   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:24.568458   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:00:24.568679   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:00:24.568859   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:00:24.569017   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:00:24.647988   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0930 20:00:24.652578   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0930 20:00:24.663570   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0930 20:00:24.667502   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0930 20:00:24.678300   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0930 20:00:24.682636   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0930 20:00:24.692556   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0930 20:00:24.697407   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0930 20:00:24.708600   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0930 20:00:24.716272   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0930 20:00:24.726239   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0930 20:00:24.730151   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0930 20:00:24.740007   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 20:00:24.764135   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 20:00:24.787511   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 20:00:24.811921   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 20:00:24.835050   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0930 20:00:24.858111   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 20:00:24.881164   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 20:00:24.905084   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 20:00:24.930204   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 20:00:24.954976   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 20:00:24.979893   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 20:00:25.004028   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0930 20:00:25.020509   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0930 20:00:25.037112   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0930 20:00:25.053614   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0930 20:00:25.069699   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0930 20:00:25.087062   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0930 20:00:25.103141   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0930 20:00:25.119089   26315 ssh_runner.go:195] Run: openssl version
	I0930 20:00:25.124587   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 20:00:25.135122   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 20:00:25.139645   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 20:00:25.139709   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 20:00:25.145556   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 20:00:25.156636   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 20:00:25.167339   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 20:00:25.171719   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 20:00:25.171780   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 20:00:25.177212   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 20:00:25.188055   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 20:00:25.199114   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:00:25.203444   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:00:25.203514   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:00:25.209227   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 20:00:25.220164   26315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 20:00:25.224532   26315 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 20:00:25.224591   26315 kubeadm.go:934] updating node {m02 192.168.39.220 8443 v1.31.1 crio true true} ...
	I0930 20:00:25.224694   26315 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-805293-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 20:00:25.224719   26315 kube-vip.go:115] generating kube-vip config ...
	I0930 20:00:25.224757   26315 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 20:00:25.242207   26315 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 20:00:25.242306   26315 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
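Note: the kube-vip static-pod manifest printed above is rendered by kube-vip.go from a template with the cluster-specific values (VIP address and API port) filled in. A tiny text/template sketch of that pattern, with the manifest trimmed to only the fields that vary and all names hypothetical, is:

package main

import (
	"os"
	"text/template"
)

// Trimmed-down manifest template: only the values that change per cluster
// (VIP address and API server port) are parameterized.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.3
    args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: "{{ .VIP }}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	_ = t.Execute(os.Stdout, struct {
		VIP  string
		Port int
	}{VIP: "192.168.39.254", Port: 8443})
}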
	I0930 20:00:25.242370   26315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 20:00:25.253224   26315 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0930 20:00:25.253326   26315 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0930 20:00:25.264511   26315 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0930 20:00:25.264547   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 20:00:25.264590   26315 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0930 20:00:25.264606   26315 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0930 20:00:25.264613   26315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 20:00:25.269385   26315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0930 20:00:25.269423   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0930 20:00:26.288255   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 20:00:26.288359   26315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 20:00:26.293355   26315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0930 20:00:26.293391   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0930 20:00:26.370842   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 20:00:26.408125   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 20:00:26.408233   26315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 20:00:26.414764   26315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0930 20:00:26.414804   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
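Note: the `checksum=file:...sha256` suffix on the download URLs above means each binary is fetched and then verified against the published SHA-256 file before being cached and copied to the node. A self-contained Go sketch of that verify-after-download step (URL taken from the log; in practice you would stream to disk while hashing rather than buffer ~56 MB in memory) is:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

// fetch downloads url and returns its body.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		log.Fatal(err)
	}
	want, err := fetch(base + ".sha256") // file contains just the hex digest
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(bin)
	got := hex.EncodeToString(sum[:])
	if got != strings.TrimSpace(string(want)) {
		log.Fatalf("checksum mismatch: got %s want %s", got, want)
	}
	fmt.Println("kubectl checksum verified:", got)
}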
	I0930 20:00:26.848584   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0930 20:00:26.858015   26315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0930 20:00:26.874053   26315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 20:00:26.890616   26315 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 20:00:26.906680   26315 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 20:00:26.910431   26315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 20:00:26.921656   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:00:27.039123   26315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:00:27.056773   26315 host.go:66] Checking if "ha-805293" exists ...
	I0930 20:00:27.057124   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:27.057173   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:27.072237   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34135
	I0930 20:00:27.072852   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:27.073292   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:27.073321   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:27.073651   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:27.073859   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:00:27.073989   26315 start.go:317] joinCluster: &{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:00:27.074091   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0930 20:00:27.074108   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:00:27.076745   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:27.077111   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:00:27.077130   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:27.077207   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:00:27.077370   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:00:27.077633   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:00:27.077784   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:00:27.230308   26315 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:00:27.230355   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cnuzai.6xkseww2aia5hxhb --discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-805293-m02 --control-plane --apiserver-advertise-address=192.168.39.220 --apiserver-bind-port=8443"
	I0930 20:00:50.312960   26315 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cnuzai.6xkseww2aia5hxhb --discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-805293-m02 --control-plane --apiserver-advertise-address=192.168.39.220 --apiserver-bind-port=8443": (23.082567099s)
	I0930 20:00:50.313004   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0930 20:00:50.837990   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-805293-m02 minikube.k8s.io/updated_at=2024_09_30T20_00_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022 minikube.k8s.io/name=ha-805293 minikube.k8s.io/primary=false
	I0930 20:00:50.975697   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-805293-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0930 20:00:51.102316   26315 start.go:319] duration metric: took 24.028319202s to joinCluster
	I0930 20:00:51.102444   26315 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:00:51.102695   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:00:51.104462   26315 out.go:177] * Verifying Kubernetes components...
	I0930 20:00:51.105980   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:00:51.368169   26315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:00:51.414670   26315 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:00:51.415012   26315 kapi.go:59] client config for ha-805293: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key", CAFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 20:00:51.415098   26315 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.3:8443
	I0930 20:00:51.415444   26315 node_ready.go:35] waiting up to 6m0s for node "ha-805293-m02" to be "Ready" ...
	I0930 20:00:51.415604   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:51.415616   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:51.415627   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:51.415634   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:51.426106   26315 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0930 20:00:51.915725   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:51.915750   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:51.915764   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:51.915771   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:51.920139   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:00:52.416072   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:52.416092   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:52.416100   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:52.416104   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:52.419738   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:52.915687   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:52.915720   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:52.915733   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:52.915739   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:52.920070   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:00:53.415992   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:53.416013   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:53.416021   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:53.416027   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:53.419709   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:53.420257   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:00:53.915641   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:53.915662   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:53.915670   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:53.915675   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:53.918936   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:54.415947   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:54.415969   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:54.415978   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:54.415983   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:54.419470   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:54.916559   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:54.916594   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:54.916604   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:54.916609   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:54.920769   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:00:55.415723   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:55.415749   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:55.415760   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:55.415767   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:55.419960   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:00:55.420655   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:00:55.915703   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:55.915725   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:55.915732   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:55.915737   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:55.918792   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:56.415726   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:56.415759   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:56.415768   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:56.415771   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:56.419845   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:00:56.915720   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:56.915749   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:56.915761   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:56.915768   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:56.919114   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:57.415890   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:57.415920   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:57.415930   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:57.415936   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:57.419326   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:57.916001   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:57.916024   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:57.916032   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:57.916036   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:57.919385   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:57.920066   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:00:58.416036   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:58.416058   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:58.416066   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:58.416071   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:58.444113   26315 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0930 20:00:58.915821   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:58.915851   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:58.915865   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:58.915872   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:58.919943   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:00:59.415861   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:59.415883   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:59.415892   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:59.415896   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:59.419554   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:59.916644   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:59.916665   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:59.916673   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:59.916681   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:59.920228   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:59.920834   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:01:00.415729   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:00.415764   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:00.415772   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:00.415777   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:00.419232   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:00.915725   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:00.915748   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:00.915758   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:00.915764   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:00.920882   26315 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 20:01:01.416215   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:01.416240   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:01.416249   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:01.416252   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:01.419889   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:01.916651   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:01.916673   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:01.916680   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:01.916686   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:01.920422   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:01.920906   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:01:02.416417   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:02.416447   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:02.416458   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:02.416465   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:02.420384   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:02.916614   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:02.916639   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:02.916647   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:02.916651   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:02.920435   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:03.416222   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:03.416246   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:03.416255   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:03.416258   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:03.419787   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:03.915698   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:03.915726   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:03.915735   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:03.915739   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:03.919427   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:04.415764   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:04.415788   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:04.415797   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:04.415801   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:04.419012   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:04.419574   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:01:04.915824   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:04.915846   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:04.915855   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:04.915859   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:04.920091   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:01:05.415756   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:05.415780   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:05.415787   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:05.415791   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:05.421271   26315 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 20:01:05.915718   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:05.915739   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:05.915747   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:05.915751   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:05.919141   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:06.415741   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:06.415762   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:06.415770   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:06.415774   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:06.418886   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:06.419650   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:01:06.916104   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:06.916133   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:06.916144   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:06.916149   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:06.919406   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:07.416605   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:07.416630   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:07.416639   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:07.416646   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:07.419940   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:07.915753   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:07.915780   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:07.915790   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:07.915795   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:07.919449   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:08.416606   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:08.416630   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:08.416638   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:08.416643   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:08.420794   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:01:08.421339   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:01:08.915715   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:08.915738   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:08.915746   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:08.915752   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:08.919389   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:09.416586   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:09.416611   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.416621   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.416628   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.419914   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:09.916640   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:09.916661   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.916669   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.916673   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.919743   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:09.920355   26315 node_ready.go:49] node "ha-805293-m02" has status "Ready":"True"
	I0930 20:01:09.920385   26315 node_ready.go:38] duration metric: took 18.504913608s for node "ha-805293-m02" to be "Ready" ...
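
The ~500ms GET cadence recorded above is a plain readiness poll against the node's status. Below is a minimal sketch of that kind of loop, assuming client-go and a kubeconfig at the default path; it is hypothetical illustration, not minikube's node_ready implementation.

    // Hypothetical sketch (not minikube's code): poll a node's Ready condition
    // with client-go at roughly the 500ms cadence seen in the log above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        tick := time.NewTicker(500 * time.Millisecond)
        defer tick.Stop()
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil // node reports Ready=True
                    }
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-tick.C: // try again on the next tick
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        if err := waitNodeReady(ctx, kubernetes.NewForConfigOrDie(cfg), "ha-805293-m02"); err != nil {
            panic(err)
        }
        fmt.Println("node is Ready")
    }

Polling keeps the check simple; a watch on the Node object would avoid the repeated GETs at the cost of more plumbing.
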
	I0930 20:01:09.920395   26315 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 20:01:09.920461   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:01:09.920470   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.920477   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.920481   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.924944   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:01:09.930623   26315 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x7zjp" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.930723   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-x7zjp
	I0930 20:01:09.930731   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.930739   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.930743   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.933787   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:09.934467   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:09.934486   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.934497   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.934502   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.936935   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.937372   26315 pod_ready.go:93] pod "coredns-7c65d6cfc9-x7zjp" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:09.937389   26315 pod_ready.go:82] duration metric: took 6.738618ms for pod "coredns-7c65d6cfc9-x7zjp" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.937399   26315 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-z4bkv" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.937452   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-z4bkv
	I0930 20:01:09.937460   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.937467   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.937471   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.939718   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.940345   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:09.940360   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.940367   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.940372   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.942825   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.943347   26315 pod_ready.go:93] pod "coredns-7c65d6cfc9-z4bkv" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:09.943362   26315 pod_ready.go:82] duration metric: took 5.957941ms for pod "coredns-7c65d6cfc9-z4bkv" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.943374   26315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.943449   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-805293
	I0930 20:01:09.943477   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.943493   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.943502   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.946145   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.946815   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:09.946829   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.946837   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.946841   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.949619   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.950200   26315 pod_ready.go:93] pod "etcd-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:09.950222   26315 pod_ready.go:82] duration metric: took 6.836708ms for pod "etcd-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.950233   26315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.950305   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-805293-m02
	I0930 20:01:09.950326   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.950334   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.950340   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.953306   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.953792   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:09.953806   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.953813   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.953817   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.956400   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.956812   26315 pod_ready.go:93] pod "etcd-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:09.956829   26315 pod_ready.go:82] duration metric: took 6.588184ms for pod "etcd-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.956845   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:10.117233   26315 request.go:632] Waited for 160.320722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293
	I0930 20:01:10.117300   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293
	I0930 20:01:10.117306   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:10.117318   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:10.117324   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:10.120940   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:10.317057   26315 request.go:632] Waited for 195.415809ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:10.317127   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:10.317135   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:10.317156   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:10.317180   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:10.320648   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:10.321373   26315 pod_ready.go:93] pod "kube-apiserver-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:10.321392   26315 pod_ready.go:82] duration metric: took 364.537566ms for pod "kube-apiserver-ha-805293" in "kube-system" namespace to be "Ready" ...
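
The "Waited ... due to client-side throttling, not priority and fairness" lines are emitted by client-go when its client-side token-bucket limiter (the QPS/Burst settings on the REST config) delays a request, independent of server-side priority and fairness. A minimal sketch of that limiter in isolation, assuming k8s.io/client-go/util/flowcontrol and illustrative QPS/Burst values:

    // Hypothetical sketch: the token-bucket limiter behind client-go's
    // "client-side throttling" messages (QPS=5, Burst=10 are illustrative values).
    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/util/flowcontrol"
    )

    func main() {
        limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10) // qps, burst
        start := time.Now()
        for i := 0; i < 15; i++ {
            limiter.Accept() // blocks once the burst is spent, like the waits in the log
        }
        fmt.Printf("15 calls took %v under client-side throttling\n", time.Since(start))
    }
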
	I0930 20:01:10.321402   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:10.517507   26315 request.go:632] Waited for 196.023112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293-m02
	I0930 20:01:10.517576   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293-m02
	I0930 20:01:10.517583   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:10.517594   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:10.517601   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:10.521299   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:10.717299   26315 request.go:632] Waited for 195.382491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:10.717366   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:10.717372   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:10.717379   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:10.717384   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:10.720883   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:10.721468   26315 pod_ready.go:93] pod "kube-apiserver-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:10.721488   26315 pod_ready.go:82] duration metric: took 400.07752ms for pod "kube-apiserver-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:10.721497   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:10.917490   26315 request.go:632] Waited for 195.929177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293
	I0930 20:01:10.917554   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293
	I0930 20:01:10.917574   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:10.917606   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:10.917617   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:10.921610   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:11.116693   26315 request.go:632] Waited for 194.297174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:11.116753   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:11.116759   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:11.116766   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:11.116769   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:11.120537   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:11.121044   26315 pod_ready.go:93] pod "kube-controller-manager-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:11.121062   26315 pod_ready.go:82] duration metric: took 399.55959ms for pod "kube-controller-manager-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:11.121074   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:11.317266   26315 request.go:632] Waited for 196.133826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293-m02
	I0930 20:01:11.317335   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293-m02
	I0930 20:01:11.317342   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:11.317351   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:11.317358   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:11.321265   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:11.517020   26315 request.go:632] Waited for 195.154322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:11.517082   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:11.517089   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:11.517098   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:11.517103   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:11.520779   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:11.521296   26315 pod_ready.go:93] pod "kube-controller-manager-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:11.521319   26315 pod_ready.go:82] duration metric: took 400.238082ms for pod "kube-controller-manager-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:11.521335   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6gnt4" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:11.716800   26315 request.go:632] Waited for 195.390285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gnt4
	I0930 20:01:11.716888   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gnt4
	I0930 20:01:11.716896   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:11.716906   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:11.716911   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:11.720246   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:11.917422   26315 request.go:632] Waited for 196.372605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:11.917500   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:11.917508   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:11.917518   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:11.917526   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:11.921353   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:11.921887   26315 pod_ready.go:93] pod "kube-proxy-6gnt4" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:11.921912   26315 pod_ready.go:82] duration metric: took 400.568991ms for pod "kube-proxy-6gnt4" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:11.921925   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vptrg" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:12.116927   26315 request.go:632] Waited for 194.932043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vptrg
	I0930 20:01:12.117009   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vptrg
	I0930 20:01:12.117015   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:12.117022   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:12.117026   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:12.121372   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:01:12.317480   26315 request.go:632] Waited for 195.395103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:12.317541   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:12.317546   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:12.317553   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:12.317556   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:12.321223   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:12.321777   26315 pod_ready.go:93] pod "kube-proxy-vptrg" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:12.321796   26315 pod_ready.go:82] duration metric: took 399.864157ms for pod "kube-proxy-vptrg" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:12.321806   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:12.516927   26315 request.go:632] Waited for 195.058252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293
	I0930 20:01:12.517009   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293
	I0930 20:01:12.517015   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:12.517022   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:12.517029   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:12.520681   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:12.717635   26315 request.go:632] Waited for 196.390201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:12.717694   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:12.717698   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:12.717706   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:12.717714   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:12.721311   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:12.721886   26315 pod_ready.go:93] pod "kube-scheduler-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:12.721903   26315 pod_ready.go:82] duration metric: took 400.091381ms for pod "kube-scheduler-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:12.721913   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:12.917094   26315 request.go:632] Waited for 195.106579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293-m02
	I0930 20:01:12.917184   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293-m02
	I0930 20:01:12.917193   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:12.917203   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:12.917212   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:12.921090   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:13.117142   26315 request.go:632] Waited for 195.345819ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:13.117216   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:13.117221   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:13.117229   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:13.117232   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:13.120777   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:13.121215   26315 pod_ready.go:93] pod "kube-scheduler-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:13.121232   26315 pod_ready.go:82] duration metric: took 399.313081ms for pod "kube-scheduler-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:13.121242   26315 pod_ready.go:39] duration metric: took 3.200834368s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 20:01:13.121266   26315 api_server.go:52] waiting for apiserver process to appear ...
	I0930 20:01:13.121324   26315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 20:01:13.137767   26315 api_server.go:72] duration metric: took 22.035280113s to wait for apiserver process to appear ...
	I0930 20:01:13.137797   26315 api_server.go:88] waiting for apiserver healthz status ...
	I0930 20:01:13.137828   26315 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I0930 20:01:13.141994   26315 api_server.go:279] https://192.168.39.3:8443/healthz returned 200:
	ok
	I0930 20:01:13.142067   26315 round_trippers.go:463] GET https://192.168.39.3:8443/version
	I0930 20:01:13.142074   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:13.142082   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:13.142090   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:13.142859   26315 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0930 20:01:13.142975   26315 api_server.go:141] control plane version: v1.31.1
	I0930 20:01:13.142993   26315 api_server.go:131] duration metric: took 5.190596ms to wait for apiserver health ...
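
The healthz wait above is an HTTPS GET against the apiserver's /healthz endpoint, succeeding once it returns 200 with body "ok". A minimal sketch of such a probe follows; TLS verification is skipped purely to keep the example short, whereas the real check authenticates with the cluster's client certificates.

    // Hypothetical sketch: probe the apiserver's /healthz endpoint and expect "ok".
    // TLS verification is skipped only to keep the example short.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.3:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
    }
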
	I0930 20:01:13.143001   26315 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 20:01:13.317422   26315 request.go:632] Waited for 174.359049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:01:13.317472   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:01:13.317478   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:13.317484   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:13.317488   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:13.321962   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:01:13.326370   26315 system_pods.go:59] 17 kube-system pods found
	I0930 20:01:13.326406   26315 system_pods.go:61] "coredns-7c65d6cfc9-x7zjp" [b5b20ed2-1d94-49b9-ab9e-17e27d1012d0] Running
	I0930 20:01:13.326411   26315 system_pods.go:61] "coredns-7c65d6cfc9-z4bkv" [c6ba0288-138e-4690-a68d-6d6378e28deb] Running
	I0930 20:01:13.326415   26315 system_pods.go:61] "etcd-ha-805293" [399ae7f6-cec9-4e8d-bda2-6c85dbcc5613] Running
	I0930 20:01:13.326420   26315 system_pods.go:61] "etcd-ha-805293-m02" [06ff461f-0ed1-4010-bcf7-1e82e4a589eb] Running
	I0930 20:01:13.326425   26315 system_pods.go:61] "kindnet-lfldt" [62cfaae6-e635-4ba4-a0db-77d008d12706] Running
	I0930 20:01:13.326429   26315 system_pods.go:61] "kindnet-slhtm" [a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88] Running
	I0930 20:01:13.326432   26315 system_pods.go:61] "kube-apiserver-ha-805293" [e975ca94-0069-4dfc-bc42-fa14fff226d5] Running
	I0930 20:01:13.326435   26315 system_pods.go:61] "kube-apiserver-ha-805293-m02" [c0f6d06d-f2d3-4796-ba43-16db58da16f7] Running
	I0930 20:01:13.326438   26315 system_pods.go:61] "kube-controller-manager-ha-805293" [01616da3-61eb-494b-a55c-28acaa308938] Running
	I0930 20:01:13.326442   26315 system_pods.go:61] "kube-controller-manager-ha-805293-m02" [14e035c1-fd94-43ab-aa98-3f20108eba57] Running
	I0930 20:01:13.326445   26315 system_pods.go:61] "kube-proxy-6gnt4" [a90b0c3f-e9c3-4cb9-8773-8253bd72ab51] Running
	I0930 20:01:13.326448   26315 system_pods.go:61] "kube-proxy-vptrg" [324c92ea-b82f-4efa-b63c-4c590bbf214d] Running
	I0930 20:01:13.326451   26315 system_pods.go:61] "kube-scheduler-ha-805293" [fbff9dea-1599-43ab-bb92-df8c5231bb87] Running
	I0930 20:01:13.326454   26315 system_pods.go:61] "kube-scheduler-ha-805293-m02" [9e69f915-83ac-48de-9bd6-3d245a2e82be] Running
	I0930 20:01:13.326457   26315 system_pods.go:61] "kube-vip-ha-805293" [9c629f9e-1b42-4680-9fd8-2dae4cec07f8] Running
	I0930 20:01:13.326459   26315 system_pods.go:61] "kube-vip-ha-805293-m02" [ec99538b-4f84-4078-b64d-23086cbf2c45] Running
	I0930 20:01:13.326462   26315 system_pods.go:61] "storage-provisioner" [1912fdf8-d789-4ba9-99ff-c87ccbf330ec] Running
	I0930 20:01:13.326467   26315 system_pods.go:74] duration metric: took 183.46129ms to wait for pod list to return data ...
	I0930 20:01:13.326477   26315 default_sa.go:34] waiting for default service account to be created ...
	I0930 20:01:13.516843   26315 request.go:632] Waited for 190.295336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/default/serviceaccounts
	I0930 20:01:13.516914   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/default/serviceaccounts
	I0930 20:01:13.516919   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:13.516926   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:13.516929   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:13.520919   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:13.521167   26315 default_sa.go:45] found service account: "default"
	I0930 20:01:13.521184   26315 default_sa.go:55] duration metric: took 194.701824ms for default service account to be created ...
	I0930 20:01:13.521193   26315 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 20:01:13.717380   26315 request.go:632] Waited for 196.119354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:01:13.717451   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:01:13.717458   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:13.717467   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:13.717471   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:13.722690   26315 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 20:01:13.727139   26315 system_pods.go:86] 17 kube-system pods found
	I0930 20:01:13.727168   26315 system_pods.go:89] "coredns-7c65d6cfc9-x7zjp" [b5b20ed2-1d94-49b9-ab9e-17e27d1012d0] Running
	I0930 20:01:13.727174   26315 system_pods.go:89] "coredns-7c65d6cfc9-z4bkv" [c6ba0288-138e-4690-a68d-6d6378e28deb] Running
	I0930 20:01:13.727179   26315 system_pods.go:89] "etcd-ha-805293" [399ae7f6-cec9-4e8d-bda2-6c85dbcc5613] Running
	I0930 20:01:13.727184   26315 system_pods.go:89] "etcd-ha-805293-m02" [06ff461f-0ed1-4010-bcf7-1e82e4a589eb] Running
	I0930 20:01:13.727188   26315 system_pods.go:89] "kindnet-lfldt" [62cfaae6-e635-4ba4-a0db-77d008d12706] Running
	I0930 20:01:13.727193   26315 system_pods.go:89] "kindnet-slhtm" [a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88] Running
	I0930 20:01:13.727198   26315 system_pods.go:89] "kube-apiserver-ha-805293" [e975ca94-0069-4dfc-bc42-fa14fff226d5] Running
	I0930 20:01:13.727204   26315 system_pods.go:89] "kube-apiserver-ha-805293-m02" [c0f6d06d-f2d3-4796-ba43-16db58da16f7] Running
	I0930 20:01:13.727209   26315 system_pods.go:89] "kube-controller-manager-ha-805293" [01616da3-61eb-494b-a55c-28acaa308938] Running
	I0930 20:01:13.727217   26315 system_pods.go:89] "kube-controller-manager-ha-805293-m02" [14e035c1-fd94-43ab-aa98-3f20108eba57] Running
	I0930 20:01:13.727230   26315 system_pods.go:89] "kube-proxy-6gnt4" [a90b0c3f-e9c3-4cb9-8773-8253bd72ab51] Running
	I0930 20:01:13.727235   26315 system_pods.go:89] "kube-proxy-vptrg" [324c92ea-b82f-4efa-b63c-4c590bbf214d] Running
	I0930 20:01:13.727241   26315 system_pods.go:89] "kube-scheduler-ha-805293" [fbff9dea-1599-43ab-bb92-df8c5231bb87] Running
	I0930 20:01:13.727247   26315 system_pods.go:89] "kube-scheduler-ha-805293-m02" [9e69f915-83ac-48de-9bd6-3d245a2e82be] Running
	I0930 20:01:13.727252   26315 system_pods.go:89] "kube-vip-ha-805293" [9c629f9e-1b42-4680-9fd8-2dae4cec07f8] Running
	I0930 20:01:13.727257   26315 system_pods.go:89] "kube-vip-ha-805293-m02" [ec99538b-4f84-4078-b64d-23086cbf2c45] Running
	I0930 20:01:13.727261   26315 system_pods.go:89] "storage-provisioner" [1912fdf8-d789-4ba9-99ff-c87ccbf330ec] Running
	I0930 20:01:13.727270   26315 system_pods.go:126] duration metric: took 206.072644ms to wait for k8s-apps to be running ...
	I0930 20:01:13.727277   26315 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 20:01:13.727327   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 20:01:13.741981   26315 system_svc.go:56] duration metric: took 14.693769ms WaitForService to wait for kubelet
	I0930 20:01:13.742010   26315 kubeadm.go:582] duration metric: took 22.639532003s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 20:01:13.742027   26315 node_conditions.go:102] verifying NodePressure condition ...
	I0930 20:01:13.917345   26315 request.go:632] Waited for 175.232926ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes
	I0930 20:01:13.917397   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes
	I0930 20:01:13.917402   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:13.917410   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:13.917413   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:13.921853   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:01:13.922642   26315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:01:13.922674   26315 node_conditions.go:123] node cpu capacity is 2
	I0930 20:01:13.922690   26315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:01:13.922694   26315 node_conditions.go:123] node cpu capacity is 2
	I0930 20:01:13.922699   26315 node_conditions.go:105] duration metric: took 180.667513ms to run NodePressure ...
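
The node_conditions lines above come down to listing the nodes and reading the capacity each one reports (cpu and ephemeral-storage here). A minimal client-go sketch of that read, hypothetical rather than minikube's actual code:

    // Hypothetical sketch: list the nodes and print the capacity each reports,
    // mirroring the node_conditions output above.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
                n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
        }
    }
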
	I0930 20:01:13.922708   26315 start.go:241] waiting for startup goroutines ...
	I0930 20:01:13.922733   26315 start.go:255] writing updated cluster config ...
	I0930 20:01:13.925048   26315 out.go:201] 
	I0930 20:01:13.926843   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:01:13.926954   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:01:13.928893   26315 out.go:177] * Starting "ha-805293-m03" control-plane node in "ha-805293" cluster
	I0930 20:01:13.930308   26315 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 20:01:13.930336   26315 cache.go:56] Caching tarball of preloaded images
	I0930 20:01:13.930467   26315 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 20:01:13.930485   26315 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 20:01:13.930582   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:01:13.930765   26315 start.go:360] acquireMachinesLock for ha-805293-m03: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 20:01:13.930817   26315 start.go:364] duration metric: took 28.082µs to acquireMachinesLock for "ha-805293-m03"
	I0930 20:01:13.930836   26315 start.go:93] Provisioning new machine with config: &{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:01:13.930923   26315 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0930 20:01:13.932766   26315 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 20:01:13.932890   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:01:13.932929   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:01:13.949248   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36881
	I0930 20:01:13.949763   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:01:13.950280   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:01:13.950304   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:01:13.950634   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:01:13.950970   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetMachineName
	I0930 20:01:13.951189   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:13.951448   26315 start.go:159] libmachine.API.Create for "ha-805293" (driver="kvm2")
	I0930 20:01:13.951489   26315 client.go:168] LocalClient.Create starting
	I0930 20:01:13.951565   26315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem
	I0930 20:01:13.951611   26315 main.go:141] libmachine: Decoding PEM data...
	I0930 20:01:13.951631   26315 main.go:141] libmachine: Parsing certificate...
	I0930 20:01:13.951696   26315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem
	I0930 20:01:13.951724   26315 main.go:141] libmachine: Decoding PEM data...
	I0930 20:01:13.951742   26315 main.go:141] libmachine: Parsing certificate...
	I0930 20:01:13.951770   26315 main.go:141] libmachine: Running pre-create checks...
	I0930 20:01:13.951780   26315 main.go:141] libmachine: (ha-805293-m03) Calling .PreCreateCheck
	I0930 20:01:13.951958   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetConfigRaw
	I0930 20:01:13.952389   26315 main.go:141] libmachine: Creating machine...
	I0930 20:01:13.952404   26315 main.go:141] libmachine: (ha-805293-m03) Calling .Create
	I0930 20:01:13.952539   26315 main.go:141] libmachine: (ha-805293-m03) Creating KVM machine...
	I0930 20:01:13.953896   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found existing default KVM network
	I0930 20:01:13.954082   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found existing private KVM network mk-ha-805293
	I0930 20:01:13.954276   26315 main.go:141] libmachine: (ha-805293-m03) Setting up store path in /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03 ...
	I0930 20:01:13.954303   26315 main.go:141] libmachine: (ha-805293-m03) Building disk image from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 20:01:13.954425   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:13.954267   27054 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:01:13.954521   26315 main.go:141] libmachine: (ha-805293-m03) Downloading /home/jenkins/minikube-integration/19736-7672/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 20:01:14.186819   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:14.186689   27054 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa...
	I0930 20:01:14.467265   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:14.467127   27054 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/ha-805293-m03.rawdisk...
	I0930 20:01:14.467311   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Writing magic tar header
	I0930 20:01:14.467327   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Writing SSH key tar header
	I0930 20:01:14.467340   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:14.467280   27054 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03 ...
	I0930 20:01:14.467434   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03
	I0930 20:01:14.467495   26315 main.go:141] libmachine: (ha-805293-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03 (perms=drwx------)
	I0930 20:01:14.467509   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines
	I0930 20:01:14.467520   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:01:14.467545   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672
	I0930 20:01:14.467563   26315 main.go:141] libmachine: (ha-805293-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines (perms=drwxr-xr-x)
	I0930 20:01:14.467577   26315 main.go:141] libmachine: (ha-805293-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube (perms=drwxr-xr-x)
	I0930 20:01:14.467590   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 20:01:14.467603   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home/jenkins
	I0930 20:01:14.467614   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home
	I0930 20:01:14.467622   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Skipping /home - not owner
	I0930 20:01:14.467636   26315 main.go:141] libmachine: (ha-805293-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672 (perms=drwxrwxr-x)
	I0930 20:01:14.467659   26315 main.go:141] libmachine: (ha-805293-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 20:01:14.467677   26315 main.go:141] libmachine: (ha-805293-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 20:01:14.467702   26315 main.go:141] libmachine: (ha-805293-m03) Creating domain...
	I0930 20:01:14.468847   26315 main.go:141] libmachine: (ha-805293-m03) define libvirt domain using xml: 
	I0930 20:01:14.468871   26315 main.go:141] libmachine: (ha-805293-m03) <domain type='kvm'>
	I0930 20:01:14.468881   26315 main.go:141] libmachine: (ha-805293-m03)   <name>ha-805293-m03</name>
	I0930 20:01:14.468899   26315 main.go:141] libmachine: (ha-805293-m03)   <memory unit='MiB'>2200</memory>
	I0930 20:01:14.468932   26315 main.go:141] libmachine: (ha-805293-m03)   <vcpu>2</vcpu>
	I0930 20:01:14.468950   26315 main.go:141] libmachine: (ha-805293-m03)   <features>
	I0930 20:01:14.468968   26315 main.go:141] libmachine: (ha-805293-m03)     <acpi/>
	I0930 20:01:14.468978   26315 main.go:141] libmachine: (ha-805293-m03)     <apic/>
	I0930 20:01:14.469001   26315 main.go:141] libmachine: (ha-805293-m03)     <pae/>
	I0930 20:01:14.469014   26315 main.go:141] libmachine: (ha-805293-m03)     
	I0930 20:01:14.469041   26315 main.go:141] libmachine: (ha-805293-m03)   </features>
	I0930 20:01:14.469062   26315 main.go:141] libmachine: (ha-805293-m03)   <cpu mode='host-passthrough'>
	I0930 20:01:14.469074   26315 main.go:141] libmachine: (ha-805293-m03)   
	I0930 20:01:14.469080   26315 main.go:141] libmachine: (ha-805293-m03)   </cpu>
	I0930 20:01:14.469091   26315 main.go:141] libmachine: (ha-805293-m03)   <os>
	I0930 20:01:14.469107   26315 main.go:141] libmachine: (ha-805293-m03)     <type>hvm</type>
	I0930 20:01:14.469115   26315 main.go:141] libmachine: (ha-805293-m03)     <boot dev='cdrom'/>
	I0930 20:01:14.469124   26315 main.go:141] libmachine: (ha-805293-m03)     <boot dev='hd'/>
	I0930 20:01:14.469143   26315 main.go:141] libmachine: (ha-805293-m03)     <bootmenu enable='no'/>
	I0930 20:01:14.469154   26315 main.go:141] libmachine: (ha-805293-m03)   </os>
	I0930 20:01:14.469164   26315 main.go:141] libmachine: (ha-805293-m03)   <devices>
	I0930 20:01:14.469248   26315 main.go:141] libmachine: (ha-805293-m03)     <disk type='file' device='cdrom'>
	I0930 20:01:14.469284   26315 main.go:141] libmachine: (ha-805293-m03)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/boot2docker.iso'/>
	I0930 20:01:14.469299   26315 main.go:141] libmachine: (ha-805293-m03)       <target dev='hdc' bus='scsi'/>
	I0930 20:01:14.469305   26315 main.go:141] libmachine: (ha-805293-m03)       <readonly/>
	I0930 20:01:14.469314   26315 main.go:141] libmachine: (ha-805293-m03)     </disk>
	I0930 20:01:14.469321   26315 main.go:141] libmachine: (ha-805293-m03)     <disk type='file' device='disk'>
	I0930 20:01:14.469350   26315 main.go:141] libmachine: (ha-805293-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 20:01:14.469366   26315 main.go:141] libmachine: (ha-805293-m03)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/ha-805293-m03.rawdisk'/>
	I0930 20:01:14.469381   26315 main.go:141] libmachine: (ha-805293-m03)       <target dev='hda' bus='virtio'/>
	I0930 20:01:14.469387   26315 main.go:141] libmachine: (ha-805293-m03)     </disk>
	I0930 20:01:14.469400   26315 main.go:141] libmachine: (ha-805293-m03)     <interface type='network'>
	I0930 20:01:14.469410   26315 main.go:141] libmachine: (ha-805293-m03)       <source network='mk-ha-805293'/>
	I0930 20:01:14.469421   26315 main.go:141] libmachine: (ha-805293-m03)       <model type='virtio'/>
	I0930 20:01:14.469427   26315 main.go:141] libmachine: (ha-805293-m03)     </interface>
	I0930 20:01:14.469437   26315 main.go:141] libmachine: (ha-805293-m03)     <interface type='network'>
	I0930 20:01:14.469456   26315 main.go:141] libmachine: (ha-805293-m03)       <source network='default'/>
	I0930 20:01:14.469482   26315 main.go:141] libmachine: (ha-805293-m03)       <model type='virtio'/>
	I0930 20:01:14.469512   26315 main.go:141] libmachine: (ha-805293-m03)     </interface>
	I0930 20:01:14.469521   26315 main.go:141] libmachine: (ha-805293-m03)     <serial type='pty'>
	I0930 20:01:14.469540   26315 main.go:141] libmachine: (ha-805293-m03)       <target port='0'/>
	I0930 20:01:14.469572   26315 main.go:141] libmachine: (ha-805293-m03)     </serial>
	I0930 20:01:14.469589   26315 main.go:141] libmachine: (ha-805293-m03)     <console type='pty'>
	I0930 20:01:14.469603   26315 main.go:141] libmachine: (ha-805293-m03)       <target type='serial' port='0'/>
	I0930 20:01:14.469614   26315 main.go:141] libmachine: (ha-805293-m03)     </console>
	I0930 20:01:14.469623   26315 main.go:141] libmachine: (ha-805293-m03)     <rng model='virtio'>
	I0930 20:01:14.469631   26315 main.go:141] libmachine: (ha-805293-m03)       <backend model='random'>/dev/random</backend>
	I0930 20:01:14.469642   26315 main.go:141] libmachine: (ha-805293-m03)     </rng>
	I0930 20:01:14.469648   26315 main.go:141] libmachine: (ha-805293-m03)     
	I0930 20:01:14.469658   26315 main.go:141] libmachine: (ha-805293-m03)     
	I0930 20:01:14.469664   26315 main.go:141] libmachine: (ha-805293-m03)   </devices>
	I0930 20:01:14.469672   26315 main.go:141] libmachine: (ha-805293-m03) </domain>
	I0930 20:01:14.469677   26315 main.go:141] libmachine: (ha-805293-m03) 
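
The <domain> XML logged above is what the kvm2 driver hands to libvirt. A rough sketch of defining and starting a domain from such XML, assuming the libvirt.org/go/libvirt bindings; the file name is chosen for illustration and this is not the driver's actual code:

    // Hypothetical sketch, assuming the libvirt.org/go/libvirt bindings: define and
    // start a domain from XML like the definition printed above.
    package main

    import (
        "os"

        "libvirt.org/go/libvirt"
    )

    func main() {
        xml, err := os.ReadFile("ha-805293-m03.xml") // the <domain> definition
        if err != nil {
            panic(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI above
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(string(xml)) // persist the definition
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // boot it ("Creating domain...")
            panic(err)
        }
    }
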
	I0930 20:01:14.476673   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:7e:5d:5f in network default
	I0930 20:01:14.477269   26315 main.go:141] libmachine: (ha-805293-m03) Ensuring networks are active...
	I0930 20:01:14.477295   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:14.478121   26315 main.go:141] libmachine: (ha-805293-m03) Ensuring network default is active
	I0930 20:01:14.478526   26315 main.go:141] libmachine: (ha-805293-m03) Ensuring network mk-ha-805293 is active
	I0930 20:01:14.478957   26315 main.go:141] libmachine: (ha-805293-m03) Getting domain xml...
	I0930 20:01:14.479718   26315 main.go:141] libmachine: (ha-805293-m03) Creating domain...
	I0930 20:01:15.747292   26315 main.go:141] libmachine: (ha-805293-m03) Waiting to get IP...
	I0930 20:01:15.748220   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:15.748679   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:15.748743   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:15.748666   27054 retry.go:31] will retry after 284.785124ms: waiting for machine to come up
	I0930 20:01:16.035256   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:16.035716   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:16.035831   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:16.035661   27054 retry.go:31] will retry after 335.488124ms: waiting for machine to come up
	I0930 20:01:16.373109   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:16.373683   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:16.373706   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:16.373645   27054 retry.go:31] will retry after 461.768045ms: waiting for machine to come up
	I0930 20:01:16.837400   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:16.837942   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:16.838002   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:16.837899   27054 retry.go:31] will retry after 451.939776ms: waiting for machine to come up
	I0930 20:01:17.291224   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:17.291638   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:17.291662   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:17.291600   27054 retry.go:31] will retry after 601.468058ms: waiting for machine to come up
	I0930 20:01:17.894045   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:17.894474   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:17.894502   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:17.894444   27054 retry.go:31] will retry after 685.014003ms: waiting for machine to come up
	I0930 20:01:18.581469   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:18.581905   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:18.581940   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:18.581886   27054 retry.go:31] will retry after 901.632295ms: waiting for machine to come up
	I0930 20:01:19.485606   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:19.486144   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:19.486174   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:19.486068   27054 retry.go:31] will retry after 1.002316049s: waiting for machine to come up
	I0930 20:01:20.489568   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:20.490064   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:20.490086   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:20.490017   27054 retry.go:31] will retry after 1.384559526s: waiting for machine to come up
	I0930 20:01:21.875542   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:21.875885   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:21.875904   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:21.875821   27054 retry.go:31] will retry after 1.560882287s: waiting for machine to come up
	I0930 20:01:23.438575   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:23.439019   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:23.439051   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:23.438971   27054 retry.go:31] will retry after 1.966635221s: waiting for machine to come up
	I0930 20:01:25.407626   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:25.408136   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:25.408170   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:25.408088   27054 retry.go:31] will retry after 2.861827785s: waiting for machine to come up
	I0930 20:01:28.272997   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:28.273395   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:28.273417   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:28.273357   27054 retry.go:31] will retry after 2.760760648s: waiting for machine to come up
	I0930 20:01:31.035244   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:31.035758   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:31.035806   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:31.035729   27054 retry.go:31] will retry after 3.889423891s: waiting for machine to come up
	I0930 20:01:34.927053   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:34.927650   26315 main.go:141] libmachine: (ha-805293-m03) Found IP for machine: 192.168.39.227
	I0930 20:01:34.927682   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has current primary IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:34.927690   26315 main.go:141] libmachine: (ha-805293-m03) Reserving static IP address...
	I0930 20:01:34.928071   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find host DHCP lease matching {name: "ha-805293-m03", mac: "52:54:00:ce:66:df", ip: "192.168.39.227"} in network mk-ha-805293
	I0930 20:01:35.005095   26315 main.go:141] libmachine: (ha-805293-m03) Reserved static IP address: 192.168.39.227
	I0930 20:01:35.005128   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Getting to WaitForSSH function...
	I0930 20:01:35.005135   26315 main.go:141] libmachine: (ha-805293-m03) Waiting for SSH to be available...
	I0930 20:01:35.007521   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.008053   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.008080   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.008244   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Using SSH client type: external
	I0930 20:01:35.008262   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa (-rw-------)
	I0930 20:01:35.008294   26315 main.go:141] libmachine: (ha-805293-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 20:01:35.008309   26315 main.go:141] libmachine: (ha-805293-m03) DBG | About to run SSH command:
	I0930 20:01:35.008328   26315 main.go:141] libmachine: (ha-805293-m03) DBG | exit 0
	I0930 20:01:35.131490   26315 main.go:141] libmachine: (ha-805293-m03) DBG | SSH cmd err, output: <nil>: 
	I0930 20:01:35.131786   26315 main.go:141] libmachine: (ha-805293-m03) KVM machine creation complete!
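The "will retry after …" lines above show libmachine polling the libvirt DHCP leases with a steadily growing delay until the guest reports an IP, then probing SSH with an external `ssh … exit 0`. Below is a minimal, self-contained Go sketch of that retry-with-growing-delay pattern; the helper name, delays, and jitter are illustrative assumptions, not minikube's actual retry.go API.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or maxWait elapses,
// growing the delay between attempts roughly the way the log above does
// (sub-second at first, a few seconds near the end).
func retryWithBackoff(maxWait time.Duration, fn func() error) error {
	deadline := time.Now().Add(maxWait)
	delay := 300 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		// grow the delay with a little jitter, capped at a few seconds
		delay = delay*3/2 + time.Duration(rand.Intn(200))*time.Millisecond
		if delay > 4*time.Second {
			delay = 4 * time.Second
		}
	}
}

func main() {
	tries := 0
	err := retryWithBackoff(30*time.Second, func() error {
		tries++
		if tries < 5 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("result:", err)
}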
	I0930 20:01:35.132088   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetConfigRaw
	I0930 20:01:35.132882   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:35.133160   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:35.133330   26315 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 20:01:35.133343   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetState
	I0930 20:01:35.134758   26315 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 20:01:35.134778   26315 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 20:01:35.134789   26315 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 20:01:35.134797   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.137025   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.137368   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.137394   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.137501   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:35.137683   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.137839   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.137997   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:35.138162   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:01:35.138394   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0930 20:01:35.138405   26315 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 20:01:35.238733   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 20:01:35.238763   26315 main.go:141] libmachine: Detecting the provisioner...
	I0930 20:01:35.238775   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.242022   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.242527   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.242562   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.242839   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:35.243050   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.243235   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.243427   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:35.243630   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:01:35.243832   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0930 20:01:35.243850   26315 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 20:01:35.348183   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 20:01:35.348252   26315 main.go:141] libmachine: found compatible host: buildroot
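The provisioner detection above amounts to running `cat /etc/os-release` over SSH and matching the ID field against known distributions. A rough, standalone sketch of that parsing step, assuming the file contents are already in hand (the function name is illustrative):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease extracts KEY=VALUE pairs from /etc/os-release style text,
// stripping optional quotes, e.g. ID=buildroot, PRETTY_NAME="Buildroot 2023.02.9".
func parseOSRelease(text string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(text))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(sample)
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host:", info["ID"])
	}
}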
	I0930 20:01:35.348261   26315 main.go:141] libmachine: Provisioning with buildroot...
	I0930 20:01:35.348268   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetMachineName
	I0930 20:01:35.348498   26315 buildroot.go:166] provisioning hostname "ha-805293-m03"
	I0930 20:01:35.348524   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetMachineName
	I0930 20:01:35.348749   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.351890   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.352398   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.352424   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.352577   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:35.352756   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.352894   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.353007   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:35.353167   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:01:35.353367   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0930 20:01:35.353384   26315 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-805293-m03 && echo "ha-805293-m03" | sudo tee /etc/hostname
	I0930 20:01:35.473967   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-805293-m03
	
	I0930 20:01:35.473997   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.476729   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.477054   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.477085   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.477369   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:35.477567   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.477748   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.477907   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:35.478077   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:01:35.478253   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0930 20:01:35.478270   26315 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-805293-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-805293-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-805293-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 20:01:35.591650   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
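The shell snippet above either rewrites an existing 127.0.1.1 line or appends one so the new hostname resolves locally, and does nothing if the name is already present. The same idempotent edit, sketched in Go against an in-memory copy of /etc/hosts (the function name is illustrative):

package main

import (
	"fmt"
	"strings"
)

// ensureHostname mirrors the shell above: if any line already ends with the
// hostname, leave the file alone; otherwise rewrite a 127.0.1.1 line if one
// exists, or append a new one.
func ensureHostname(hosts, hostname string) string {
	lines := strings.Split(strings.TrimRight(hosts, "\n"), "\n")
	for _, l := range lines {
		fields := strings.Fields(l)
		if len(fields) > 1 && fields[len(fields)-1] == hostname {
			return hosts // already present
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			return strings.Join(lines, "\n") + "\n"
		}
	}
	return strings.Join(append(lines, "127.0.1.1 "+hostname), "\n") + "\n"
}

func main() {
	before := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostname(before, "ha-805293-m03"))
}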
	I0930 20:01:35.591680   26315 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 20:01:35.591697   26315 buildroot.go:174] setting up certificates
	I0930 20:01:35.591707   26315 provision.go:84] configureAuth start
	I0930 20:01:35.591715   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetMachineName
	I0930 20:01:35.591952   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetIP
	I0930 20:01:35.594901   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.595262   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.595286   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.595420   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.598100   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.598602   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.598626   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.598829   26315 provision.go:143] copyHostCerts
	I0930 20:01:35.598868   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:01:35.598917   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 20:01:35.598931   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:01:35.599012   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 20:01:35.599111   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:01:35.599134   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 20:01:35.599141   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:01:35.599179   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 20:01:35.599243   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:01:35.599270   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 20:01:35.599279   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:01:35.599331   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 20:01:35.599408   26315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.ha-805293-m03 san=[127.0.0.1 192.168.39.227 ha-805293-m03 localhost minikube]
	I0930 20:01:35.796149   26315 provision.go:177] copyRemoteCerts
	I0930 20:01:35.796206   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 20:01:35.796242   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.798946   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.799340   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.799368   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.799648   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:35.799848   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.800023   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:35.800180   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa Username:docker}
	I0930 20:01:35.882427   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 20:01:35.882508   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 20:01:35.906794   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 20:01:35.906860   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 20:01:35.932049   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 20:01:35.932131   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 20:01:35.957426   26315 provision.go:87] duration metric: took 365.707269ms to configureAuth
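The configureAuth step above generates a server certificate whose SANs cover 127.0.0.1, the node IP, the node name, localhost and minikube, signed by the shared minikube CA, and copies it to /etc/docker on the guest. A compressed sketch of issuing that kind of SAN certificate with Go's crypto/x509 follows; the CA here is generated on the spot purely for the example, whereas minikube loads ca.pem/ca-key.pem from its certs directory.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA for the example; minikube reuses an existing minikubeCA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	must(err)
	caCert, err := x509.ParseCertificate(caDER)
	must(err)

	// Server cert carrying the SANs seen in the provision.go line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-805293-m03"}},
		DNSNames:     []string{"ha-805293-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.227")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}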
	I0930 20:01:35.957459   26315 buildroot.go:189] setting minikube options for container-runtime
	I0930 20:01:35.957679   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:01:35.957795   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.960499   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.960961   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.960996   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.961176   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:35.961403   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.961575   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.961765   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:35.961966   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:01:35.962139   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0930 20:01:35.962153   26315 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 20:01:36.182253   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 20:01:36.182280   26315 main.go:141] libmachine: Checking connection to Docker...
	I0930 20:01:36.182288   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetURL
	I0930 20:01:36.183907   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Using libvirt version 6000000
	I0930 20:01:36.186215   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.186549   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.186590   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.186762   26315 main.go:141] libmachine: Docker is up and running!
	I0930 20:01:36.186776   26315 main.go:141] libmachine: Reticulating splines...
	I0930 20:01:36.186783   26315 client.go:171] duration metric: took 22.235285837s to LocalClient.Create
	I0930 20:01:36.186801   26315 start.go:167] duration metric: took 22.235357522s to libmachine.API.Create "ha-805293"
	I0930 20:01:36.186810   26315 start.go:293] postStartSetup for "ha-805293-m03" (driver="kvm2")
	I0930 20:01:36.186826   26315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 20:01:36.186842   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:36.187054   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 20:01:36.187077   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:36.189228   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.189551   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.189577   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.189754   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:36.189932   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:36.190098   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:36.190211   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa Username:docker}
	I0930 20:01:36.269942   26315 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 20:01:36.274174   26315 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 20:01:36.274204   26315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 20:01:36.274281   26315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 20:01:36.274373   26315 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 20:01:36.274383   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /etc/ssl/certs/148752.pem
	I0930 20:01:36.274490   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 20:01:36.284037   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:01:36.308961   26315 start.go:296] duration metric: took 122.135978ms for postStartSetup
	I0930 20:01:36.309010   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetConfigRaw
	I0930 20:01:36.309613   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetIP
	I0930 20:01:36.312777   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.313257   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.313307   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.313687   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:01:36.313894   26315 start.go:128] duration metric: took 22.382961104s to createHost
	I0930 20:01:36.313917   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:36.316229   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.316599   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.316627   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.316783   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:36.316957   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:36.317109   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:36.317219   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:36.317366   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:01:36.317526   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0930 20:01:36.317537   26315 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 20:01:36.419858   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727726496.392744661
	
	I0930 20:01:36.419877   26315 fix.go:216] guest clock: 1727726496.392744661
	I0930 20:01:36.419884   26315 fix.go:229] Guest: 2024-09-30 20:01:36.392744661 +0000 UTC Remote: 2024-09-30 20:01:36.313905276 +0000 UTC m=+139.884995221 (delta=78.839385ms)
	I0930 20:01:36.419899   26315 fix.go:200] guest clock delta is within tolerance: 78.839385ms
	I0930 20:01:36.419904   26315 start.go:83] releasing machines lock for "ha-805293-m03", held for 22.489079696s
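The fix.go lines above compare the guest's `date +%s.%N` output with the host clock and accept the machine only when the delta is within tolerance. A rough local sketch of that comparison; the tolerance value and function name here are assumptions for illustration, not minikube's actual constants.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses a "seconds.nanoseconds" timestamp such as
// "1727726496.392744661" and returns how far it is from the local clock.
func clockDelta(guest string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guest, 64)
	if err != nil {
		return 0, err
	}
	guestTime := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guestTime), nil
}

func main() {
	const tolerance = 1 * time.Second // illustrative threshold
	d, err := clockDelta(fmt.Sprintf("%.9f", float64(time.Now().UnixNano())/1e9))
	if err != nil {
		panic(err)
	}
	if math.Abs(float64(d)) < float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", d)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", d)
	}
}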
	I0930 20:01:36.419932   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:36.420201   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetIP
	I0930 20:01:36.422678   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.423024   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.423063   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.425360   26315 out.go:177] * Found network options:
	I0930 20:01:36.426711   26315 out.go:177]   - NO_PROXY=192.168.39.3,192.168.39.220
	W0930 20:01:36.427962   26315 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 20:01:36.427990   26315 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 20:01:36.428012   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:36.428657   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:36.428857   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:36.428967   26315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 20:01:36.429007   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	W0930 20:01:36.429092   26315 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 20:01:36.429124   26315 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 20:01:36.429190   26315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 20:01:36.429211   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:36.431941   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.432202   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.432300   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.432322   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.432458   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:36.432598   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:36.432659   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.432683   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.432755   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:36.432845   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:36.432915   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa Username:docker}
	I0930 20:01:36.432995   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:36.433083   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:36.433164   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa Username:docker}
	I0930 20:01:36.661994   26315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 20:01:36.669285   26315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 20:01:36.669354   26315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 20:01:36.686879   26315 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
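The find/mv command above sidelines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, which is why 87-podman-bridge.conflist shows up as disabled. Roughly the same walk sketched in Go (the directory is the one shown in the log; this needs root on the guest, and the function name is illustrative):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI config files in dir by
// appending ".mk_disabled", skipping files that are already disabled.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return disabled, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Println("skipping:", err) // e.g. not running on the guest, or not root
		return
	}
	fmt.Println("disabled:", disabled)
}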
	I0930 20:01:36.686911   26315 start.go:495] detecting cgroup driver to use...
	I0930 20:01:36.687008   26315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 20:01:36.703695   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 20:01:36.717831   26315 docker.go:217] disabling cri-docker service (if available) ...
	I0930 20:01:36.717898   26315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 20:01:36.732194   26315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 20:01:36.746205   26315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 20:01:36.873048   26315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 20:01:37.031067   26315 docker.go:233] disabling docker service ...
	I0930 20:01:37.031142   26315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 20:01:37.047034   26315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 20:01:37.059962   26315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 20:01:37.191501   26315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 20:01:37.302357   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 20:01:37.316910   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 20:01:37.336669   26315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 20:01:37.336739   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.347286   26315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 20:01:37.347361   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.357984   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.368059   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.379248   26315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 20:01:37.390460   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.401206   26315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.418758   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
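The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.10, forces cgroup_manager to cgroupfs, and injects a default_sysctls entry for unprivileged ports. The same kind of line-oriented rewrite, sketched in Go against the config text (key names come from the log; the function itself is illustrative):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the equivalents of the first two sed edits in the
// log: replace any pause_image / cgroup_manager assignment with the desired value.
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.10", "cgroupfs"))
}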
	I0930 20:01:37.428841   26315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 20:01:37.438255   26315 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 20:01:37.438328   26315 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 20:01:37.451070   26315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
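The two commands above show the fallback path: when the bridge-nf-call-iptables sysctl can't be read (status 255 here, "which might be okay"), load br_netfilter, then enable IPv4 forwarding. A hedged sketch of that check in Go, shelling out the same way; it requires root on a Linux guest and error handling is simplified.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Equivalent of: sudo sysctl net.bridge.bridge-nf-call-iptables
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		fmt.Println("bridge netfilter not present, loading br_netfilter:", err)
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v\n%s", err, out)
			return
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("could not enable ip_forward (not root?):", err)
		return
	}
	fmt.Println("bridge netfilter and ip_forward configured")
}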
	I0930 20:01:37.460818   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:01:37.578097   26315 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 20:01:37.670992   26315 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 20:01:37.671072   26315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 20:01:37.675792   26315 start.go:563] Will wait 60s for crictl version
	I0930 20:01:37.675847   26315 ssh_runner.go:195] Run: which crictl
	I0930 20:01:37.679190   26315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 20:01:37.718042   26315 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 20:01:37.718121   26315 ssh_runner.go:195] Run: crio --version
	I0930 20:01:37.745873   26315 ssh_runner.go:195] Run: crio --version
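After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock and then for a working crictl before reporting the runtime version. A minimal sketch of that wait loop (the socket path is taken from the log; the poll interval is an assumption):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket polls until the unix socket accepts a connection or the
// timeout expires, mirroring "Will wait 60s for socket path ...".
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("socket %s not ready: %w", path, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}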
	I0930 20:01:37.774031   26315 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 20:01:37.775415   26315 out.go:177]   - env NO_PROXY=192.168.39.3
	I0930 20:01:37.776644   26315 out.go:177]   - env NO_PROXY=192.168.39.3,192.168.39.220
	I0930 20:01:37.777763   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetIP
	I0930 20:01:37.780596   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:37.780948   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:37.780970   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:37.781145   26315 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 20:01:37.785213   26315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 20:01:37.797526   26315 mustload.go:65] Loading cluster: ha-805293
	I0930 20:01:37.797767   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:01:37.798120   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:01:37.798167   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:01:37.813162   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46385
	I0930 20:01:37.813567   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:01:37.814037   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:01:37.814052   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:01:37.814397   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:01:37.814604   26315 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 20:01:37.816041   26315 host.go:66] Checking if "ha-805293" exists ...
	I0930 20:01:37.816336   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:01:37.816371   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:01:37.831585   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37645
	I0930 20:01:37.832045   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:01:37.832532   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:01:37.832557   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:01:37.832860   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:01:37.833026   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:01:37.833192   26315 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293 for IP: 192.168.39.227
	I0930 20:01:37.833209   26315 certs.go:194] generating shared ca certs ...
	I0930 20:01:37.833229   26315 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:01:37.833416   26315 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 20:01:37.833471   26315 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 20:01:37.833484   26315 certs.go:256] generating profile certs ...
	I0930 20:01:37.833587   26315 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key
	I0930 20:01:37.833619   26315 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.07a59e55
	I0930 20:01:37.833638   26315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.07a59e55 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.3 192.168.39.220 192.168.39.227 192.168.39.254]
	I0930 20:01:38.116566   26315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.07a59e55 ...
	I0930 20:01:38.116596   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.07a59e55: {Name:mkc0cd033bb8a494a4cf8a08dfd67f55b67932e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:01:38.116763   26315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.07a59e55 ...
	I0930 20:01:38.116776   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.07a59e55: {Name:mk85317566d0a2f89680d96c44f0e865cd88a3f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:01:38.116847   26315 certs.go:381] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.07a59e55 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt
	I0930 20:01:38.116983   26315 certs.go:385] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.07a59e55 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key
	I0930 20:01:38.117102   26315 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key
	I0930 20:01:38.117117   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 20:01:38.117131   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 20:01:38.117145   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 20:01:38.117158   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 20:01:38.117175   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 20:01:38.117187   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 20:01:38.117198   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 20:01:38.131699   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 20:01:38.131811   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 20:01:38.131856   26315 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 20:01:38.131870   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 20:01:38.131902   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 20:01:38.131926   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 20:01:38.131956   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 20:01:38.132010   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:01:38.132045   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:01:38.132066   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem -> /usr/share/ca-certificates/14875.pem
	I0930 20:01:38.132084   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /usr/share/ca-certificates/148752.pem
	I0930 20:01:38.132129   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:01:38.135411   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:01:38.135848   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:01:38.135875   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:01:38.136103   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:01:38.136307   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:01:38.136477   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:01:38.136602   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:01:38.215899   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0930 20:01:38.221340   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0930 20:01:38.232045   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0930 20:01:38.236011   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0930 20:01:38.247009   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0930 20:01:38.250999   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0930 20:01:38.261524   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0930 20:01:38.265766   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0930 20:01:38.275973   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0930 20:01:38.279940   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0930 20:01:38.289617   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0930 20:01:38.293330   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0930 20:01:38.303037   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 20:01:38.328067   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 20:01:38.353124   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 20:01:38.377109   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 20:01:38.402737   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0930 20:01:38.432128   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 20:01:38.459728   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 20:01:38.484047   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 20:01:38.508033   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 20:01:38.530855   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 20:01:38.554688   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 20:01:38.579730   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0930 20:01:38.595907   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0930 20:01:38.611657   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0930 20:01:38.627976   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0930 20:01:38.644290   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0930 20:01:38.662490   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0930 20:01:38.678795   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0930 20:01:38.694165   26315 ssh_runner.go:195] Run: openssl version
	I0930 20:01:38.699696   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 20:01:38.709850   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:01:38.714078   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:01:38.714128   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:01:38.719944   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 20:01:38.730979   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 20:01:38.741564   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 20:01:38.746132   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 20:01:38.746193   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 20:01:38.751872   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 20:01:38.763738   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 20:01:38.775831   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 20:01:38.780819   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 20:01:38.780877   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 20:01:38.786554   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 20:01:38.797347   26315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 20:01:38.801341   26315 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 20:01:38.801400   26315 kubeadm.go:934] updating node {m03 192.168.39.227 8443 v1.31.1 crio true true} ...
	I0930 20:01:38.801503   26315 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-805293-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 20:01:38.801529   26315 kube-vip.go:115] generating kube-vip config ...
	I0930 20:01:38.801578   26315 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 20:01:38.819903   26315 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 20:01:38.819976   26315 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
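	[editor's note] The manifest above is the static pod written to /etc/kubernetes/manifests/kube-vip.yaml; it advertises the HA control-plane VIP 192.168.39.254 on port 8443 with leader election and load balancing enabled. A minimal sketch, assuming only that address and port from the generated config, that probes the VIP's /healthz endpoint to confirm the API server answers through kube-vip. TLS verification is skipped because this is a throwaway probe, not a real client:

	// vipcheck.go: a minimal sketch (not from the minikube sources) that probes the
	// kube-vip control-plane VIP advertised by the manifest above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		// 192.168.39.254:8443 is the APIServerHAVIP and port from the log above.
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("VIP not reachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("VIP responded with", resp.Status)
	}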
	I0930 20:01:38.820036   26315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 20:01:38.830324   26315 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0930 20:01:38.830375   26315 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0930 20:01:38.842272   26315 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0930 20:01:38.842334   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 20:01:38.842272   26315 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0930 20:01:38.842272   26315 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0930 20:01:38.842419   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 20:01:38.842439   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 20:01:38.842489   26315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 20:01:38.842540   26315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 20:01:38.861520   26315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0930 20:01:38.861559   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0930 20:01:38.861581   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 20:01:38.861631   26315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0930 20:01:38.861657   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0930 20:01:38.861689   26315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 20:01:38.875651   26315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0930 20:01:38.875695   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
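	[editor's note] The "Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/...?checksum=file:...sha256" lines mean each missing binary is fetched straight from dl.k8s.io and verified against the published SHA-256 file before being copied into /var/lib/minikube/binaries. A minimal sketch of that download-and-verify pattern, using the kubectl URL from the log; it is not the downloader minikube itself uses:

	// fetchverify.go: a minimal sketch of downloading a Kubernetes release binary
	// and checking it against its published .sha256 file, as implied by the
	// "checksum=file:..." URLs in the log above.
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl" // URL from the log
		bin, err := fetch(base)
		if err != nil {
			panic(err)
		}
		sum, err := fetch(base + ".sha256")
		if err != nil {
			panic(err)
		}
		want := strings.Fields(string(sum))[0] // the .sha256 file holds the hex digest
		got := sha256.Sum256(bin)
		if hex.EncodeToString(got[:]) != want {
			panic("checksum mismatch")
		}
		if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
			panic(err)
		}
		fmt.Println("kubectl verified:", want)
	}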
	I0930 20:01:39.808722   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0930 20:01:39.819615   26315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0930 20:01:39.836414   26315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 20:01:39.853331   26315 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 20:01:39.869585   26315 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 20:01:39.873243   26315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 20:01:39.884957   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:01:40.006850   26315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:01:40.022775   26315 host.go:66] Checking if "ha-805293" exists ...
	I0930 20:01:40.023225   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:01:40.023284   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:01:40.040829   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I0930 20:01:40.041301   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:01:40.041861   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:01:40.041890   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:01:40.042247   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:01:40.042469   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:01:40.042649   26315 start.go:317] joinCluster: &{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:01:40.042812   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0930 20:01:40.042834   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:01:40.046258   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:01:40.046800   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:01:40.046821   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:01:40.047017   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:01:40.047286   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:01:40.047660   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:01:40.047833   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:01:40.209323   26315 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:01:40.209377   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1eegwc.d3x1pf4onbzzskk3 --discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-805293-m03 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443"
	I0930 20:02:03.693864   26315 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1eegwc.d3x1pf4onbzzskk3 --discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-805293-m03 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443": (23.484455167s)
	I0930 20:02:03.693901   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0930 20:02:04.227863   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-805293-m03 minikube.k8s.io/updated_at=2024_09_30T20_02_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022 minikube.k8s.io/name=ha-805293 minikube.k8s.io/primary=false
	I0930 20:02:04.356839   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-805293-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0930 20:02:04.460804   26315 start.go:319] duration metric: took 24.418151981s to joinCluster
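	[editor's note] At this point m03 has been joined as a third control-plane node (kubeadm join with --control-plane against the VIP), labeled with the minikube metadata, and relieved of the control-plane NoSchedule taint so it can also run workloads. A minimal client-go sketch, not part of the test itself, that lists the nodes and whether each carries the control-plane role label, so the three-node HA topology could be confirmed; it assumes the kubeconfig path the test loads further down in this log:

	// listnodes.go: a minimal sketch for confirming the HA topology after the join.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19736-7672/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			_, controlPlane := n.Labels["node-role.kubernetes.io/control-plane"]
			fmt.Printf("%s\tcontrol-plane=%v\n", n.Name, controlPlane)
		}
	}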
	I0930 20:02:04.460890   26315 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:02:04.461213   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:02:04.462900   26315 out.go:177] * Verifying Kubernetes components...
	I0930 20:02:04.464457   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:02:04.710029   26315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:02:04.776170   26315 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:02:04.776405   26315 kapi.go:59] client config for ha-805293: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key", CAFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 20:02:04.776460   26315 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.3:8443
	I0930 20:02:04.776741   26315 node_ready.go:35] waiting up to 6m0s for node "ha-805293-m03" to be "Ready" ...
	I0930 20:02:04.776826   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:04.776836   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:04.776843   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:04.776849   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:04.780756   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:05.277289   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:05.277316   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:05.277328   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:05.277336   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:05.280839   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:05.777768   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:05.777793   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:05.777802   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:05.777810   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:05.781540   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:06.277679   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:06.277703   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:06.277713   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:06.277719   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:06.281145   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:06.777911   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:06.777937   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:06.777949   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:06.777955   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:06.781669   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:06.782486   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:07.277405   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:07.277428   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:07.277435   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:07.277438   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:07.281074   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:07.776952   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:07.776984   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:07.777005   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:07.777010   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:07.780689   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:08.277555   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:08.277576   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:08.277583   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:08.277587   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:08.283539   26315 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 20:02:08.777360   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:08.777381   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:08.777390   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:08.777394   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:08.780937   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:09.277721   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:09.277758   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:09.277768   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:09.277772   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:09.285233   26315 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 20:02:09.285662   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:09.776955   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:09.776977   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:09.776987   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:09.776992   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:09.781593   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:10.277015   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:10.277033   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:10.277045   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:10.277049   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:10.281851   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:10.777471   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:10.777502   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:10.777513   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:10.777518   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:10.780948   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:11.277959   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:11.277977   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:11.277985   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:11.277989   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:11.401106   26315 round_trippers.go:574] Response Status: 200 OK in 123 milliseconds
	I0930 20:02:11.401822   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:11.777418   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:11.777439   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:11.777447   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:11.777451   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:11.780577   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:12.277563   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:12.277586   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:12.277594   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:12.277600   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:12.280508   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:12.777614   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:12.777635   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:12.777644   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:12.777649   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:12.780589   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:13.277609   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:13.277647   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:13.277658   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:13.277664   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:13.280727   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:13.777657   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:13.777684   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:13.777692   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:13.777699   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:13.781417   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:13.781894   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:14.277640   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:14.277665   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:14.277674   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:14.277678   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:14.281731   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:14.777599   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:14.777622   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:14.777633   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:14.777638   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:14.780768   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:15.277270   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:15.277293   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:15.277302   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:15.277308   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:15.281504   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:15.777339   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:15.777363   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:15.777374   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:15.777380   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:15.780737   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:16.277475   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:16.277500   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:16.277508   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:16.277513   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:16.281323   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:16.281879   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:16.777003   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:16.777026   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:16.777033   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:16.777038   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:16.780794   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:17.277324   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:17.277345   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:17.277353   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:17.277362   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:17.281320   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:17.777286   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:17.777313   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:17.777323   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:17.777329   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:17.781420   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:18.277338   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:18.277361   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:18.277369   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:18.277374   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:18.280798   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:18.777933   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:18.777955   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:18.777963   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:18.777967   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:18.781895   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:18.782295   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:19.277039   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:19.277062   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:19.277070   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:19.277074   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:19.280872   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:19.776906   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:19.776931   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:19.776941   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:19.776945   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:19.789070   26315 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0930 20:02:20.277619   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:20.277645   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:20.277657   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:20.277664   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:20.281050   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:20.777108   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:20.777132   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:20.777140   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:20.777145   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:20.780896   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:21.277715   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:21.277737   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:21.277746   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:21.277750   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:21.281198   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:21.281766   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:21.777774   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:21.777798   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:21.777812   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:21.777818   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:21.781858   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:22.277699   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:22.277726   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.277737   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.277741   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.281520   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:22.777562   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:22.777588   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.777599   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.777606   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.781172   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:22.781900   26315 node_ready.go:49] node "ha-805293-m03" has status "Ready":"True"
	I0930 20:02:22.781919   26315 node_ready.go:38] duration metric: took 18.00516261s for node "ha-805293-m03" to be "Ready" ...
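	[editor's note] The repeated GETs of /api/v1/nodes/ha-805293-m03 above (roughly every 500ms) are the test waiting for the node's Ready condition to flip to True, which took about 18s here. A minimal sketch of the same check with client-go, assuming the kubeconfig path from this log; the real node_ready.go logic differs in detail:

	// nodeready.go: a minimal sketch of polling a node until its Ready condition
	// is True, mirroring what the GET loop above is doing. Interval and timeout
	// here are illustrative (6m matches the budget stated in the log).
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19736-7672/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-805293-m03", metav1.GetOptions{})
			if err == nil && nodeReady(n) {
				fmt.Println("node Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for Ready")
	}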
	I0930 20:02:22.781930   26315 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 20:02:22.782018   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:02:22.782034   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.782045   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.782050   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.788078   26315 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 20:02:22.794707   26315 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x7zjp" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.794792   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-x7zjp
	I0930 20:02:22.794802   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.794843   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.794851   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.798283   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:22.799034   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:22.799049   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.799059   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.799063   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.802512   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:22.803017   26315 pod_ready.go:93] pod "coredns-7c65d6cfc9-x7zjp" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:22.803034   26315 pod_ready.go:82] duration metric: took 8.303758ms for pod "coredns-7c65d6cfc9-x7zjp" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.803043   26315 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-z4bkv" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.803100   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-z4bkv
	I0930 20:02:22.803108   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.803115   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.803120   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.805708   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:22.806288   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:22.806303   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.806309   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.806314   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.808794   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:22.809193   26315 pod_ready.go:93] pod "coredns-7c65d6cfc9-z4bkv" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:22.809210   26315 pod_ready.go:82] duration metric: took 6.159698ms for pod "coredns-7c65d6cfc9-z4bkv" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.809221   26315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.809280   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-805293
	I0930 20:02:22.809291   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.809302   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.809310   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.811844   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:22.812420   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:22.812435   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.812441   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.812443   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.814572   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:22.815425   26315 pod_ready.go:93] pod "etcd-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:22.815446   26315 pod_ready.go:82] duration metric: took 6.21739ms for pod "etcd-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.815467   26315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.815571   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-805293-m02
	I0930 20:02:22.815579   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.815589   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.815596   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.819297   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:22.820054   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:22.820071   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.820078   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.820082   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.822946   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:22.823362   26315 pod_ready.go:93] pod "etcd-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:22.823377   26315 pod_ready.go:82] duration metric: took 7.903457ms for pod "etcd-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.823386   26315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.977860   26315 request.go:632] Waited for 154.412889ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-805293-m03
	I0930 20:02:22.977929   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-805293-m03
	I0930 20:02:22.977936   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.977947   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.977956   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.981875   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:23.177702   26315 request.go:632] Waited for 195.197886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:23.177761   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:23.177766   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:23.177774   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:23.177779   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:23.180898   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:23.181332   26315 pod_ready.go:93] pod "etcd-ha-805293-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:23.181350   26315 pod_ready.go:82] duration metric: took 357.955948ms for pod "etcd-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
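	[editor's note] The "Waited for ... due to client-side throttling, not priority and fairness" messages come from client-go's client-side rate limiter: with QPS and Burst left at zero in the rest.Config logged earlier, client-go falls back to its defaults (5 QPS, burst 10), so the rapid per-pod and per-node GETs in this readiness sweep get spaced out. A minimal sketch of where those limits live on a rest.Config, assuming the same kubeconfig path; whether raising them is desirable for this test is a separate question:

	// qps.go: a minimal sketch showing where client-go's client-side throttling is
	// configured. With QPS/Burst unset (as in the rest.Config shown above),
	// client-go uses its defaults of 5 QPS / burst 10.
	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19736-7672/kubeconfig")
		if err != nil {
			panic(err)
		}
		cfg.QPS = 50    // allow more requests per second than the default 5
		cfg.Burst = 100 // and a larger burst than the default 10
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Printf("client ready with QPS=%v Burst=%v: %T\n", cfg.QPS, cfg.Burst, cs)
	}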
	I0930 20:02:23.181366   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:23.377609   26315 request.go:632] Waited for 196.161944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293
	I0930 20:02:23.377673   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293
	I0930 20:02:23.377681   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:23.377691   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:23.377697   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:23.381213   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:23.578424   26315 request.go:632] Waited for 196.368077ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:23.578500   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:23.578506   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:23.578514   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:23.578528   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:23.581799   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:23.582390   26315 pod_ready.go:93] pod "kube-apiserver-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:23.582406   26315 pod_ready.go:82] duration metric: took 401.034594ms for pod "kube-apiserver-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:23.582416   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:23.778543   26315 request.go:632] Waited for 196.052617ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293-m02
	I0930 20:02:23.778624   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293-m02
	I0930 20:02:23.778633   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:23.778643   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:23.778653   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:23.781828   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:23.977855   26315 request.go:632] Waited for 195.382083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:23.977924   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:23.977944   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:23.977959   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:23.977965   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:23.981372   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:23.982066   26315 pod_ready.go:93] pod "kube-apiserver-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:23.982087   26315 pod_ready.go:82] duration metric: took 399.664005ms for pod "kube-apiserver-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:23.982100   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:24.178123   26315 request.go:632] Waited for 195.960731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293-m03
	I0930 20:02:24.178196   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293-m03
	I0930 20:02:24.178203   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:24.178211   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:24.178236   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:24.182112   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:24.378558   26315 request.go:632] Waited for 195.433009ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:24.378638   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:24.378643   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:24.378650   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:24.378656   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:24.382291   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:24.382917   26315 pod_ready.go:93] pod "kube-apiserver-ha-805293-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:24.382938   26315 pod_ready.go:82] duration metric: took 400.829354ms for pod "kube-apiserver-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:24.382948   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:24.577887   26315 request.go:632] Waited for 194.863294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293
	I0930 20:02:24.577956   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293
	I0930 20:02:24.577963   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:24.577971   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:24.577978   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:24.581564   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:24.778150   26315 request.go:632] Waited for 195.36459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:24.778203   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:24.778208   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:24.778216   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:24.778221   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:24.781210   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:24.781808   26315 pod_ready.go:93] pod "kube-controller-manager-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:24.781826   26315 pod_ready.go:82] duration metric: took 398.871488ms for pod "kube-controller-manager-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:24.781839   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:24.977967   26315 request.go:632] Waited for 196.028192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293-m02
	I0930 20:02:24.978039   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293-m02
	I0930 20:02:24.978046   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:24.978055   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:24.978062   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:24.981635   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:25.177628   26315 request.go:632] Waited for 195.118197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:25.177702   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:25.177707   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:25.177715   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:25.177722   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:25.184032   26315 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 20:02:25.185117   26315 pod_ready.go:93] pod "kube-controller-manager-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:25.185151   26315 pod_ready.go:82] duration metric: took 403.303748ms for pod "kube-controller-manager-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:25.185168   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:25.378088   26315 request.go:632] Waited for 192.829504ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293-m03
	I0930 20:02:25.378247   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293-m03
	I0930 20:02:25.378262   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:25.378274   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:25.378284   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:25.382197   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:25.578183   26315 request.go:632] Waited for 195.374549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:25.578237   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:25.578241   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:25.578249   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:25.578273   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:25.581302   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:25.581967   26315 pod_ready.go:93] pod "kube-controller-manager-ha-805293-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:25.581990   26315 pod_ready.go:82] duration metric: took 396.812632ms for pod "kube-controller-manager-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:25.582004   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6gnt4" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:25.778066   26315 request.go:632] Waited for 195.961131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gnt4
	I0930 20:02:25.778120   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gnt4
	I0930 20:02:25.778125   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:25.778132   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:25.778136   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:25.781487   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:25.977671   26315 request.go:632] Waited for 195.30691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:25.977755   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:25.977762   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:25.977769   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:25.977775   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:25.981674   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:25.982338   26315 pod_ready.go:93] pod "kube-proxy-6gnt4" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:25.982360   26315 pod_ready.go:82] duration metric: took 400.349266ms for pod "kube-proxy-6gnt4" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:25.982370   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b9cpp" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:26.178400   26315 request.go:632] Waited for 195.958284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b9cpp
	I0930 20:02:26.178455   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b9cpp
	I0930 20:02:26.178460   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:26.178468   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:26.178474   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:26.181740   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:26.377643   26315 request.go:632] Waited for 195.301602ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:26.377715   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:26.377720   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:26.377730   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:26.377736   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:26.381534   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:26.382336   26315 pod_ready.go:93] pod "kube-proxy-b9cpp" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:26.382356   26315 pod_ready.go:82] duration metric: took 399.97947ms for pod "kube-proxy-b9cpp" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:26.382369   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vptrg" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:26.578135   26315 request.go:632] Waited for 195.696435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vptrg
	I0930 20:02:26.578222   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vptrg
	I0930 20:02:26.578231   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:26.578239   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:26.578246   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:26.581969   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:26.778092   26315 request.go:632] Waited for 195.270119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:26.778175   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:26.778183   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:26.778194   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:26.778204   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:26.781951   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:26.782497   26315 pod_ready.go:93] pod "kube-proxy-vptrg" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:26.782530   26315 pod_ready.go:82] duration metric: took 400.140578ms for pod "kube-proxy-vptrg" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:26.782542   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:26.978290   26315 request.go:632] Waited for 195.637761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293
	I0930 20:02:26.978361   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293
	I0930 20:02:26.978368   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:26.978377   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:26.978381   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:26.982459   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:27.178413   26315 request.go:632] Waited for 195.235139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:27.178464   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:27.178469   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:27.178476   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:27.178479   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:27.182089   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:27.182674   26315 pod_ready.go:93] pod "kube-scheduler-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:27.182695   26315 pod_ready.go:82] duration metric: took 400.147259ms for pod "kube-scheduler-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:27.182706   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:27.377673   26315 request.go:632] Waited for 194.89364ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293-m02
	I0930 20:02:27.377752   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293-m02
	I0930 20:02:27.377758   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:27.377765   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:27.377769   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:27.381356   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:27.578554   26315 request.go:632] Waited for 196.443432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:27.578622   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:27.578630   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:27.578641   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:27.578647   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:27.582325   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:27.582942   26315 pod_ready.go:93] pod "kube-scheduler-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:27.582965   26315 pod_ready.go:82] duration metric: took 400.251961ms for pod "kube-scheduler-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:27.582978   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:27.778055   26315 request.go:632] Waited for 195.008545ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293-m03
	I0930 20:02:27.778129   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293-m03
	I0930 20:02:27.778135   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:27.778142   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:27.778147   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:27.782023   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:27.977660   26315 request.go:632] Waited for 194.950522ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:27.977742   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:27.977752   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:27.977762   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:27.977769   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:27.981329   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:27.981878   26315 pod_ready.go:93] pod "kube-scheduler-ha-805293-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:27.981905   26315 pod_ready.go:82] duration metric: took 398.919132ms for pod "kube-scheduler-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:27.981920   26315 pod_ready.go:39] duration metric: took 5.199971217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 20:02:27.981939   26315 api_server.go:52] waiting for apiserver process to appear ...
	I0930 20:02:27.982009   26315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 20:02:27.999589   26315 api_server.go:72] duration metric: took 23.538667198s to wait for apiserver process to appear ...
	I0930 20:02:27.999616   26315 api_server.go:88] waiting for apiserver healthz status ...
	I0930 20:02:27.999635   26315 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I0930 20:02:28.006690   26315 api_server.go:279] https://192.168.39.3:8443/healthz returned 200:
	ok
	I0930 20:02:28.006768   26315 round_trippers.go:463] GET https://192.168.39.3:8443/version
	I0930 20:02:28.006788   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:28.006799   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:28.006804   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:28.008072   26315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0930 20:02:28.008144   26315 api_server.go:141] control plane version: v1.31.1
	I0930 20:02:28.008163   26315 api_server.go:131] duration metric: took 8.540356ms to wait for apiserver health ...
	I0930 20:02:28.008173   26315 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 20:02:28.178582   26315 request.go:632] Waited for 170.336703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:02:28.178653   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:02:28.178673   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:28.178683   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:28.178688   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:28.186196   26315 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 20:02:28.192615   26315 system_pods.go:59] 24 kube-system pods found
	I0930 20:02:28.192646   26315 system_pods.go:61] "coredns-7c65d6cfc9-x7zjp" [b5b20ed2-1d94-49b9-ab9e-17e27d1012d0] Running
	I0930 20:02:28.192651   26315 system_pods.go:61] "coredns-7c65d6cfc9-z4bkv" [c6ba0288-138e-4690-a68d-6d6378e28deb] Running
	I0930 20:02:28.192656   26315 system_pods.go:61] "etcd-ha-805293" [399ae7f6-cec9-4e8d-bda2-6c85dbcc5613] Running
	I0930 20:02:28.192661   26315 system_pods.go:61] "etcd-ha-805293-m02" [06ff461f-0ed1-4010-bcf7-1e82e4a589eb] Running
	I0930 20:02:28.192665   26315 system_pods.go:61] "etcd-ha-805293-m03" [c87078d8-ee99-4a5f-9258-cf5d7e658388] Running
	I0930 20:02:28.192668   26315 system_pods.go:61] "kindnet-lfldt" [62cfaae6-e635-4ba4-a0db-77d008d12706] Running
	I0930 20:02:28.192671   26315 system_pods.go:61] "kindnet-qrhb8" [852c4080-9210-47bb-a06a-d1b8bcff580d] Running
	I0930 20:02:28.192675   26315 system_pods.go:61] "kindnet-slhtm" [a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88] Running
	I0930 20:02:28.192679   26315 system_pods.go:61] "kube-apiserver-ha-805293" [e975ca94-0069-4dfc-bc42-fa14fff226d5] Running
	I0930 20:02:28.192682   26315 system_pods.go:61] "kube-apiserver-ha-805293-m02" [c0f6d06d-f2d3-4796-ba43-16db58da16f7] Running
	I0930 20:02:28.192687   26315 system_pods.go:61] "kube-apiserver-ha-805293-m03" [6fb5a285-7f35-4eb2-b028-6bd9fcfd21fe] Running
	I0930 20:02:28.192691   26315 system_pods.go:61] "kube-controller-manager-ha-805293" [01616da3-61eb-494b-a55c-28acaa308938] Running
	I0930 20:02:28.192695   26315 system_pods.go:61] "kube-controller-manager-ha-805293-m02" [14e035c1-fd94-43ab-aa98-3f20108eba57] Running
	I0930 20:02:28.192698   26315 system_pods.go:61] "kube-controller-manager-ha-805293-m03" [35d67e4a-f434-49df-8fb9-c6fcc725d8ff] Running
	I0930 20:02:28.192702   26315 system_pods.go:61] "kube-proxy-6gnt4" [a90b0c3f-e9c3-4cb9-8773-8253bd72ab51] Running
	I0930 20:02:28.192706   26315 system_pods.go:61] "kube-proxy-b9cpp" [c828ff6a-6cbb-4a29-84bc-118522687da8] Running
	I0930 20:02:28.192710   26315 system_pods.go:61] "kube-proxy-vptrg" [324c92ea-b82f-4efa-b63c-4c590bbf214d] Running
	I0930 20:02:28.192714   26315 system_pods.go:61] "kube-scheduler-ha-805293" [fbff9dea-1599-43ab-bb92-df8c5231bb87] Running
	I0930 20:02:28.192720   26315 system_pods.go:61] "kube-scheduler-ha-805293-m02" [9e69f915-83ac-48de-9bd6-3d245a2e82be] Running
	I0930 20:02:28.192723   26315 system_pods.go:61] "kube-scheduler-ha-805293-m03" [34e2edf8-ca25-4a7c-a626-ac037b40b905] Running
	I0930 20:02:28.192729   26315 system_pods.go:61] "kube-vip-ha-805293" [9c629f9e-1b42-4680-9fd8-2dae4cec07f8] Running
	I0930 20:02:28.192732   26315 system_pods.go:61] "kube-vip-ha-805293-m02" [ec99538b-4f84-4078-b64d-23086cbf2c45] Running
	I0930 20:02:28.192735   26315 system_pods.go:61] "kube-vip-ha-805293-m03" [fcc5a165-5430-45d3-8ec7-fbdf5adc7e20] Running
	I0930 20:02:28.192738   26315 system_pods.go:61] "storage-provisioner" [1912fdf8-d789-4ba9-99ff-c87ccbf330ec] Running
	I0930 20:02:28.192747   26315 system_pods.go:74] duration metric: took 184.564973ms to wait for pod list to return data ...
	I0930 20:02:28.192756   26315 default_sa.go:34] waiting for default service account to be created ...
	I0930 20:02:28.378324   26315 request.go:632] Waited for 185.488908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/default/serviceaccounts
	I0930 20:02:28.378382   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/default/serviceaccounts
	I0930 20:02:28.378387   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:28.378394   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:28.378398   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:28.382352   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:28.382515   26315 default_sa.go:45] found service account: "default"
	I0930 20:02:28.382532   26315 default_sa.go:55] duration metric: took 189.767008ms for default service account to be created ...
	I0930 20:02:28.382546   26315 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 20:02:28.578010   26315 request.go:632] Waited for 195.370903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:02:28.578070   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:02:28.578076   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:28.578083   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:28.578087   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:28.584177   26315 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 20:02:28.592272   26315 system_pods.go:86] 24 kube-system pods found
	I0930 20:02:28.592310   26315 system_pods.go:89] "coredns-7c65d6cfc9-x7zjp" [b5b20ed2-1d94-49b9-ab9e-17e27d1012d0] Running
	I0930 20:02:28.592319   26315 system_pods.go:89] "coredns-7c65d6cfc9-z4bkv" [c6ba0288-138e-4690-a68d-6d6378e28deb] Running
	I0930 20:02:28.592330   26315 system_pods.go:89] "etcd-ha-805293" [399ae7f6-cec9-4e8d-bda2-6c85dbcc5613] Running
	I0930 20:02:28.592336   26315 system_pods.go:89] "etcd-ha-805293-m02" [06ff461f-0ed1-4010-bcf7-1e82e4a589eb] Running
	I0930 20:02:28.592341   26315 system_pods.go:89] "etcd-ha-805293-m03" [c87078d8-ee99-4a5f-9258-cf5d7e658388] Running
	I0930 20:02:28.592346   26315 system_pods.go:89] "kindnet-lfldt" [62cfaae6-e635-4ba4-a0db-77d008d12706] Running
	I0930 20:02:28.592351   26315 system_pods.go:89] "kindnet-qrhb8" [852c4080-9210-47bb-a06a-d1b8bcff580d] Running
	I0930 20:02:28.592357   26315 system_pods.go:89] "kindnet-slhtm" [a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88] Running
	I0930 20:02:28.592363   26315 system_pods.go:89] "kube-apiserver-ha-805293" [e975ca94-0069-4dfc-bc42-fa14fff226d5] Running
	I0930 20:02:28.592368   26315 system_pods.go:89] "kube-apiserver-ha-805293-m02" [c0f6d06d-f2d3-4796-ba43-16db58da16f7] Running
	I0930 20:02:28.592374   26315 system_pods.go:89] "kube-apiserver-ha-805293-m03" [6fb5a285-7f35-4eb2-b028-6bd9fcfd21fe] Running
	I0930 20:02:28.592381   26315 system_pods.go:89] "kube-controller-manager-ha-805293" [01616da3-61eb-494b-a55c-28acaa308938] Running
	I0930 20:02:28.592388   26315 system_pods.go:89] "kube-controller-manager-ha-805293-m02" [14e035c1-fd94-43ab-aa98-3f20108eba57] Running
	I0930 20:02:28.592397   26315 system_pods.go:89] "kube-controller-manager-ha-805293-m03" [35d67e4a-f434-49df-8fb9-c6fcc725d8ff] Running
	I0930 20:02:28.592404   26315 system_pods.go:89] "kube-proxy-6gnt4" [a90b0c3f-e9c3-4cb9-8773-8253bd72ab51] Running
	I0930 20:02:28.592410   26315 system_pods.go:89] "kube-proxy-b9cpp" [c828ff6a-6cbb-4a29-84bc-118522687da8] Running
	I0930 20:02:28.592416   26315 system_pods.go:89] "kube-proxy-vptrg" [324c92ea-b82f-4efa-b63c-4c590bbf214d] Running
	I0930 20:02:28.592422   26315 system_pods.go:89] "kube-scheduler-ha-805293" [fbff9dea-1599-43ab-bb92-df8c5231bb87] Running
	I0930 20:02:28.592430   26315 system_pods.go:89] "kube-scheduler-ha-805293-m02" [9e69f915-83ac-48de-9bd6-3d245a2e82be] Running
	I0930 20:02:28.592436   26315 system_pods.go:89] "kube-scheduler-ha-805293-m03" [34e2edf8-ca25-4a7c-a626-ac037b40b905] Running
	I0930 20:02:28.592442   26315 system_pods.go:89] "kube-vip-ha-805293" [9c629f9e-1b42-4680-9fd8-2dae4cec07f8] Running
	I0930 20:02:28.592450   26315 system_pods.go:89] "kube-vip-ha-805293-m02" [ec99538b-4f84-4078-b64d-23086cbf2c45] Running
	I0930 20:02:28.592455   26315 system_pods.go:89] "kube-vip-ha-805293-m03" [fcc5a165-5430-45d3-8ec7-fbdf5adc7e20] Running
	I0930 20:02:28.592461   26315 system_pods.go:89] "storage-provisioner" [1912fdf8-d789-4ba9-99ff-c87ccbf330ec] Running
	I0930 20:02:28.592472   26315 system_pods.go:126] duration metric: took 209.917591ms to wait for k8s-apps to be running ...
	I0930 20:02:28.592485   26315 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 20:02:28.592534   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 20:02:28.608637   26315 system_svc.go:56] duration metric: took 16.145321ms WaitForService to wait for kubelet
	I0930 20:02:28.608674   26315 kubeadm.go:582] duration metric: took 24.147753749s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 20:02:28.608696   26315 node_conditions.go:102] verifying NodePressure condition ...
	I0930 20:02:28.778132   26315 request.go:632] Waited for 169.34168ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes
	I0930 20:02:28.778186   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes
	I0930 20:02:28.778191   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:28.778198   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:28.778202   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:28.782435   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:28.783582   26315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:02:28.783605   26315 node_conditions.go:123] node cpu capacity is 2
	I0930 20:02:28.783617   26315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:02:28.783621   26315 node_conditions.go:123] node cpu capacity is 2
	I0930 20:02:28.783625   26315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:02:28.783628   26315 node_conditions.go:123] node cpu capacity is 2
	I0930 20:02:28.783633   26315 node_conditions.go:105] duration metric: took 174.931399ms to run NodePressure ...
	I0930 20:02:28.783649   26315 start.go:241] waiting for startup goroutines ...
	I0930 20:02:28.783678   26315 start.go:255] writing updated cluster config ...
	I0930 20:02:28.783989   26315 ssh_runner.go:195] Run: rm -f paused
	I0930 20:02:28.838018   26315 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 20:02:28.840509   26315 out.go:177] * Done! kubectl is now configured to use "ha-805293" cluster and "default" namespace by default
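	
	The wait sequence recorded above is the standard client-go pattern: each pod is fetched from the kube-system namespace and its PodReady condition is checked, the API server is probed with a plain GET on /healthz and /version, and the roughly 195 ms "Waited for ... due to client-side throttling" lines are consistent with the default client-go rate limiter (5 QPS). Below is a minimal sketch of that pattern, assuming standard client-go; the kubeconfig path and QPS values are placeholders, the pod name is only an example taken from the log, and this is an illustration of the pattern, not minikube's own pod_ready.go implementation.
	
	// readiness_sketch.go: illustrative only; paths, names and limits are assumptions.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Placeholder kubeconfig path; minikube writes its own under ~/.kube or the test home.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		// Raising QPS/Burst avoids the default client-side limiter (5 QPS) that
		// produces the "Waited for ... due to client-side throttling" lines above.
		cfg.QPS = 50
		cfg.Burst = 100
	
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
	
		// The apiserver health probe seen in the log is a plain GET on /healthz.
		if body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx); err == nil {
			fmt.Printf("/healthz: %s\n", body)
		}
	
		// Poll until the pod reports the PodReady condition, with a 6m deadline
		// matching the "waiting up to 6m0s" messages in the log.
		const pod = "kube-scheduler-ha-805293" // example name from the log above
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			p, err := cs.CoreV1().Pods("kube-system").Get(ctx, pod, metav1.GetOptions{})
			if err == nil {
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Printf("pod %q is Ready\n", pod)
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Printf("timed out waiting for pod %q\n", pod)
	}
	
	The poll-with-deadline loop mirrors the per-pod "waiting up to 6m0s ... to be \"Ready\"" entries above; the duration metrics in the log are simply the elapsed time of each such loop.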
	
	
	==> CRI-O <==
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.681911582Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a20cadf1-9f78-4ca2-b400-00f3e799a5e7 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.683122064Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4ec1a89a-82a9-40ff-80ac-1157e129c725 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.683714471Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726780683684154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4ec1a89a-82a9-40ff-80ac-1157e129c725 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.684410300Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=25b35920-5f36-4ea1-968b-3a41e2c30d3a name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.684467563Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25b35920-5f36-4ea1-968b-3a41e2c30d3a name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.684719501Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10ee59c77c769b646a6f94ef88076d89d99a5138229c27ab2ecd6eedc1ea0137,PodSandboxId:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727726553788768842,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b,PodSandboxId:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414310017018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d01ed71d852eed61bb80348ffe7fb51d168d95e1306c1563c1f48e5dbbf8f2c,PodSandboxId:2a39bd6449f5ae769d104fbeb8e59e2f8144520dfc21ce04f986400da9c5cf45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727726414272318094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c,PodSandboxId:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414250119749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-13
8e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa,PodSandboxId:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17277264
02286671649,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088,PodSandboxId:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727726402007379257,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8e1f537ce941dd5174a539d9c52bcdc043499fbf92875cdf6ed4fc819c4dbe,PodSandboxId:1fd2dbf5f5af033b5a3e52b79c474bc1a4f59060eca81c998f7ec1a08b0bd020,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727726392774120477,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ab114a2582827f884939bc3a1a2f15f,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463,PodSandboxId:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727726390313369486,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9fbbe2017dac31afa6b99397b35147479d921bd1c28368d0863e7deba96963,PodSandboxId:6fc84ff2f4f9e09491da5bb8f4fa755e40a60c0bec559ecff99973cd8d2fbbf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727726390327177630,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c,PodSandboxId:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727726390230461135,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994c927aa147aaacb19c3dc9b54178374731ce435295e01ceb9dbb1854a78f78,PodSandboxId:ec25e9867db7c44002a733caaf53a3e32f3ab4c28faa3767e1bca353d80692e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727726390173703617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=25b35920-5f36-4ea1-968b-3a41e2c30d3a name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.705430979Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b009d0ee-c41e-4142-a4b1-be5e82625234 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.705724287Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-r27jf,Uid:8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727726550148683985,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T20:02:29.829247076Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-z4bkv,Uid:c6ba0288-138e-4690-a68d-6d6378e28deb,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1727726414032844879,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-138e-4690-a68d-6d6378e28deb,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T20:00:13.716538200Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2a39bd6449f5ae769d104fbeb8e59e2f8144520dfc21ce04f986400da9c5cf45,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1912fdf8-d789-4ba9-99ff-c87ccbf330ec,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727726414031848787,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{kubec
tl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-30T20:00:13.713726371Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-x7zjp,Uid:b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1727726414018743460,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T20:00:13.706232430Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&PodSandboxMetadata{Name:kube-proxy-6gnt4,Uid:a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727726401875351517,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-09-30T20:00:00.921254096Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&PodSandboxMetadata{Name:kindnet-slhtm,Uid:a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727726401840963162,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T20:00:00.924871676Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&PodSandboxMetadata{Name:etcd-ha-805293,Uid:0dc042ef6adb6bb0f327bb59cec9a57d,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1727726390010803949,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.3:2379,kubernetes.io/config.hash: 0dc042ef6adb6bb0f327bb59cec9a57d,kubernetes.io/config.seen: 2024-09-30T19:59:49.539868097Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ec25e9867db7c44002a733caaf53a3e32f3ab4c28faa3767e1bca353d80692e6,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-805293,Uid:0e187d2ff3fb002e09fae92363c4994b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727726390002598546,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb
002e09fae92363c4994b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.3:8443,kubernetes.io/config.hash: 0e187d2ff3fb002e09fae92363c4994b,kubernetes.io/config.seen: 2024-09-30T19:59:49.539872588Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1fd2dbf5f5af033b5a3e52b79c474bc1a4f59060eca81c998f7ec1a08b0bd020,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-805293,Uid:7ab114a2582827f884939bc3a1a2f15f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727726389998037128,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ab114a2582827f884939bc3a1a2f15f,},Annotations:map[string]string{kubernetes.io/config.hash: 7ab114a2582827f884939bc3a1a2f15f,kubernetes.io/config.seen: 2024-09-30T19:59:49.539876198Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6fc84ff2f4f9e09491da5bb8
f4fa755e40a60c0bec559ecff99973cd8d2fbbf5,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-805293,Uid:91de2f71b33d8668e0d24248c5ba505a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727726389993946007,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 91de2f71b33d8668e0d24248c5ba505a,kubernetes.io/config.seen: 2024-09-30T19:59:49.539873996Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-805293,Uid:f33fa137f85dfeea3a67cdcccdd92a29,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727726389993391808,Labels:map[string]string{component: kube-scheduler,io.kuberne
tes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f33fa137f85dfeea3a67cdcccdd92a29,kubernetes.io/config.seen: 2024-09-30T19:59:49.539875185Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=b009d0ee-c41e-4142-a4b1-be5e82625234 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.706412031Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a7d7384-2e19-43b0-baba-5bdcabe9ddf2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.706475052Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a7d7384-2e19-43b0-baba-5bdcabe9ddf2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.706727440Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10ee59c77c769b646a6f94ef88076d89d99a5138229c27ab2ecd6eedc1ea0137,PodSandboxId:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727726553788768842,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b,PodSandboxId:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414310017018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d01ed71d852eed61bb80348ffe7fb51d168d95e1306c1563c1f48e5dbbf8f2c,PodSandboxId:2a39bd6449f5ae769d104fbeb8e59e2f8144520dfc21ce04f986400da9c5cf45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727726414272318094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c,PodSandboxId:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414250119749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-13
8e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa,PodSandboxId:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17277264
02286671649,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088,PodSandboxId:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727726402007379257,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8e1f537ce941dd5174a539d9c52bcdc043499fbf92875cdf6ed4fc819c4dbe,PodSandboxId:1fd2dbf5f5af033b5a3e52b79c474bc1a4f59060eca81c998f7ec1a08b0bd020,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727726392774120477,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ab114a2582827f884939bc3a1a2f15f,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463,PodSandboxId:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727726390313369486,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9fbbe2017dac31afa6b99397b35147479d921bd1c28368d0863e7deba96963,PodSandboxId:6fc84ff2f4f9e09491da5bb8f4fa755e40a60c0bec559ecff99973cd8d2fbbf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727726390327177630,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c,PodSandboxId:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727726390230461135,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994c927aa147aaacb19c3dc9b54178374731ce435295e01ceb9dbb1854a78f78,PodSandboxId:ec25e9867db7c44002a733caaf53a3e32f3ab4c28faa3767e1bca353d80692e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727726390173703617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a7d7384-2e19-43b0-baba-5bdcabe9ddf2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.726046057Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=329055de-25bd-4ba7-8204-0c8116db114a name=/runtime.v1.RuntimeService/Version
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.726119600Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=329055de-25bd-4ba7-8204-0c8116db114a name=/runtime.v1.RuntimeService/Version
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.727785896Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e4a9771-ccdf-4bf6-8ad4-eabec2b6a0f0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.728211869Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726780728184856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e4a9771-ccdf-4bf6-8ad4-eabec2b6a0f0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.728823106Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5637f694-1a56-4a21-a89e-ed1aa962e6d7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.728883152Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5637f694-1a56-4a21-a89e-ed1aa962e6d7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.729125421Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10ee59c77c769b646a6f94ef88076d89d99a5138229c27ab2ecd6eedc1ea0137,PodSandboxId:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727726553788768842,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b,PodSandboxId:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414310017018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d01ed71d852eed61bb80348ffe7fb51d168d95e1306c1563c1f48e5dbbf8f2c,PodSandboxId:2a39bd6449f5ae769d104fbeb8e59e2f8144520dfc21ce04f986400da9c5cf45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727726414272318094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c,PodSandboxId:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414250119749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-13
8e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa,PodSandboxId:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17277264
02286671649,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088,PodSandboxId:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727726402007379257,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8e1f537ce941dd5174a539d9c52bcdc043499fbf92875cdf6ed4fc819c4dbe,PodSandboxId:1fd2dbf5f5af033b5a3e52b79c474bc1a4f59060eca81c998f7ec1a08b0bd020,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727726392774120477,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ab114a2582827f884939bc3a1a2f15f,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463,PodSandboxId:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727726390313369486,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9fbbe2017dac31afa6b99397b35147479d921bd1c28368d0863e7deba96963,PodSandboxId:6fc84ff2f4f9e09491da5bb8f4fa755e40a60c0bec559ecff99973cd8d2fbbf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727726390327177630,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c,PodSandboxId:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727726390230461135,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994c927aa147aaacb19c3dc9b54178374731ce435295e01ceb9dbb1854a78f78,PodSandboxId:ec25e9867db7c44002a733caaf53a3e32f3ab4c28faa3767e1bca353d80692e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727726390173703617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5637f694-1a56-4a21-a89e-ed1aa962e6d7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.766627989Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a31cb81d-b1b7-4657-89af-9c5c66d6c93b name=/runtime.v1.RuntimeService/Version
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.766706273Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a31cb81d-b1b7-4657-89af-9c5c66d6c93b name=/runtime.v1.RuntimeService/Version
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.767909781Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f44f8a67-ccec-40b8-ad37-38dd00cc8de8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.768387282Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726780768361357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f44f8a67-ccec-40b8-ad37-38dd00cc8de8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.769090036Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2560b5c2-c25f-400f-844f-8f36b70bc296 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.769186375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2560b5c2-c25f-400f-844f-8f36b70bc296 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:20 ha-805293 crio[655]: time="2024-09-30 20:06:20.769501750Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10ee59c77c769b646a6f94ef88076d89d99a5138229c27ab2ecd6eedc1ea0137,PodSandboxId:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727726553788768842,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b,PodSandboxId:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414310017018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d01ed71d852eed61bb80348ffe7fb51d168d95e1306c1563c1f48e5dbbf8f2c,PodSandboxId:2a39bd6449f5ae769d104fbeb8e59e2f8144520dfc21ce04f986400da9c5cf45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727726414272318094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c,PodSandboxId:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414250119749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-13
8e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa,PodSandboxId:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17277264
02286671649,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088,PodSandboxId:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727726402007379257,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8e1f537ce941dd5174a539d9c52bcdc043499fbf92875cdf6ed4fc819c4dbe,PodSandboxId:1fd2dbf5f5af033b5a3e52b79c474bc1a4f59060eca81c998f7ec1a08b0bd020,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727726392774120477,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ab114a2582827f884939bc3a1a2f15f,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463,PodSandboxId:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727726390313369486,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9fbbe2017dac31afa6b99397b35147479d921bd1c28368d0863e7deba96963,PodSandboxId:6fc84ff2f4f9e09491da5bb8f4fa755e40a60c0bec559ecff99973cd8d2fbbf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727726390327177630,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c,PodSandboxId:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727726390230461135,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994c927aa147aaacb19c3dc9b54178374731ce435295e01ceb9dbb1854a78f78,PodSandboxId:ec25e9867db7c44002a733caaf53a3e32f3ab4c28faa3767e1bca353d80692e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727726390173703617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2560b5c2-c25f-400f-844f-8f36b70bc296 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	10ee59c77c769       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   a8d4349f6e0b0       busybox-7dff88458-r27jf
	8c540e4668f99       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   f95d30afc0491       coredns-7c65d6cfc9-x7zjp
	6d01ed71d852e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   2a39bd6449f5a       storage-provisioner
	beba42a2bf035       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   626fdaeb1b142       coredns-7c65d6cfc9-z4bkv
	e28b6781ed449       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   36a3293339cae       kindnet-slhtm
	cd73b6dc43348       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   27a0913ae182a       kube-proxy-6gnt4
	5e8e1f537ce94       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   1fd2dbf5f5af0       kube-vip-ha-805293
	0e9fbbe2017da       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   6fc84ff2f4f9e       kube-controller-manager-ha-805293
	9b8d5baa6998a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   73733467afdd9       kube-scheduler-ha-805293
	219dff1c43cd4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   bff718c807eb7       etcd-ha-805293
	994c927aa147a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   ec25e9867db7c       kube-apiserver-ha-805293
	
	
	==> coredns [8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b] <==
	[INFO] 10.244.0.4:54656 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002122445s
	[INFO] 10.244.1.2:43325 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000298961s
	[INFO] 10.244.1.2:50368 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000261008s
	[INFO] 10.244.1.2:34858 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000270623s
	[INFO] 10.244.1.2:59975 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000192447s
	[INFO] 10.244.2.2:37486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233576s
	[INFO] 10.244.2.2:40647 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002177996s
	[INFO] 10.244.2.2:39989 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000196915s
	[INFO] 10.244.2.2:42105 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001612348s
	[INFO] 10.244.2.2:42498 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180331s
	[INFO] 10.244.2.2:34873 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000262642s
	[INFO] 10.244.0.4:55282 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002337707s
	[INFO] 10.244.0.4:52721 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082276s
	[INFO] 10.244.0.4:33773 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001975703s
	[INFO] 10.244.0.4:44087 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095899s
	[INFO] 10.244.1.2:44456 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189431s
	[INFO] 10.244.1.2:52532 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112979s
	[INFO] 10.244.1.2:39707 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095712s
	[INFO] 10.244.2.2:42900 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101241s
	[INFO] 10.244.0.4:56608 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134276s
	[INFO] 10.244.1.2:35939 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00031266s
	[INFO] 10.244.1.2:48131 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000196792s
	[INFO] 10.244.2.2:40732 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000154649s
	[INFO] 10.244.0.4:51180 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000206094s
	[INFO] 10.244.0.4:36921 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000118718s
	
	
	==> coredns [beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c] <==
	[INFO] 10.244.0.4:43879 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000219235s
	[INFO] 10.244.1.2:54557 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005324153s
	[INFO] 10.244.1.2:59221 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00021778s
	[INFO] 10.244.1.2:56069 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0044481s
	[INFO] 10.244.1.2:50386 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00023413s
	[INFO] 10.244.2.2:46506 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103313s
	[INFO] 10.244.2.2:41909 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000177677s
	[INFO] 10.244.0.4:57981 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180642s
	[INFO] 10.244.0.4:42071 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100781s
	[INFO] 10.244.0.4:53066 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079995s
	[INFO] 10.244.0.4:54192 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095317s
	[INFO] 10.244.1.2:42705 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147435s
	[INFO] 10.244.2.2:42448 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014108s
	[INFO] 10.244.2.2:58687 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152745s
	[INFO] 10.244.2.2:59433 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159734s
	[INFO] 10.244.0.4:34822 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086009s
	[INFO] 10.244.0.4:46188 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067594s
	[INFO] 10.244.0.4:33829 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130532s
	[INFO] 10.244.1.2:56575 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000557946s
	[INFO] 10.244.1.2:41726 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145733s
	[INFO] 10.244.2.2:56116 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108892s
	[INFO] 10.244.2.2:58958 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000075413s
	[INFO] 10.244.2.2:42001 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077659s
	[INFO] 10.244.0.4:53905 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091303s
	[INFO] 10.244.0.4:41906 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000098967s
	
	
	==> describe nodes <==
	Name:               ha-805293
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-805293
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=ha-805293
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T19_59_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 19:59:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-805293
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:06:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:03:01 +0000   Mon, 30 Sep 2024 19:59:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:03:01 +0000   Mon, 30 Sep 2024 19:59:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:03:01 +0000   Mon, 30 Sep 2024 19:59:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:03:01 +0000   Mon, 30 Sep 2024 20:00:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    ha-805293
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 866f17ca2f8945bb8c8d7336ea64bab7
	  System UUID:                866f17ca-2f89-45bb-8c8d-7336ea64bab7
	  Boot ID:                    688ba3e5-bec7-403a-8a14-d517107abdf5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-r27jf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 coredns-7c65d6cfc9-x7zjp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m21s
	  kube-system                 coredns-7c65d6cfc9-z4bkv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m21s
	  kube-system                 etcd-ha-805293                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m25s
	  kube-system                 kindnet-slhtm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m21s
	  kube-system                 kube-apiserver-ha-805293             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-controller-manager-ha-805293    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-proxy-6gnt4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-scheduler-ha-805293             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-vip-ha-805293                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m18s  kube-proxy       
	  Normal  Starting                 6m25s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m25s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m25s  kubelet          Node ha-805293 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m25s  kubelet          Node ha-805293 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m25s  kubelet          Node ha-805293 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m21s  node-controller  Node ha-805293 event: Registered Node ha-805293 in Controller
	  Normal  NodeReady                6m8s   kubelet          Node ha-805293 status is now: NodeReady
	  Normal  RegisteredNode           5m25s  node-controller  Node ha-805293 event: Registered Node ha-805293 in Controller
	  Normal  RegisteredNode           4m11s  node-controller  Node ha-805293 event: Registered Node ha-805293 in Controller
	
	
	Name:               ha-805293-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-805293-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=ha-805293
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T20_00_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:00:48 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-805293-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:03:41 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 30 Sep 2024 20:02:51 +0000   Mon, 30 Sep 2024 20:04:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 30 Sep 2024 20:02:51 +0000   Mon, 30 Sep 2024 20:04:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 30 Sep 2024 20:02:51 +0000   Mon, 30 Sep 2024 20:04:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 30 Sep 2024 20:02:51 +0000   Mon, 30 Sep 2024 20:04:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    ha-805293-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d0700264de549a1be3f1020308847ab
	  System UUID:                4d070026-4de5-49a1-be3f-1020308847ab
	  Boot ID:                    6a7fa1c9-5f0b-4080-a967-4e6a9eb2c122
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lshpm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 etcd-ha-805293-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m31s
	  kube-system                 kindnet-lfldt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m33s
	  kube-system                 kube-apiserver-ha-805293-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-controller-manager-ha-805293-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-proxy-vptrg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-scheduler-ha-805293-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-vip-ha-805293-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m28s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m33s (x8 over 5m34s)  kubelet          Node ha-805293-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m33s (x8 over 5m34s)  kubelet          Node ha-805293-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m33s (x7 over 5m34s)  kubelet          Node ha-805293-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m31s                  node-controller  Node ha-805293-m02 event: Registered Node ha-805293-m02 in Controller
	  Normal  RegisteredNode           5m25s                  node-controller  Node ha-805293-m02 event: Registered Node ha-805293-m02 in Controller
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-805293-m02 event: Registered Node ha-805293-m02 in Controller
	  Normal  NodeNotReady             116s                   node-controller  Node ha-805293-m02 status is now: NodeNotReady
	
	
	Name:               ha-805293-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-805293-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=ha-805293
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T20_02_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:02:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-805293-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:06:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:03:02 +0000   Mon, 30 Sep 2024 20:02:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:03:02 +0000   Mon, 30 Sep 2024 20:02:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:03:02 +0000   Mon, 30 Sep 2024 20:02:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:03:02 +0000   Mon, 30 Sep 2024 20:02:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-805293-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d290a9661d284f5abbb0966111b1ff62
	  System UUID:                d290a966-1d28-4f5a-bbb0-966111b1ff62
	  Boot ID:                    4480564e-4012-421d-8e2a-ef45c5701e0e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nfncv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 etcd-ha-805293-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kindnet-qrhb8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m20s
	  kube-system                 kube-apiserver-ha-805293-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-ha-805293-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-proxy-b9cpp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-scheduler-ha-805293-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-vip-ha-805293-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m20s (x8 over 4m20s)  kubelet          Node ha-805293-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m20s (x8 over 4m20s)  kubelet          Node ha-805293-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m20s (x7 over 4m20s)  kubelet          Node ha-805293-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-805293-m03 event: Registered Node ha-805293-m03 in Controller
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-805293-m03 event: Registered Node ha-805293-m03 in Controller
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-805293-m03 event: Registered Node ha-805293-m03 in Controller
	
	
	Name:               ha-805293-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-805293-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=ha-805293
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T20_03_07_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:03:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-805293-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:06:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:03:37 +0000   Mon, 30 Sep 2024 20:03:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:03:37 +0000   Mon, 30 Sep 2024 20:03:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:03:37 +0000   Mon, 30 Sep 2024 20:03:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:03:37 +0000   Mon, 30 Sep 2024 20:03:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    ha-805293-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 66e464978dbd400d9e13327c67f50978
	  System UUID:                66e46497-8dbd-400d-9e13-327c67f50978
	  Boot ID:                    e58b57f2-9a1b-47d7-b35d-6de7e20bd5ad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pk4z9       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m14s
	  kube-system                 kube-proxy-7hn94    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m14s (x2 over 3m15s)  kubelet          Node ha-805293-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m14s (x2 over 3m15s)  kubelet          Node ha-805293-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m14s (x2 over 3m15s)  kubelet          Node ha-805293-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-805293-m04 event: Registered Node ha-805293-m04 in Controller
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-805293-m04 event: Registered Node ha-805293-m04 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-805293-m04 event: Registered Node ha-805293-m04 in Controller
	  Normal  NodeReady                2m53s                  kubelet          Node ha-805293-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep30 19:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051498] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038050] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.756373] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.910183] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.882465] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.789974] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.062566] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063093] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.202518] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.124623] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.268552] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +3.977529] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +4.564932] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.062130] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.342874] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.088317] kauditd_printk_skb: 79 callbacks suppressed
	[Sep30 20:00] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.197664] kauditd_printk_skb: 38 callbacks suppressed
	[ +40.392588] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c] <==
	{"level":"warn","ts":"2024-09-30T20:06:20.909906Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:20.943021Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:21.042655Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:21.046114Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:21.059866Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:21.064235Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:21.076423Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:21.083216Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:21.089914Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:21.093731Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:21.097187Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:21.108548Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:21.114887Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:21.121006Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:21.124618Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:21.127618Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:21.133506Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:21.139760Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:21.142505Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:21.145790Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:21.149215Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:21.152380Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:21.156639Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:21.162752Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:21.169503Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:06:21 up 7 min,  0 users,  load average: 0.27, 0.26, 0.13
	Linux ha-805293 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa] <==
	I0930 20:05:43.361802       1 main.go:322] Node ha-805293-m03 has CIDR [10.244.2.0/24] 
	I0930 20:05:53.361412       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:05:53.361456       1 main.go:299] handling current node
	I0930 20:05:53.361477       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:05:53.361484       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:05:53.361668       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0930 20:05:53.361697       1 main.go:322] Node ha-805293-m03 has CIDR [10.244.2.0/24] 
	I0930 20:05:53.361813       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:05:53.361841       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	I0930 20:06:03.353152       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:06:03.353232       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:06:03.353604       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0930 20:06:03.353656       1 main.go:322] Node ha-805293-m03 has CIDR [10.244.2.0/24] 
	I0930 20:06:03.353788       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:06:03.353817       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	I0930 20:06:03.353915       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:06:03.353945       1 main.go:299] handling current node
	I0930 20:06:13.352401       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:06:13.352462       1 main.go:299] handling current node
	I0930 20:06:13.352487       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:06:13.352493       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:06:13.352648       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0930 20:06:13.352669       1 main.go:322] Node ha-805293-m03 has CIDR [10.244.2.0/24] 
	I0930 20:06:13.352727       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:06:13.352744       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [994c927aa147aaacb19c3dc9b54178374731ce435295e01ceb9dbb1854a78f78] <==
	I0930 19:59:55.232483       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0930 19:59:55.241927       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.3]
	I0930 19:59:55.242751       1 controller.go:615] quota admission added evaluator for: endpoints
	I0930 19:59:55.248161       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0930 19:59:56.585015       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0930 19:59:56.606454       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0930 19:59:56.717747       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0930 20:00:00.619178       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0930 20:00:00.866886       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0930 20:02:35.103260       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54756: use of closed network connection
	E0930 20:02:35.310204       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54774: use of closed network connection
	E0930 20:02:35.528451       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54798: use of closed network connection
	E0930 20:02:35.718056       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54824: use of closed network connection
	E0930 20:02:35.905602       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54834: use of closed network connection
	E0930 20:02:36.095718       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54846: use of closed network connection
	E0930 20:02:36.292842       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54870: use of closed network connection
	E0930 20:02:36.507445       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54880: use of closed network connection
	E0930 20:02:36.711017       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54890: use of closed network connection
	E0930 20:02:37.027891       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54906: use of closed network connection
	E0930 20:02:37.211934       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54928: use of closed network connection
	E0930 20:02:37.400557       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54946: use of closed network connection
	E0930 20:02:37.592034       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54964: use of closed network connection
	E0930 20:02:37.769244       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54968: use of closed network connection
	E0930 20:02:37.945689       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54986: use of closed network connection
	W0930 20:04:05.250494       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.227 192.168.39.3]
	
	
	==> kube-controller-manager [0e9fbbe2017dac31afa6b99397b35147479d921bd1c28368d0863e7deba96963] <==
	I0930 20:03:07.394951       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-805293-m04" podCIDRs=["10.244.3.0/24"]
	I0930 20:03:07.395481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:07.396749       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:07.436135       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:07.684943       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:08.073414       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:10.185795       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-805293-m04"
	I0930 20:03:10.251142       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:10.326069       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:10.383451       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:11.395780       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:11.488119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:17.639978       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:28.022240       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:28.023330       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-805293-m04"
	I0930 20:03:28.045054       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:30.206023       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:37.957274       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:04:25.230773       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-805293-m04"
	I0930 20:04:25.230955       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m02"
	I0930 20:04:25.255656       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m02"
	I0930 20:04:25.398159       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m02"
	I0930 20:04:25.408524       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="30.658854ms"
	I0930 20:04:25.408627       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.436µs"
	I0930 20:04:30.476044       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m02"
	
	
	==> kube-proxy [cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 20:00:02.260002       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 20:00:02.292313       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.3"]
	E0930 20:00:02.293761       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 20:00:02.331058       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 20:00:02.331111       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 20:00:02.331136       1 server_linux.go:169] "Using iptables Proxier"
	I0930 20:00:02.334264       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 20:00:02.334706       1 server.go:483] "Version info" version="v1.31.1"
	I0930 20:00:02.334732       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:00:02.338075       1 config.go:199] "Starting service config controller"
	I0930 20:00:02.338115       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 20:00:02.338141       1 config.go:105] "Starting endpoint slice config controller"
	I0930 20:00:02.338146       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 20:00:02.340129       1 config.go:328] "Starting node config controller"
	I0930 20:00:02.340159       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 20:00:02.438958       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 20:00:02.439119       1 shared_informer.go:320] Caches are synced for service config
	I0930 20:00:02.440633       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463] <==
	W0930 19:59:54.471920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0930 19:59:54.472044       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.522920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0930 19:59:54.524738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.525008       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 19:59:54.525097       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0930 19:59:54.570077       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0930 19:59:54.570416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.573175       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0930 19:59:54.573222       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.611352       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0930 19:59:54.611460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.614509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0930 19:59:54.614660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.659257       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0930 19:59:54.659351       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.769876       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0930 19:59:54.770087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0930 19:59:56.900381       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0930 20:02:01.539050       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-h6pvg\": pod kube-proxy-h6pvg is already assigned to node \"ha-805293-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-h6pvg" node="ha-805293-m03"
	E0930 20:02:01.539424       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9860392c-eca6-4200-9b6e-f0a6f51b523b(kube-system/kube-proxy-h6pvg) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-h6pvg"
	E0930 20:02:01.539482       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-h6pvg\": pod kube-proxy-h6pvg is already assigned to node \"ha-805293-m03\"" pod="kube-system/kube-proxy-h6pvg"
	I0930 20:02:01.539558       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-h6pvg" node="ha-805293-m03"
	E0930 20:02:29.833811       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lshpm\": pod busybox-7dff88458-lshpm is already assigned to node \"ha-805293-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-lshpm" node="ha-805293-m02"
	E0930 20:02:29.833910       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lshpm\": pod busybox-7dff88458-lshpm is already assigned to node \"ha-805293-m02\"" pod="default/busybox-7dff88458-lshpm"
	
	
	==> kubelet <==
	Sep 30 20:04:56 ha-805293 kubelet[1307]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 20:04:56 ha-805293 kubelet[1307]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 20:04:56 ha-805293 kubelet[1307]: E0930 20:04:56.831137    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726696830908263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:04:56 ha-805293 kubelet[1307]: E0930 20:04:56.831174    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726696830908263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:06 ha-805293 kubelet[1307]: E0930 20:05:06.833436    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726706832581949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:06 ha-805293 kubelet[1307]: E0930 20:05:06.834135    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726706832581949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:16 ha-805293 kubelet[1307]: E0930 20:05:16.840697    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726716835840638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:16 ha-805293 kubelet[1307]: E0930 20:05:16.841087    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726716835840638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:26 ha-805293 kubelet[1307]: E0930 20:05:26.843795    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726726842473695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:26 ha-805293 kubelet[1307]: E0930 20:05:26.843820    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726726842473695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:36 ha-805293 kubelet[1307]: E0930 20:05:36.846940    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726736846123824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:36 ha-805293 kubelet[1307]: E0930 20:05:36.847349    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726736846123824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:46 ha-805293 kubelet[1307]: E0930 20:05:46.849818    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726746849247125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:46 ha-805293 kubelet[1307]: E0930 20:05:46.850141    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726746849247125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:56 ha-805293 kubelet[1307]: E0930 20:05:56.740673    1307 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 20:05:56 ha-805293 kubelet[1307]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 20:05:56 ha-805293 kubelet[1307]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 20:05:56 ha-805293 kubelet[1307]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 20:05:56 ha-805293 kubelet[1307]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 20:05:56 ha-805293 kubelet[1307]: E0930 20:05:56.852143    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726756851671468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:56 ha-805293 kubelet[1307]: E0930 20:05:56.852175    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726756851671468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:06:06 ha-805293 kubelet[1307]: E0930 20:06:06.854020    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726766853679089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:06:06 ha-805293 kubelet[1307]: E0930 20:06:06.854344    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726766853679089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:06:16 ha-805293 kubelet[1307]: E0930 20:06:16.857032    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726776856545104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:06:16 ha-805293 kubelet[1307]: E0930 20:06:16.857507    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726776856545104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-805293 -n ha-805293
helpers_test.go:261: (dbg) Run:  kubectl --context ha-805293 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.31s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.174878679s)
ha_test.go:309: expected profile "ha-805293" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-805293\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-805293\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\
"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-805293\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.3\",\"Port\":8443,\"Kubernete
sVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.220\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.227\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.92\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"meta
llb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262
144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-805293 -n ha-805293
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-805293 logs -n 25: (1.368709726s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m03:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293:/home/docker/cp-test_ha-805293-m03_ha-805293.txt                       |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293 sudo cat                                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m03_ha-805293.txt                                 |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m03:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m02:/home/docker/cp-test_ha-805293-m03_ha-805293-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293-m02 sudo cat                                          | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m03_ha-805293-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m03:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04:/home/docker/cp-test_ha-805293-m03_ha-805293-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293-m04 sudo cat                                          | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m03_ha-805293-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-805293 cp testdata/cp-test.txt                                                | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3144947660/001/cp-test_ha-805293-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293:/home/docker/cp-test_ha-805293-m04_ha-805293.txt                       |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293 sudo cat                                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m04_ha-805293.txt                                 |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m02:/home/docker/cp-test_ha-805293-m04_ha-805293-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293-m02 sudo cat                                          | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m04_ha-805293-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03:/home/docker/cp-test_ha-805293-m04_ha-805293-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293-m03 sudo cat                                          | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m04_ha-805293-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-805293 node stop m02 -v=7                                                     | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-805293 node start m02 -v=7                                                    | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
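	The audit table above lists every minikube invocation recorded for this profile, one command per logical row (long Args values wrap onto continuation rows with an empty Command cell). For anyone post-processing these reports, the sketch below is a minimal, hypothetical Go helper -- not part of minikube or this test suite -- that splits one physical table line into its Command/Args/Profile/User/Version cells; continuation rows would still need to be appended to the Args of the preceding row before use.

    package main

    import (
        "fmt"
        "strings"
    )

    // auditRow holds one row of the audit-log table above.
    type auditRow struct {
        Command, Args, Profile, User, Version string
    }

    // parseAuditRow splits a single "|"-delimited table line into its fields.
    // It handles one physical line only; wrapped Args rows are the caller's job.
    func parseAuditRow(line string) (auditRow, bool) {
        cells := strings.Split(line, "|")
        if len(cells) < 6 {
            return auditRow{}, false
        }
        trim := func(i int) string { return strings.TrimSpace(cells[i]) }
        return auditRow{
            Command: trim(1), Args: trim(2), Profile: trim(3),
            User: trim(4), Version: trim(5),
        }, true
    }

    func main() {
        row, ok := parseAuditRow("| node | ha-805293 node stop m02 -v=7 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | |")
        if ok {
            fmt.Printf("%s: %s (profile %s)\n", row.Command, row.Args, row.Profile)
        }
    }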
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 19:59:16
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 19:59:16.465113   26315 out.go:345] Setting OutFile to fd 1 ...
	I0930 19:59:16.465408   26315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 19:59:16.465418   26315 out.go:358] Setting ErrFile to fd 2...
	I0930 19:59:16.465423   26315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 19:59:16.465672   26315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 19:59:16.466270   26315 out.go:352] Setting JSON to false
	I0930 19:59:16.467246   26315 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2499,"bootTime":1727723857,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 19:59:16.467349   26315 start.go:139] virtualization: kvm guest
	I0930 19:59:16.469778   26315 out.go:177] * [ha-805293] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 19:59:16.471083   26315 notify.go:220] Checking for updates...
	I0930 19:59:16.471129   26315 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 19:59:16.472574   26315 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 19:59:16.474040   26315 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 19:59:16.475378   26315 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:59:16.476781   26315 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 19:59:16.478196   26315 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 19:59:16.479555   26315 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 19:59:16.514287   26315 out.go:177] * Using the kvm2 driver based on user configuration
	I0930 19:59:16.515592   26315 start.go:297] selected driver: kvm2
	I0930 19:59:16.515604   26315 start.go:901] validating driver "kvm2" against <nil>
	I0930 19:59:16.515615   26315 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 19:59:16.516299   26315 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 19:59:16.516372   26315 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 19:59:16.531012   26315 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 19:59:16.531063   26315 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 19:59:16.531292   26315 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 19:59:16.531318   26315 cni.go:84] Creating CNI manager for ""
	I0930 19:59:16.531357   26315 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0930 19:59:16.531370   26315 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0930 19:59:16.531430   26315 start.go:340] cluster config:
	{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 19:59:16.531545   26315 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 19:59:16.533673   26315 out.go:177] * Starting "ha-805293" primary control-plane node in "ha-805293" cluster
	I0930 19:59:16.534957   26315 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 19:59:16.535009   26315 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 19:59:16.535023   26315 cache.go:56] Caching tarball of preloaded images
	I0930 19:59:16.535111   26315 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 19:59:16.535121   26315 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 19:59:16.535489   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 19:59:16.535515   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json: {Name:mk695bb0575a50d6b6d53e3d2c18bb8666421806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:16.535704   26315 start.go:360] acquireMachinesLock for ha-805293: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 19:59:16.535734   26315 start.go:364] duration metric: took 15.84µs to acquireMachinesLock for "ha-805293"
	I0930 19:59:16.535751   26315 start.go:93] Provisioning new machine with config: &{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 19:59:16.535821   26315 start.go:125] createHost starting for "" (driver="kvm2")
	I0930 19:59:16.537498   26315 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 19:59:16.537633   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:59:16.537678   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:59:16.552377   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44379
	I0930 19:59:16.552824   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:59:16.553523   26315 main.go:141] libmachine: Using API Version  1
	I0930 19:59:16.553548   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:59:16.553949   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:59:16.554153   26315 main.go:141] libmachine: (ha-805293) Calling .GetMachineName
	I0930 19:59:16.554354   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:16.554484   26315 start.go:159] libmachine.API.Create for "ha-805293" (driver="kvm2")
	I0930 19:59:16.554517   26315 client.go:168] LocalClient.Create starting
	I0930 19:59:16.554565   26315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem
	I0930 19:59:16.554602   26315 main.go:141] libmachine: Decoding PEM data...
	I0930 19:59:16.554620   26315 main.go:141] libmachine: Parsing certificate...
	I0930 19:59:16.554688   26315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem
	I0930 19:59:16.554716   26315 main.go:141] libmachine: Decoding PEM data...
	I0930 19:59:16.554736   26315 main.go:141] libmachine: Parsing certificate...
	I0930 19:59:16.554758   26315 main.go:141] libmachine: Running pre-create checks...
	I0930 19:59:16.554770   26315 main.go:141] libmachine: (ha-805293) Calling .PreCreateCheck
	I0930 19:59:16.555128   26315 main.go:141] libmachine: (ha-805293) Calling .GetConfigRaw
	I0930 19:59:16.555744   26315 main.go:141] libmachine: Creating machine...
	I0930 19:59:16.555765   26315 main.go:141] libmachine: (ha-805293) Calling .Create
	I0930 19:59:16.555931   26315 main.go:141] libmachine: (ha-805293) Creating KVM machine...
	I0930 19:59:16.557277   26315 main.go:141] libmachine: (ha-805293) DBG | found existing default KVM network
	I0930 19:59:16.557963   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:16.557842   26338 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231e0}
	I0930 19:59:16.558012   26315 main.go:141] libmachine: (ha-805293) DBG | created network xml: 
	I0930 19:59:16.558024   26315 main.go:141] libmachine: (ha-805293) DBG | <network>
	I0930 19:59:16.558032   26315 main.go:141] libmachine: (ha-805293) DBG |   <name>mk-ha-805293</name>
	I0930 19:59:16.558037   26315 main.go:141] libmachine: (ha-805293) DBG |   <dns enable='no'/>
	I0930 19:59:16.558041   26315 main.go:141] libmachine: (ha-805293) DBG |   
	I0930 19:59:16.558052   26315 main.go:141] libmachine: (ha-805293) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0930 19:59:16.558057   26315 main.go:141] libmachine: (ha-805293) DBG |     <dhcp>
	I0930 19:59:16.558063   26315 main.go:141] libmachine: (ha-805293) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0930 19:59:16.558073   26315 main.go:141] libmachine: (ha-805293) DBG |     </dhcp>
	I0930 19:59:16.558087   26315 main.go:141] libmachine: (ha-805293) DBG |   </ip>
	I0930 19:59:16.558111   26315 main.go:141] libmachine: (ha-805293) DBG |   
	I0930 19:59:16.558145   26315 main.go:141] libmachine: (ha-805293) DBG | </network>
	I0930 19:59:16.558156   26315 main.go:141] libmachine: (ha-805293) DBG | 
	I0930 19:59:16.563671   26315 main.go:141] libmachine: (ha-805293) DBG | trying to create private KVM network mk-ha-805293 192.168.39.0/24...
	I0930 19:59:16.628841   26315 main.go:141] libmachine: (ha-805293) DBG | private KVM network mk-ha-805293 192.168.39.0/24 created
	I0930 19:59:16.628870   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:16.628827   26338 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:59:16.628892   26315 main.go:141] libmachine: (ha-805293) Setting up store path in /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293 ...
	I0930 19:59:16.628909   26315 main.go:141] libmachine: (ha-805293) Building disk image from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 19:59:16.629064   26315 main.go:141] libmachine: (ha-805293) Downloading /home/jenkins/minikube-integration/19736-7672/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 19:59:16.879937   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:16.879799   26338 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa...
	I0930 19:59:17.039302   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:17.039101   26338 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/ha-805293.rawdisk...
	I0930 19:59:17.039341   26315 main.go:141] libmachine: (ha-805293) DBG | Writing magic tar header
	I0930 19:59:17.039359   26315 main.go:141] libmachine: (ha-805293) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293 (perms=drwx------)
	I0930 19:59:17.039382   26315 main.go:141] libmachine: (ha-805293) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines (perms=drwxr-xr-x)
	I0930 19:59:17.039389   26315 main.go:141] libmachine: (ha-805293) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube (perms=drwxr-xr-x)
	I0930 19:59:17.039398   26315 main.go:141] libmachine: (ha-805293) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672 (perms=drwxrwxr-x)
	I0930 19:59:17.039404   26315 main.go:141] libmachine: (ha-805293) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 19:59:17.039415   26315 main.go:141] libmachine: (ha-805293) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 19:59:17.039420   26315 main.go:141] libmachine: (ha-805293) Creating domain...
	I0930 19:59:17.039450   26315 main.go:141] libmachine: (ha-805293) DBG | Writing SSH key tar header
	I0930 19:59:17.039468   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:17.039218   26338 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293 ...
	I0930 19:59:17.039478   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293
	I0930 19:59:17.039485   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines
	I0930 19:59:17.039546   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:59:17.039570   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672
	I0930 19:59:17.039613   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 19:59:17.039667   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home/jenkins
	I0930 19:59:17.039707   26315 main.go:141] libmachine: (ha-805293) DBG | Checking permissions on dir: /home
	I0930 19:59:17.039720   26315 main.go:141] libmachine: (ha-805293) DBG | Skipping /home - not owner
	I0930 19:59:17.040595   26315 main.go:141] libmachine: (ha-805293) define libvirt domain using xml: 
	I0930 19:59:17.040607   26315 main.go:141] libmachine: (ha-805293) <domain type='kvm'>
	I0930 19:59:17.040612   26315 main.go:141] libmachine: (ha-805293)   <name>ha-805293</name>
	I0930 19:59:17.040617   26315 main.go:141] libmachine: (ha-805293)   <memory unit='MiB'>2200</memory>
	I0930 19:59:17.040621   26315 main.go:141] libmachine: (ha-805293)   <vcpu>2</vcpu>
	I0930 19:59:17.040625   26315 main.go:141] libmachine: (ha-805293)   <features>
	I0930 19:59:17.040630   26315 main.go:141] libmachine: (ha-805293)     <acpi/>
	I0930 19:59:17.040633   26315 main.go:141] libmachine: (ha-805293)     <apic/>
	I0930 19:59:17.040638   26315 main.go:141] libmachine: (ha-805293)     <pae/>
	I0930 19:59:17.040642   26315 main.go:141] libmachine: (ha-805293)     
	I0930 19:59:17.040649   26315 main.go:141] libmachine: (ha-805293)   </features>
	I0930 19:59:17.040654   26315 main.go:141] libmachine: (ha-805293)   <cpu mode='host-passthrough'>
	I0930 19:59:17.040661   26315 main.go:141] libmachine: (ha-805293)   
	I0930 19:59:17.040664   26315 main.go:141] libmachine: (ha-805293)   </cpu>
	I0930 19:59:17.040671   26315 main.go:141] libmachine: (ha-805293)   <os>
	I0930 19:59:17.040675   26315 main.go:141] libmachine: (ha-805293)     <type>hvm</type>
	I0930 19:59:17.040680   26315 main.go:141] libmachine: (ha-805293)     <boot dev='cdrom'/>
	I0930 19:59:17.040692   26315 main.go:141] libmachine: (ha-805293)     <boot dev='hd'/>
	I0930 19:59:17.040703   26315 main.go:141] libmachine: (ha-805293)     <bootmenu enable='no'/>
	I0930 19:59:17.040714   26315 main.go:141] libmachine: (ha-805293)   </os>
	I0930 19:59:17.040724   26315 main.go:141] libmachine: (ha-805293)   <devices>
	I0930 19:59:17.040732   26315 main.go:141] libmachine: (ha-805293)     <disk type='file' device='cdrom'>
	I0930 19:59:17.040739   26315 main.go:141] libmachine: (ha-805293)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/boot2docker.iso'/>
	I0930 19:59:17.040757   26315 main.go:141] libmachine: (ha-805293)       <target dev='hdc' bus='scsi'/>
	I0930 19:59:17.040766   26315 main.go:141] libmachine: (ha-805293)       <readonly/>
	I0930 19:59:17.040770   26315 main.go:141] libmachine: (ha-805293)     </disk>
	I0930 19:59:17.040776   26315 main.go:141] libmachine: (ha-805293)     <disk type='file' device='disk'>
	I0930 19:59:17.040783   26315 main.go:141] libmachine: (ha-805293)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 19:59:17.040791   26315 main.go:141] libmachine: (ha-805293)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/ha-805293.rawdisk'/>
	I0930 19:59:17.040797   26315 main.go:141] libmachine: (ha-805293)       <target dev='hda' bus='virtio'/>
	I0930 19:59:17.040802   26315 main.go:141] libmachine: (ha-805293)     </disk>
	I0930 19:59:17.040808   26315 main.go:141] libmachine: (ha-805293)     <interface type='network'>
	I0930 19:59:17.040814   26315 main.go:141] libmachine: (ha-805293)       <source network='mk-ha-805293'/>
	I0930 19:59:17.040822   26315 main.go:141] libmachine: (ha-805293)       <model type='virtio'/>
	I0930 19:59:17.040829   26315 main.go:141] libmachine: (ha-805293)     </interface>
	I0930 19:59:17.040833   26315 main.go:141] libmachine: (ha-805293)     <interface type='network'>
	I0930 19:59:17.040840   26315 main.go:141] libmachine: (ha-805293)       <source network='default'/>
	I0930 19:59:17.040844   26315 main.go:141] libmachine: (ha-805293)       <model type='virtio'/>
	I0930 19:59:17.040850   26315 main.go:141] libmachine: (ha-805293)     </interface>
	I0930 19:59:17.040855   26315 main.go:141] libmachine: (ha-805293)     <serial type='pty'>
	I0930 19:59:17.040860   26315 main.go:141] libmachine: (ha-805293)       <target port='0'/>
	I0930 19:59:17.040865   26315 main.go:141] libmachine: (ha-805293)     </serial>
	I0930 19:59:17.040871   26315 main.go:141] libmachine: (ha-805293)     <console type='pty'>
	I0930 19:59:17.040877   26315 main.go:141] libmachine: (ha-805293)       <target type='serial' port='0'/>
	I0930 19:59:17.040882   26315 main.go:141] libmachine: (ha-805293)     </console>
	I0930 19:59:17.040888   26315 main.go:141] libmachine: (ha-805293)     <rng model='virtio'>
	I0930 19:59:17.040894   26315 main.go:141] libmachine: (ha-805293)       <backend model='random'>/dev/random</backend>
	I0930 19:59:17.040901   26315 main.go:141] libmachine: (ha-805293)     </rng>
	I0930 19:59:17.040907   26315 main.go:141] libmachine: (ha-805293)     
	I0930 19:59:17.040917   26315 main.go:141] libmachine: (ha-805293)     
	I0930 19:59:17.040925   26315 main.go:141] libmachine: (ha-805293)   </devices>
	I0930 19:59:17.040928   26315 main.go:141] libmachine: (ha-805293) </domain>
	I0930 19:59:17.040937   26315 main.go:141] libmachine: (ha-805293) 
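	The XML echoed above is the libvirt domain definition the kvm2 driver generates for the ha-805293 VM (boot ISO, raw disk, two virtio NICs, serial console, RNG device). As a rough illustration of the two steps the driver then logs -- "define libvirt domain using xml" and "Creating domain..." -- the following is a minimal sketch assuming the libvirt.org/go/libvirt bindings; it is not the driver's actual code, and the XML string here is only a placeholder.

    package main

    import (
        "log"

        "libvirt.org/go/libvirt"
    )

    func main() {
        // Connect to the same URI the log shows (qemu:///system).
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatalf("connect: %v", err)
        }
        defer conn.Close()

        // domainXML stands in for the <domain type='kvm'>...</domain> document
        // printed above; this placeholder is not a valid domain definition.
        domainXML := "<domain type='kvm'>...</domain>"

        // Define the persistent domain from XML, then start it -- roughly the
        // "define libvirt domain" and "Creating domain..." steps in the log.
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            log.Fatalf("define: %v", err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            log.Fatalf("start: %v", err)
        }
        log.Println("domain started")
    }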
	I0930 19:59:17.045576   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:16:26:46 in network default
	I0930 19:59:17.046091   26315 main.go:141] libmachine: (ha-805293) Ensuring networks are active...
	I0930 19:59:17.046110   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:17.046918   26315 main.go:141] libmachine: (ha-805293) Ensuring network default is active
	I0930 19:59:17.047170   26315 main.go:141] libmachine: (ha-805293) Ensuring network mk-ha-805293 is active
	I0930 19:59:17.048069   26315 main.go:141] libmachine: (ha-805293) Getting domain xml...
	I0930 19:59:17.048925   26315 main.go:141] libmachine: (ha-805293) Creating domain...
	I0930 19:59:18.262935   26315 main.go:141] libmachine: (ha-805293) Waiting to get IP...
	I0930 19:59:18.263713   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:18.264097   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:18.264150   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:18.264077   26338 retry.go:31] will retry after 272.130038ms: waiting for machine to come up
	I0930 19:59:18.537624   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:18.538207   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:18.538236   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:18.538152   26338 retry.go:31] will retry after 384.976128ms: waiting for machine to come up
	I0930 19:59:18.924813   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:18.925224   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:18.925244   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:18.925193   26338 retry.go:31] will retry after 439.036671ms: waiting for machine to come up
	I0930 19:59:19.365792   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:19.366237   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:19.366268   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:19.366201   26338 retry.go:31] will retry after 523.251996ms: waiting for machine to come up
	I0930 19:59:19.890884   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:19.891377   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:19.891399   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:19.891276   26338 retry.go:31] will retry after 505.591634ms: waiting for machine to come up
	I0930 19:59:20.398064   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:20.398495   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:20.398518   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:20.398434   26338 retry.go:31] will retry after 840.243199ms: waiting for machine to come up
	I0930 19:59:21.240528   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:21.240974   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:21.241011   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:21.240928   26338 retry.go:31] will retry after 727.422374ms: waiting for machine to come up
	I0930 19:59:21.970399   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:21.970994   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:21.971027   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:21.970937   26338 retry.go:31] will retry after 1.250553906s: waiting for machine to come up
	I0930 19:59:23.223257   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:23.223588   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:23.223617   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:23.223524   26338 retry.go:31] will retry after 1.498180761s: waiting for machine to come up
	I0930 19:59:24.724089   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:24.724526   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:24.724547   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:24.724490   26338 retry.go:31] will retry after 1.710980244s: waiting for machine to come up
	I0930 19:59:26.437365   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:26.437733   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:26.437791   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:26.437707   26338 retry.go:31] will retry after 1.996131833s: waiting for machine to come up
	I0930 19:59:28.435394   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:28.435899   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:28.435920   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:28.435854   26338 retry.go:31] will retry after 2.313700889s: waiting for machine to come up
	I0930 19:59:30.752853   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:30.753113   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:30.753140   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:30.753096   26338 retry.go:31] will retry after 2.892875975s: waiting for machine to come up
	I0930 19:59:33.648697   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:33.649006   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find current IP address of domain ha-805293 in network mk-ha-805293
	I0930 19:59:33.649067   26315 main.go:141] libmachine: (ha-805293) DBG | I0930 19:59:33.648958   26338 retry.go:31] will retry after 4.162794884s: waiting for machine to come up
	I0930 19:59:37.813324   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:37.813940   26315 main.go:141] libmachine: (ha-805293) Found IP for machine: 192.168.39.3
	I0930 19:59:37.813967   26315 main.go:141] libmachine: (ha-805293) Reserving static IP address...
	I0930 19:59:37.813980   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has current primary IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:37.814363   26315 main.go:141] libmachine: (ha-805293) DBG | unable to find host DHCP lease matching {name: "ha-805293", mac: "52:54:00:a8:b8:c7", ip: "192.168.39.3"} in network mk-ha-805293
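	The block above is the driver polling for the VM's DHCP lease: each failed lookup schedules another attempt with a longer, jittered delay (272ms, 384ms, ... 4.16s) until 192.168.39.3 appears. The snippet below is a small, self-contained sketch of that retry-with-growing-backoff shape in Go; it is illustrative only and not minikube's actual retry.go.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitFor retries fn with a randomized, growing delay until it succeeds or
    // the deadline passes -- the same shape as the "will retry after ..." lines.
    func waitFor(timeout time.Duration, fn func() (string, error)) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if v, err := fn(); err == nil {
                return v, nil
            }
            // Add jitter and grow the delay, roughly matching the log's progression.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            delay = delay * 3 / 2
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        attempts := 0
        ip, err := waitFor(30*time.Second, func() (string, error) {
            attempts++
            if attempts < 4 {
                return "", errors.New("unable to find current IP address")
            }
            return "192.168.39.3", nil
        })
        fmt.Println(ip, err)
    }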
	I0930 19:59:37.894677   26315 main.go:141] libmachine: (ha-805293) DBG | Getting to WaitForSSH function...
	I0930 19:59:37.894706   26315 main.go:141] libmachine: (ha-805293) Reserved static IP address: 192.168.39.3
	I0930 19:59:37.894719   26315 main.go:141] libmachine: (ha-805293) Waiting for SSH to be available...
	I0930 19:59:37.897595   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:37.897922   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:37.897956   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:37.898087   26315 main.go:141] libmachine: (ha-805293) DBG | Using SSH client type: external
	I0930 19:59:37.898106   26315 main.go:141] libmachine: (ha-805293) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa (-rw-------)
	I0930 19:59:37.898139   26315 main.go:141] libmachine: (ha-805293) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 19:59:37.898155   26315 main.go:141] libmachine: (ha-805293) DBG | About to run SSH command:
	I0930 19:59:37.898169   26315 main.go:141] libmachine: (ha-805293) DBG | exit 0
	I0930 19:59:38.031893   26315 main.go:141] libmachine: (ha-805293) DBG | SSH cmd err, output: <nil>: 
	I0930 19:59:38.032180   26315 main.go:141] libmachine: (ha-805293) KVM machine creation complete!
	I0930 19:59:38.032650   26315 main.go:141] libmachine: (ha-805293) Calling .GetConfigRaw
	I0930 19:59:38.033332   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:38.033535   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:38.033703   26315 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 19:59:38.033722   26315 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 19:59:38.035148   26315 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 19:59:38.035166   26315 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 19:59:38.035171   26315 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 19:59:38.035176   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.037430   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.037779   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.037807   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.037886   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:38.038058   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.038172   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.038292   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:38.038466   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 19:59:38.038732   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 19:59:38.038742   26315 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 19:59:38.150707   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 19:59:38.150736   26315 main.go:141] libmachine: Detecting the provisioner...
	I0930 19:59:38.150744   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.153577   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.153985   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.154015   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.154165   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:38.154420   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.154616   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.154796   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:38.154961   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 19:59:38.155144   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 19:59:38.155155   26315 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 19:59:38.268071   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 19:59:38.268223   26315 main.go:141] libmachine: found compatible host: buildroot
	I0930 19:59:38.268235   26315 main.go:141] libmachine: Provisioning with buildroot...
	I0930 19:59:38.268248   26315 main.go:141] libmachine: (ha-805293) Calling .GetMachineName
	I0930 19:59:38.268485   26315 buildroot.go:166] provisioning hostname "ha-805293"
	I0930 19:59:38.268519   26315 main.go:141] libmachine: (ha-805293) Calling .GetMachineName
	I0930 19:59:38.268699   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.271029   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.271351   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.271376   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.271551   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:38.271727   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.271905   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.272048   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:38.272215   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 19:59:38.272420   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 19:59:38.272431   26315 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-805293 && echo "ha-805293" | sudo tee /etc/hostname
	I0930 19:59:38.397989   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-805293
	
	I0930 19:59:38.398019   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.401388   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.401792   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.401818   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.402043   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:38.402262   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.402446   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.402640   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:38.402835   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 19:59:38.403014   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 19:59:38.403030   26315 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-805293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-805293/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-805293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 19:59:38.523981   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 19:59:38.524025   26315 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 19:59:38.524082   26315 buildroot.go:174] setting up certificates
	I0930 19:59:38.524097   26315 provision.go:84] configureAuth start
	I0930 19:59:38.524111   26315 main.go:141] libmachine: (ha-805293) Calling .GetMachineName
	I0930 19:59:38.524383   26315 main.go:141] libmachine: (ha-805293) Calling .GetIP
	I0930 19:59:38.527277   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.527630   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.527658   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.527836   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.530619   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.530940   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.530964   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.531100   26315 provision.go:143] copyHostCerts
	I0930 19:59:38.531123   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 19:59:38.531167   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 19:59:38.531177   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 19:59:38.531239   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 19:59:38.531347   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 19:59:38.531367   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 19:59:38.531371   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 19:59:38.531397   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 19:59:38.531451   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 19:59:38.531467   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 19:59:38.531473   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 19:59:38.531511   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 19:59:38.531604   26315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.ha-805293 san=[127.0.0.1 192.168.39.3 ha-805293 localhost minikube]
	I0930 19:59:38.676763   26315 provision.go:177] copyRemoteCerts
	I0930 19:59:38.676824   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 19:59:38.676847   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.679571   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.680006   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.680032   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.680205   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:38.680392   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.680556   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:38.680720   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 19:59:38.765532   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 19:59:38.765609   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 19:59:38.789748   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 19:59:38.789818   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0930 19:59:38.811783   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 19:59:38.811868   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 19:59:38.834125   26315 provision.go:87] duration metric: took 310.01212ms to configureAuth
	I0930 19:59:38.834160   26315 buildroot.go:189] setting minikube options for container-runtime
	I0930 19:59:38.834431   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 19:59:38.834524   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:38.837303   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.837631   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:38.837775   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:38.838052   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:38.838232   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.838399   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:38.838530   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:38.838676   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 19:59:38.838897   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 19:59:38.838918   26315 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 19:59:39.069352   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 19:59:39.069381   26315 main.go:141] libmachine: Checking connection to Docker...
	I0930 19:59:39.069395   26315 main.go:141] libmachine: (ha-805293) Calling .GetURL
	I0930 19:59:39.070641   26315 main.go:141] libmachine: (ha-805293) DBG | Using libvirt version 6000000
	I0930 19:59:39.073164   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.073482   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.073521   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.073664   26315 main.go:141] libmachine: Docker is up and running!
	I0930 19:59:39.073675   26315 main.go:141] libmachine: Reticulating splines...
	I0930 19:59:39.073688   26315 client.go:171] duration metric: took 22.519163927s to LocalClient.Create
	I0930 19:59:39.073710   26315 start.go:167] duration metric: took 22.519226404s to libmachine.API.Create "ha-805293"
	I0930 19:59:39.073725   26315 start.go:293] postStartSetup for "ha-805293" (driver="kvm2")
	I0930 19:59:39.073739   26315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 19:59:39.073759   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:39.073979   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 19:59:39.074068   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:39.076481   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.076820   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.076872   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.076969   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:39.077131   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:39.077256   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:39.077345   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 19:59:39.162144   26315 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 19:59:39.166524   26315 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 19:59:39.166551   26315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 19:59:39.166625   26315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 19:59:39.166691   26315 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 19:59:39.166701   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /etc/ssl/certs/148752.pem
	I0930 19:59:39.166826   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 19:59:39.175862   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 19:59:39.198495   26315 start.go:296] duration metric: took 124.748363ms for postStartSetup
	I0930 19:59:39.198552   26315 main.go:141] libmachine: (ha-805293) Calling .GetConfigRaw
	I0930 19:59:39.199175   26315 main.go:141] libmachine: (ha-805293) Calling .GetIP
	I0930 19:59:39.202045   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.202447   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.202472   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.202702   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 19:59:39.202915   26315 start.go:128] duration metric: took 22.667085053s to createHost
	I0930 19:59:39.202950   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:39.205157   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.205495   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.205516   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.205668   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:39.205846   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:39.205981   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:39.206111   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:39.206270   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 19:59:39.206542   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 19:59:39.206565   26315 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 19:59:39.320050   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727726379.295271539
	
	I0930 19:59:39.320076   26315 fix.go:216] guest clock: 1727726379.295271539
	I0930 19:59:39.320086   26315 fix.go:229] Guest: 2024-09-30 19:59:39.295271539 +0000 UTC Remote: 2024-09-30 19:59:39.202937168 +0000 UTC m=+22.774027114 (delta=92.334371ms)
	I0930 19:59:39.320118   26315 fix.go:200] guest clock delta is within tolerance: 92.334371ms
	I0930 19:59:39.320128   26315 start.go:83] releasing machines lock for "ha-805293", held for 22.784384982s
	I0930 19:59:39.320156   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:39.320464   26315 main.go:141] libmachine: (ha-805293) Calling .GetIP
	I0930 19:59:39.323340   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.323749   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.323763   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.323980   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:39.324511   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:39.324710   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 19:59:39.324873   26315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 19:59:39.324922   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:39.324933   26315 ssh_runner.go:195] Run: cat /version.json
	I0930 19:59:39.324953   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 19:59:39.327479   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.327790   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.327833   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.327954   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.327975   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:39.328205   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:39.328371   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:39.328394   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:39.328435   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:39.328560   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 19:59:39.328620   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 19:59:39.328752   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 19:59:39.328910   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 19:59:39.329053   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 19:59:39.449869   26315 ssh_runner.go:195] Run: systemctl --version
	I0930 19:59:39.457140   26315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 19:59:39.620534   26315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 19:59:39.626812   26315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 19:59:39.626884   26315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 19:59:39.643150   26315 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 19:59:39.643182   26315 start.go:495] detecting cgroup driver to use...
	I0930 19:59:39.643259   26315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 19:59:39.659582   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 19:59:39.673481   26315 docker.go:217] disabling cri-docker service (if available) ...
	I0930 19:59:39.673546   26315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 19:59:39.687166   26315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 19:59:39.700766   26315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 19:59:39.817845   26315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 19:59:39.989160   26315 docker.go:233] disabling docker service ...
	I0930 19:59:39.989251   26315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 19:59:40.003138   26315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 19:59:40.016004   26315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 19:59:40.149065   26315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 19:59:40.264254   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 19:59:40.278167   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 19:59:40.296364   26315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 19:59:40.296421   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.306661   26315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 19:59:40.306731   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.317138   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.327466   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.337951   26315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 19:59:40.348585   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.358684   26315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.375315   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 19:59:40.385587   26315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 19:59:40.394996   26315 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 19:59:40.395092   26315 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 19:59:40.408121   26315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 19:59:40.417783   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 19:59:40.532464   26315 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 19:59:40.627203   26315 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 19:59:40.627277   26315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 19:59:40.632142   26315 start.go:563] Will wait 60s for crictl version
	I0930 19:59:40.632198   26315 ssh_runner.go:195] Run: which crictl
	I0930 19:59:40.635892   26315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 19:59:40.673372   26315 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 19:59:40.673453   26315 ssh_runner.go:195] Run: crio --version
	I0930 19:59:40.701810   26315 ssh_runner.go:195] Run: crio --version
	I0930 19:59:40.733603   26315 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 19:59:40.734810   26315 main.go:141] libmachine: (ha-805293) Calling .GetIP
	I0930 19:59:40.737789   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:40.738162   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 19:59:40.738188   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 19:59:40.738414   26315 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 19:59:40.742812   26315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 19:59:40.755762   26315 kubeadm.go:883] updating cluster {Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 19:59:40.755880   26315 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 19:59:40.755941   26315 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 19:59:40.795843   26315 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 19:59:40.795919   26315 ssh_runner.go:195] Run: which lz4
	I0930 19:59:40.799847   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0930 19:59:40.799948   26315 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 19:59:40.803954   26315 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 19:59:40.803978   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 19:59:42.086885   26315 crio.go:462] duration metric: took 1.286971524s to copy over tarball
	I0930 19:59:42.086956   26315 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 19:59:44.140911   26315 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.053919148s)
	I0930 19:59:44.140946   26315 crio.go:469] duration metric: took 2.054033393s to extract the tarball
	I0930 19:59:44.140956   26315 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 19:59:44.176934   26315 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 19:59:44.223432   26315 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 19:59:44.223453   26315 cache_images.go:84] Images are preloaded, skipping loading
	I0930 19:59:44.223463   26315 kubeadm.go:934] updating node { 192.168.39.3 8443 v1.31.1 crio true true} ...
	I0930 19:59:44.223618   26315 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-805293 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 19:59:44.223687   26315 ssh_runner.go:195] Run: crio config
	I0930 19:59:44.267892   26315 cni.go:84] Creating CNI manager for ""
	I0930 19:59:44.267913   26315 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0930 19:59:44.267927   26315 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 19:59:44.267969   26315 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-805293 NodeName:ha-805293 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 19:59:44.268143   26315 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-805293"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 19:59:44.268174   26315 kube-vip.go:115] generating kube-vip config ...
	I0930 19:59:44.268226   26315 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 19:59:44.290057   26315 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 19:59:44.290186   26315 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0930 19:59:44.290252   26315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 19:59:44.300619   26315 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 19:59:44.300694   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0930 19:59:44.312702   26315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0930 19:59:44.329980   26315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 19:59:44.347106   26315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0930 19:59:44.363429   26315 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0930 19:59:44.379706   26315 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 19:59:44.383786   26315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 19:59:44.396392   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 19:59:44.511834   26315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 19:59:44.528890   26315 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293 for IP: 192.168.39.3
	I0930 19:59:44.528918   26315 certs.go:194] generating shared ca certs ...
	I0930 19:59:44.528990   26315 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:44.529203   26315 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 19:59:44.529261   26315 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 19:59:44.529273   26315 certs.go:256] generating profile certs ...
	I0930 19:59:44.529338   26315 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key
	I0930 19:59:44.529377   26315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.crt with IP's: []
	I0930 19:59:44.693203   26315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.crt ...
	I0930 19:59:44.693232   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.crt: {Name:mk4ee04dd06bd91d73f7f1298e33968b422b097c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:44.693403   26315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key ...
	I0930 19:59:44.693413   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key: {Name:mk2b8ad6c09983ddb0203e6dca1df4008d2fe717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:44.693487   26315 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.1b433d78
	I0930 19:59:44.693501   26315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.1b433d78 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.3 192.168.39.254]
	I0930 19:59:44.767682   26315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.1b433d78 ...
	I0930 19:59:44.767709   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.1b433d78: {Name:mkf1b16d36ab45268d051f89cfe928869656e760 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:44.767864   26315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.1b433d78 ...
	I0930 19:59:44.767875   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.1b433d78: {Name:mk53eca62135b4c1b261b7c937012d89f293e976 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:44.767944   26315 certs.go:381] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.1b433d78 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt
	I0930 19:59:44.768026   26315 certs.go:385] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.1b433d78 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key
	I0930 19:59:44.768082   26315 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key
	I0930 19:59:44.768096   26315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt with IP's: []
	I0930 19:59:45.223535   26315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt ...
	I0930 19:59:45.223567   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt: {Name:mke738cc3ccc573243158c6f5e5f022828f32c28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:45.223723   26315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key ...
	I0930 19:59:45.223733   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key: {Name:mkbfe8ac8fc7a409b1152c27d19ceb3cdc436834 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:59:45.223814   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 19:59:45.223831   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 19:59:45.223844   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 19:59:45.223854   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 19:59:45.223865   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 19:59:45.223889   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 19:59:45.223908   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 19:59:45.223920   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 19:59:45.223964   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 19:59:45.224006   26315 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 19:59:45.224013   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 19:59:45.224036   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 19:59:45.224057   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 19:59:45.224083   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 19:59:45.224119   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 19:59:45.224143   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem -> /usr/share/ca-certificates/14875.pem
	I0930 19:59:45.224156   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /usr/share/ca-certificates/148752.pem
	I0930 19:59:45.224168   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:59:45.224809   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 19:59:45.251773   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 19:59:45.283221   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 19:59:45.307169   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 19:59:45.340795   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0930 19:59:45.364921   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 19:59:45.388786   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 19:59:45.412412   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 19:59:45.437530   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 19:59:45.462538   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 19:59:45.486247   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 19:59:45.510070   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 19:59:45.527040   26315 ssh_runner.go:195] Run: openssl version
	I0930 19:59:45.532953   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 19:59:45.544314   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 19:59:45.548732   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 19:59:45.548808   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 19:59:45.554737   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 19:59:45.565237   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 19:59:45.576275   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 19:59:45.580833   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 19:59:45.580899   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 19:59:45.586723   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 19:59:45.597151   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 19:59:45.607829   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:59:45.612479   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:59:45.612538   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 19:59:45.618560   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 19:59:45.629886   26315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 19:59:45.634469   26315 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 19:59:45.634548   26315 kubeadm.go:392] StartCluster: {Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 19:59:45.634646   26315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 19:59:45.634717   26315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 19:59:45.672608   26315 cri.go:89] found id: ""
	I0930 19:59:45.672680   26315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 19:59:45.682253   26315 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 19:59:45.695746   26315 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 19:59:45.707747   26315 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 19:59:45.707771   26315 kubeadm.go:157] found existing configuration files:
	
	I0930 19:59:45.707824   26315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 19:59:45.717218   26315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 19:59:45.717271   26315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 19:59:45.727134   26315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 19:59:45.736453   26315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 19:59:45.736514   26315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 19:59:45.746137   26315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 19:59:45.755226   26315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 19:59:45.755300   26315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 19:59:45.765188   26315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 19:59:45.774772   26315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 19:59:45.774830   26315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 19:59:45.784513   26315 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 19:59:45.891942   26315 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 19:59:45.891997   26315 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 19:59:45.998241   26315 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 19:59:45.998404   26315 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 19:59:45.998552   26315 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 19:59:46.014075   26315 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 19:59:46.112806   26315 out.go:235]   - Generating certificates and keys ...
	I0930 19:59:46.112955   26315 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 19:59:46.113026   26315 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 19:59:46.210951   26315 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0930 19:59:46.354582   26315 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0930 19:59:46.555785   26315 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0930 19:59:46.646311   26315 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0930 19:59:46.770735   26315 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0930 19:59:46.770873   26315 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-805293 localhost] and IPs [192.168.39.3 127.0.0.1 ::1]
	I0930 19:59:47.044600   26315 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0930 19:59:47.044796   26315 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-805293 localhost] and IPs [192.168.39.3 127.0.0.1 ::1]
	I0930 19:59:47.135575   26315 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0930 19:59:47.309550   26315 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0930 19:59:47.407346   26315 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0930 19:59:47.407491   26315 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 19:59:47.782301   26315 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 19:59:47.938840   26315 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 19:59:48.153368   26315 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 19:59:48.373848   26315 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 19:59:48.924719   26315 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 19:59:48.925435   26315 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 19:59:48.929527   26315 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 19:59:48.931731   26315 out.go:235]   - Booting up control plane ...
	I0930 19:59:48.931901   26315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 19:59:48.931984   26315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 19:59:48.932610   26315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 19:59:48.952672   26315 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 19:59:48.959981   26315 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 19:59:48.960193   26315 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 19:59:49.095726   26315 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 19:59:49.095850   26315 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 19:59:49.596721   26315 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.116798ms
	I0930 19:59:49.596826   26315 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 19:59:55.702855   26315 kubeadm.go:310] [api-check] The API server is healthy after 6.110016436s
	I0930 19:59:55.715163   26315 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 19:59:55.739975   26315 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 19:59:56.278812   26315 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 19:59:56.279051   26315 kubeadm.go:310] [mark-control-plane] Marking the node ha-805293 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 19:59:56.293005   26315 kubeadm.go:310] [bootstrap-token] Using token: p0s0d4.yc45k5nzuh1mipkz
	I0930 19:59:56.294535   26315 out.go:235]   - Configuring RBAC rules ...
	I0930 19:59:56.294681   26315 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 19:59:56.299474   26315 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 19:59:56.308838   26315 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 19:59:56.312908   26315 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 19:59:56.320143   26315 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 19:59:56.328834   26315 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 19:59:56.351618   26315 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 19:59:56.617778   26315 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 19:59:57.116458   26315 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 19:59:57.116486   26315 kubeadm.go:310] 
	I0930 19:59:57.116560   26315 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 19:59:57.116570   26315 kubeadm.go:310] 
	I0930 19:59:57.116674   26315 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 19:59:57.116685   26315 kubeadm.go:310] 
	I0930 19:59:57.116719   26315 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 19:59:57.116823   26315 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 19:59:57.116882   26315 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 19:59:57.116886   26315 kubeadm.go:310] 
	I0930 19:59:57.116955   26315 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 19:59:57.116980   26315 kubeadm.go:310] 
	I0930 19:59:57.117053   26315 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 19:59:57.117064   26315 kubeadm.go:310] 
	I0930 19:59:57.117137   26315 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 19:59:57.117202   26315 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 19:59:57.117263   26315 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 19:59:57.117268   26315 kubeadm.go:310] 
	I0930 19:59:57.117377   26315 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 19:59:57.117490   26315 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 19:59:57.117501   26315 kubeadm.go:310] 
	I0930 19:59:57.117607   26315 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token p0s0d4.yc45k5nzuh1mipkz \
	I0930 19:59:57.117749   26315 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a \
	I0930 19:59:57.117783   26315 kubeadm.go:310] 	--control-plane 
	I0930 19:59:57.117789   26315 kubeadm.go:310] 
	I0930 19:59:57.117912   26315 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 19:59:57.117922   26315 kubeadm.go:310] 
	I0930 19:59:57.117993   26315 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token p0s0d4.yc45k5nzuh1mipkz \
	I0930 19:59:57.118080   26315 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a 
	I0930 19:59:57.119219   26315 kubeadm.go:310] W0930 19:59:45.871969     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 19:59:57.119559   26315 kubeadm.go:310] W0930 19:59:45.872918     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 19:59:57.119653   26315 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
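The kubeadm output above already spells out the post-init steps and join commands; as a companion, here is a minimal sketch of verifying the new control plane from the node itself, assuming admin.conf is still at the path shown above (the two health checks are illustrative additions, not commands minikube runs):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  # Illustrative checks: confirm the API server answers and the node registered.
  kubectl get --raw='/readyz?verbose'
  kubectl get nodes -o wide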
	I0930 19:59:57.119676   26315 cni.go:84] Creating CNI manager for ""
	I0930 19:59:57.119684   26315 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0930 19:59:57.121508   26315 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0930 19:59:57.122778   26315 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0930 19:59:57.129018   26315 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0930 19:59:57.129033   26315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0930 19:59:57.148058   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0930 19:59:57.490355   26315 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 19:59:57.490415   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:59:57.490422   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-805293 minikube.k8s.io/updated_at=2024_09_30T19_59_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022 minikube.k8s.io/name=ha-805293 minikube.k8s.io/primary=true
	I0930 19:59:57.530433   26315 ops.go:34] apiserver oom_adj: -16
	I0930 19:59:57.632942   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:59:58.133232   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:59:58.633968   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:59:59.133876   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 19:59:59.633715   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 20:00:00.134062   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 20:00:00.633798   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 20:00:01.133378   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 20:00:01.219465   26315 kubeadm.go:1113] duration metric: took 3.729111543s to wait for elevateKubeSystemPrivileges
	I0930 20:00:01.219521   26315 kubeadm.go:394] duration metric: took 15.584976844s to StartCluster
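The repeated "kubectl get sa default" calls between 19:59:57 and 20:00:01 are minikube polling until the default ServiceAccount exists (the elevateKubeSystemPrivileges step timed just above). A rough bash equivalent of that wait, with an illustrative timeout and interval:

  KUBECTL=/var/lib/minikube/binaries/v1.31.1/kubectl
  CFG=--kubeconfig=/var/lib/minikube/kubeconfig
  # Poll until the "default" ServiceAccount shows up; give up after ~60s.
  for i in $(seq 1 120); do
    sudo "$KUBECTL" "$CFG" get sa default >/dev/null 2>&1 && break
    sleep 0.5
  done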
	I0930 20:00:01.219559   26315 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:00:01.219656   26315 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:00:01.220437   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:00:01.220719   26315 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:00:01.220739   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0930 20:00:01.220750   26315 start.go:241] waiting for startup goroutines ...
	I0930 20:00:01.220771   26315 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 20:00:01.220861   26315 addons.go:69] Setting storage-provisioner=true in profile "ha-805293"
	I0930 20:00:01.220890   26315 addons.go:234] Setting addon storage-provisioner=true in "ha-805293"
	I0930 20:00:01.220907   26315 addons.go:69] Setting default-storageclass=true in profile "ha-805293"
	I0930 20:00:01.220929   26315 host.go:66] Checking if "ha-805293" exists ...
	I0930 20:00:01.220943   26315 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-805293"
	I0930 20:00:01.220958   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:00:01.221373   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:01.221421   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:01.221455   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:01.221495   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:01.237192   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38991
	I0930 20:00:01.237232   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44093
	I0930 20:00:01.237724   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:01.237776   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:01.238255   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:01.238280   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:01.238371   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:01.238394   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:01.238662   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:01.238738   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:01.238902   26315 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 20:00:01.239184   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:01.239227   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:01.241145   26315 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:00:01.241484   26315 kapi.go:59] client config for ha-805293: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key", CAFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0930 20:00:01.242040   26315 cert_rotation.go:140] Starting client certificate rotation controller
	I0930 20:00:01.242321   26315 addons.go:234] Setting addon default-storageclass=true in "ha-805293"
	I0930 20:00:01.242364   26315 host.go:66] Checking if "ha-805293" exists ...
	I0930 20:00:01.242753   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:01.242800   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:01.255454   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34783
	I0930 20:00:01.255998   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:01.256626   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:01.256655   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:01.257008   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:01.257244   26315 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 20:00:01.258602   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38221
	I0930 20:00:01.259101   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:01.259492   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:00:01.259705   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:01.259732   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:01.260119   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:01.260656   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:01.260698   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:01.261796   26315 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 20:00:01.263230   26315 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 20:00:01.263251   26315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 20:00:01.263275   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:00:01.266511   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:01.266953   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:00:01.266979   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:01.267159   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:00:01.267342   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:00:01.267495   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:00:01.267640   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:00:01.276774   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42613
	I0930 20:00:01.277256   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:01.277779   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:01.277808   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:01.278167   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:01.278348   26315 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 20:00:01.279998   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:00:01.280191   26315 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 20:00:01.280204   26315 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 20:00:01.280218   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:00:01.282743   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:01.283181   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:00:01.283205   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:01.283377   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:00:01.283566   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:00:01.283719   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:00:01.283866   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:00:01.308679   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0930 20:00:01.431260   26315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 20:00:01.433924   26315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 20:00:01.558490   26315 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
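The long sed pipeline at 20:00:01.308 rewrites the CoreDNS Corefile in place: it inserts a hosts block mapping 192.168.39.1 to host.minikube.internal (with fallthrough) ahead of the "forward . /etc/resolv.conf" line, and a "log" directive ahead of "errors", which is what the "host record injected" message above confirms. A quick way to inspect the patched ConfigMap afterwards (plain kubectl, shown only as a hedged verification step):

  # Show the patched Corefile and highlight the injected lines.
  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' \
    | grep -n -E 'hosts|host\.minikube\.internal|fallthrough|log'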
	I0930 20:00:01.621587   26315 main.go:141] libmachine: Making call to close driver server
	I0930 20:00:01.621614   26315 main.go:141] libmachine: (ha-805293) Calling .Close
	I0930 20:00:01.621883   26315 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:00:01.621900   26315 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:00:01.621908   26315 main.go:141] libmachine: Making call to close driver server
	I0930 20:00:01.621931   26315 main.go:141] libmachine: (ha-805293) DBG | Closing plugin on server side
	I0930 20:00:01.621995   26315 main.go:141] libmachine: (ha-805293) Calling .Close
	I0930 20:00:01.622217   26315 main.go:141] libmachine: (ha-805293) DBG | Closing plugin on server side
	I0930 20:00:01.622234   26315 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:00:01.622247   26315 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:00:01.622328   26315 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0930 20:00:01.622377   26315 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0930 20:00:01.622485   26315 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0930 20:00:01.622496   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:01.622504   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:01.622508   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:01.630544   26315 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0930 20:00:01.631089   26315 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0930 20:00:01.631103   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:01.631110   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:01.631115   26315 round_trippers.go:473]     Content-Type: application/json
	I0930 20:00:01.631119   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:01.636731   26315 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 20:00:01.636889   26315 main.go:141] libmachine: Making call to close driver server
	I0930 20:00:01.636905   26315 main.go:141] libmachine: (ha-805293) Calling .Close
	I0930 20:00:01.637222   26315 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:00:01.637249   26315 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:00:01.637227   26315 main.go:141] libmachine: (ha-805293) DBG | Closing plugin on server side
	I0930 20:00:01.910454   26315 main.go:141] libmachine: Making call to close driver server
	I0930 20:00:01.910493   26315 main.go:141] libmachine: (ha-805293) Calling .Close
	I0930 20:00:01.910790   26315 main.go:141] libmachine: (ha-805293) DBG | Closing plugin on server side
	I0930 20:00:01.910900   26315 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:00:01.910916   26315 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:00:01.910928   26315 main.go:141] libmachine: Making call to close driver server
	I0930 20:00:01.910933   26315 main.go:141] libmachine: (ha-805293) Calling .Close
	I0930 20:00:01.911215   26315 main.go:141] libmachine: (ha-805293) DBG | Closing plugin on server side
	I0930 20:00:01.911245   26315 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:00:01.911255   26315 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:00:01.913341   26315 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0930 20:00:01.914640   26315 addons.go:510] duration metric: took 693.870653ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0930 20:00:01.914685   26315 start.go:246] waiting for cluster config update ...
	I0930 20:00:01.914700   26315 start.go:255] writing updated cluster config ...
	I0930 20:00:01.917528   26315 out.go:201] 
	I0930 20:00:01.919324   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:00:01.919441   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:00:01.921983   26315 out.go:177] * Starting "ha-805293-m02" control-plane node in "ha-805293" cluster
	I0930 20:00:01.923837   26315 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 20:00:01.923877   26315 cache.go:56] Caching tarball of preloaded images
	I0930 20:00:01.924007   26315 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 20:00:01.924027   26315 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 20:00:01.924140   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:00:01.924406   26315 start.go:360] acquireMachinesLock for ha-805293-m02: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 20:00:01.924476   26315 start.go:364] duration metric: took 42.723µs to acquireMachinesLock for "ha-805293-m02"
	I0930 20:00:01.924503   26315 start.go:93] Provisioning new machine with config: &{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:00:01.924602   26315 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0930 20:00:01.926254   26315 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 20:00:01.926373   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:01.926422   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:01.942099   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43055
	I0930 20:00:01.942642   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:01.943165   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:01.943189   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:01.943522   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:01.943810   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetMachineName
	I0930 20:00:01.943943   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:01.944136   26315 start.go:159] libmachine.API.Create for "ha-805293" (driver="kvm2")
	I0930 20:00:01.944171   26315 client.go:168] LocalClient.Create starting
	I0930 20:00:01.944215   26315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem
	I0930 20:00:01.944259   26315 main.go:141] libmachine: Decoding PEM data...
	I0930 20:00:01.944280   26315 main.go:141] libmachine: Parsing certificate...
	I0930 20:00:01.944361   26315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem
	I0930 20:00:01.944395   26315 main.go:141] libmachine: Decoding PEM data...
	I0930 20:00:01.944410   26315 main.go:141] libmachine: Parsing certificate...
	I0930 20:00:01.944433   26315 main.go:141] libmachine: Running pre-create checks...
	I0930 20:00:01.944443   26315 main.go:141] libmachine: (ha-805293-m02) Calling .PreCreateCheck
	I0930 20:00:01.944614   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetConfigRaw
	I0930 20:00:01.945016   26315 main.go:141] libmachine: Creating machine...
	I0930 20:00:01.945030   26315 main.go:141] libmachine: (ha-805293-m02) Calling .Create
	I0930 20:00:01.945196   26315 main.go:141] libmachine: (ha-805293-m02) Creating KVM machine...
	I0930 20:00:01.946629   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found existing default KVM network
	I0930 20:00:01.946731   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found existing private KVM network mk-ha-805293
	I0930 20:00:01.946865   26315 main.go:141] libmachine: (ha-805293-m02) Setting up store path in /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02 ...
	I0930 20:00:01.946894   26315 main.go:141] libmachine: (ha-805293-m02) Building disk image from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 20:00:01.946988   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:01.946872   26664 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:00:01.947079   26315 main.go:141] libmachine: (ha-805293-m02) Downloading /home/jenkins/minikube-integration/19736-7672/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 20:00:02.217368   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:02.217234   26664 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa...
	I0930 20:00:02.510082   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:02.509926   26664 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/ha-805293-m02.rawdisk...
	I0930 20:00:02.510127   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Writing magic tar header
	I0930 20:00:02.510145   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Writing SSH key tar header
	I0930 20:00:02.510158   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:02.510035   26664 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02 ...
	I0930 20:00:02.510175   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02
	I0930 20:00:02.510188   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines
	I0930 20:00:02.510199   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:00:02.510217   26315 main.go:141] libmachine: (ha-805293-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02 (perms=drwx------)
	I0930 20:00:02.510229   26315 main.go:141] libmachine: (ha-805293-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines (perms=drwxr-xr-x)
	I0930 20:00:02.510240   26315 main.go:141] libmachine: (ha-805293-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube (perms=drwxr-xr-x)
	I0930 20:00:02.510255   26315 main.go:141] libmachine: (ha-805293-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672 (perms=drwxrwxr-x)
	I0930 20:00:02.510266   26315 main.go:141] libmachine: (ha-805293-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 20:00:02.510281   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672
	I0930 20:00:02.510294   26315 main.go:141] libmachine: (ha-805293-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 20:00:02.510308   26315 main.go:141] libmachine: (ha-805293-m02) Creating domain...
	I0930 20:00:02.510328   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 20:00:02.510352   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home/jenkins
	I0930 20:00:02.510359   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Checking permissions on dir: /home
	I0930 20:00:02.510364   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Skipping /home - not owner
	I0930 20:00:02.511282   26315 main.go:141] libmachine: (ha-805293-m02) define libvirt domain using xml: 
	I0930 20:00:02.511306   26315 main.go:141] libmachine: (ha-805293-m02) <domain type='kvm'>
	I0930 20:00:02.511317   26315 main.go:141] libmachine: (ha-805293-m02)   <name>ha-805293-m02</name>
	I0930 20:00:02.511328   26315 main.go:141] libmachine: (ha-805293-m02)   <memory unit='MiB'>2200</memory>
	I0930 20:00:02.511338   26315 main.go:141] libmachine: (ha-805293-m02)   <vcpu>2</vcpu>
	I0930 20:00:02.511348   26315 main.go:141] libmachine: (ha-805293-m02)   <features>
	I0930 20:00:02.511357   26315 main.go:141] libmachine: (ha-805293-m02)     <acpi/>
	I0930 20:00:02.511364   26315 main.go:141] libmachine: (ha-805293-m02)     <apic/>
	I0930 20:00:02.511371   26315 main.go:141] libmachine: (ha-805293-m02)     <pae/>
	I0930 20:00:02.511377   26315 main.go:141] libmachine: (ha-805293-m02)     
	I0930 20:00:02.511388   26315 main.go:141] libmachine: (ha-805293-m02)   </features>
	I0930 20:00:02.511395   26315 main.go:141] libmachine: (ha-805293-m02)   <cpu mode='host-passthrough'>
	I0930 20:00:02.511405   26315 main.go:141] libmachine: (ha-805293-m02)   
	I0930 20:00:02.511416   26315 main.go:141] libmachine: (ha-805293-m02)   </cpu>
	I0930 20:00:02.511444   26315 main.go:141] libmachine: (ha-805293-m02)   <os>
	I0930 20:00:02.511468   26315 main.go:141] libmachine: (ha-805293-m02)     <type>hvm</type>
	I0930 20:00:02.511481   26315 main.go:141] libmachine: (ha-805293-m02)     <boot dev='cdrom'/>
	I0930 20:00:02.511494   26315 main.go:141] libmachine: (ha-805293-m02)     <boot dev='hd'/>
	I0930 20:00:02.511505   26315 main.go:141] libmachine: (ha-805293-m02)     <bootmenu enable='no'/>
	I0930 20:00:02.511512   26315 main.go:141] libmachine: (ha-805293-m02)   </os>
	I0930 20:00:02.511517   26315 main.go:141] libmachine: (ha-805293-m02)   <devices>
	I0930 20:00:02.511535   26315 main.go:141] libmachine: (ha-805293-m02)     <disk type='file' device='cdrom'>
	I0930 20:00:02.511552   26315 main.go:141] libmachine: (ha-805293-m02)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/boot2docker.iso'/>
	I0930 20:00:02.511561   26315 main.go:141] libmachine: (ha-805293-m02)       <target dev='hdc' bus='scsi'/>
	I0930 20:00:02.511591   26315 main.go:141] libmachine: (ha-805293-m02)       <readonly/>
	I0930 20:00:02.511613   26315 main.go:141] libmachine: (ha-805293-m02)     </disk>
	I0930 20:00:02.511630   26315 main.go:141] libmachine: (ha-805293-m02)     <disk type='file' device='disk'>
	I0930 20:00:02.511644   26315 main.go:141] libmachine: (ha-805293-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 20:00:02.511661   26315 main.go:141] libmachine: (ha-805293-m02)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/ha-805293-m02.rawdisk'/>
	I0930 20:00:02.511673   26315 main.go:141] libmachine: (ha-805293-m02)       <target dev='hda' bus='virtio'/>
	I0930 20:00:02.511692   26315 main.go:141] libmachine: (ha-805293-m02)     </disk>
	I0930 20:00:02.511711   26315 main.go:141] libmachine: (ha-805293-m02)     <interface type='network'>
	I0930 20:00:02.511729   26315 main.go:141] libmachine: (ha-805293-m02)       <source network='mk-ha-805293'/>
	I0930 20:00:02.511746   26315 main.go:141] libmachine: (ha-805293-m02)       <model type='virtio'/>
	I0930 20:00:02.511758   26315 main.go:141] libmachine: (ha-805293-m02)     </interface>
	I0930 20:00:02.511769   26315 main.go:141] libmachine: (ha-805293-m02)     <interface type='network'>
	I0930 20:00:02.511784   26315 main.go:141] libmachine: (ha-805293-m02)       <source network='default'/>
	I0930 20:00:02.511795   26315 main.go:141] libmachine: (ha-805293-m02)       <model type='virtio'/>
	I0930 20:00:02.511824   26315 main.go:141] libmachine: (ha-805293-m02)     </interface>
	I0930 20:00:02.511843   26315 main.go:141] libmachine: (ha-805293-m02)     <serial type='pty'>
	I0930 20:00:02.511853   26315 main.go:141] libmachine: (ha-805293-m02)       <target port='0'/>
	I0930 20:00:02.511862   26315 main.go:141] libmachine: (ha-805293-m02)     </serial>
	I0930 20:00:02.511870   26315 main.go:141] libmachine: (ha-805293-m02)     <console type='pty'>
	I0930 20:00:02.511881   26315 main.go:141] libmachine: (ha-805293-m02)       <target type='serial' port='0'/>
	I0930 20:00:02.511892   26315 main.go:141] libmachine: (ha-805293-m02)     </console>
	I0930 20:00:02.511901   26315 main.go:141] libmachine: (ha-805293-m02)     <rng model='virtio'>
	I0930 20:00:02.511910   26315 main.go:141] libmachine: (ha-805293-m02)       <backend model='random'>/dev/random</backend>
	I0930 20:00:02.511924   26315 main.go:141] libmachine: (ha-805293-m02)     </rng>
	I0930 20:00:02.511933   26315 main.go:141] libmachine: (ha-805293-m02)     
	I0930 20:00:02.511939   26315 main.go:141] libmachine: (ha-805293-m02)     
	I0930 20:00:02.511949   26315 main.go:141] libmachine: (ha-805293-m02)   </devices>
	I0930 20:00:02.511958   26315 main.go:141] libmachine: (ha-805293-m02) </domain>
	I0930 20:00:02.511969   26315 main.go:141] libmachine: (ha-805293-m02) 
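Everything from <domain type='kvm'> down to </domain> above is the libvirt definition libmachine submits for the m02 VM. Doing the same by hand would look roughly like the following, assuming the XML has been saved to ha-805293-m02.xml (the file name is illustrative):

  # Define the domain from the dumped XML, boot it, and ask libvirt for its lease.
  virsh -c qemu:///system define ha-805293-m02.xml
  virsh -c qemu:///system start ha-805293-m02
  virsh -c qemu:///system domifaddr ha-805293-m02 --source lease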
	I0930 20:00:02.519423   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:35:68:69 in network default
	I0930 20:00:02.520096   26315 main.go:141] libmachine: (ha-805293-m02) Ensuring networks are active...
	I0930 20:00:02.520113   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:02.521080   26315 main.go:141] libmachine: (ha-805293-m02) Ensuring network default is active
	I0930 20:00:02.521471   26315 main.go:141] libmachine: (ha-805293-m02) Ensuring network mk-ha-805293 is active
	I0930 20:00:02.521811   26315 main.go:141] libmachine: (ha-805293-m02) Getting domain xml...
	I0930 20:00:02.522473   26315 main.go:141] libmachine: (ha-805293-m02) Creating domain...
	I0930 20:00:03.765540   26315 main.go:141] libmachine: (ha-805293-m02) Waiting to get IP...
	I0930 20:00:03.766353   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:03.766729   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:03.766750   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:03.766699   26664 retry.go:31] will retry after 241.920356ms: waiting for machine to come up
	I0930 20:00:04.010129   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:04.010801   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:04.010826   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:04.010761   26664 retry.go:31] will retry after 344.430245ms: waiting for machine to come up
	I0930 20:00:04.356311   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:04.356795   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:04.356815   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:04.356767   26664 retry.go:31] will retry after 377.488147ms: waiting for machine to come up
	I0930 20:00:04.736359   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:04.736817   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:04.736839   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:04.736768   26664 retry.go:31] will retry after 400.421105ms: waiting for machine to come up
	I0930 20:00:05.138514   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:05.139019   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:05.139050   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:05.138967   26664 retry.go:31] will retry after 547.144087ms: waiting for machine to come up
	I0930 20:00:05.688116   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:05.688838   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:05.688865   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:05.688769   26664 retry.go:31] will retry after 610.482897ms: waiting for machine to come up
	I0930 20:00:06.301403   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:06.301917   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:06.301945   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:06.301866   26664 retry.go:31] will retry after 792.553977ms: waiting for machine to come up
	I0930 20:00:07.096834   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:07.097300   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:07.097331   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:07.097234   26664 retry.go:31] will retry after 1.20008256s: waiting for machine to come up
	I0930 20:00:08.299714   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:08.300169   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:08.300191   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:08.300137   26664 retry.go:31] will retry after 1.678792143s: waiting for machine to come up
	I0930 20:00:09.980216   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:09.980657   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:09.980685   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:09.980618   26664 retry.go:31] will retry after 2.098959289s: waiting for machine to come up
	I0930 20:00:12.080886   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:12.081433   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:12.081474   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:12.081377   26664 retry.go:31] will retry after 2.748866897s: waiting for machine to come up
	I0930 20:00:14.833188   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:14.833722   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:14.833748   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:14.833682   26664 retry.go:31] will retry after 2.379918836s: waiting for machine to come up
	I0930 20:00:17.215678   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:17.216060   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find current IP address of domain ha-805293-m02 in network mk-ha-805293
	I0930 20:00:17.216093   26315 main.go:141] libmachine: (ha-805293-m02) DBG | I0930 20:00:17.215999   26664 retry.go:31] will retry after 4.355514313s: waiting for machine to come up
	I0930 20:00:21.576523   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.577032   26315 main.go:141] libmachine: (ha-805293-m02) Found IP for machine: 192.168.39.220
	I0930 20:00:21.577053   26315 main.go:141] libmachine: (ha-805293-m02) Reserving static IP address...
	I0930 20:00:21.577065   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has current primary IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.577388   26315 main.go:141] libmachine: (ha-805293-m02) DBG | unable to find host DHCP lease matching {name: "ha-805293-m02", mac: "52:54:00:fe:f4:56", ip: "192.168.39.220"} in network mk-ha-805293
	I0930 20:00:21.655408   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Getting to WaitForSSH function...
	I0930 20:00:21.655444   26315 main.go:141] libmachine: (ha-805293-m02) Reserved static IP address: 192.168.39.220
	I0930 20:00:21.655509   26315 main.go:141] libmachine: (ha-805293-m02) Waiting for SSH to be available...
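The string of "will retry after ..." lines above is libmachine waiting for the VM to pick up a DHCP lease on the mk-ha-805293 network, matched by MAC address. A rough shell equivalent of that wait (MAC and network name taken from the log; timeout and interval are illustrative):

  MAC=52:54:00:fe:f4:56
  # Poll the libvirt network's DHCP leases until one matches the VM's MAC.
  for i in $(seq 1 120); do
    IP=$(virsh -c qemu:///system net-dhcp-leases mk-ha-805293 \
           | awk -v mac="$MAC" '$3 == mac {split($5, a, "/"); print a[1]}')
    if [ -n "$IP" ]; then echo "lease found: $IP"; break; fi
    sleep 2
  done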
	I0930 20:00:21.658005   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.658453   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:21.658491   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.658732   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Using SSH client type: external
	I0930 20:00:21.658759   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa (-rw-------)
	I0930 20:00:21.658792   26315 main.go:141] libmachine: (ha-805293-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 20:00:21.658808   26315 main.go:141] libmachine: (ha-805293-m02) DBG | About to run SSH command:
	I0930 20:00:21.658825   26315 main.go:141] libmachine: (ha-805293-m02) DBG | exit 0
	I0930 20:00:21.787681   26315 main.go:141] libmachine: (ha-805293-m02) DBG | SSH cmd err, output: <nil>: 
	I0930 20:00:21.788011   26315 main.go:141] libmachine: (ha-805293-m02) KVM machine creation complete!
	I0930 20:00:21.788252   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetConfigRaw
	I0930 20:00:21.788786   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:21.788970   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:21.789203   26315 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 20:00:21.789220   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetState
	I0930 20:00:21.790562   26315 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 20:00:21.790578   26315 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 20:00:21.790584   26315 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 20:00:21.790592   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:21.792832   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.793247   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:21.793275   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.793444   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:21.793624   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:21.793794   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:21.793936   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:21.794099   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:00:21.794370   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0930 20:00:21.794384   26315 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 20:00:21.906923   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 20:00:21.906949   26315 main.go:141] libmachine: Detecting the provisioner...
	I0930 20:00:21.906961   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:21.910153   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.910565   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:21.910596   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:21.910764   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:21.910979   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:21.911241   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:21.911375   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:21.911534   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:00:21.911713   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0930 20:00:21.911726   26315 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 20:00:22.024080   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 20:00:22.024153   26315 main.go:141] libmachine: found compatible host: buildroot
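The "cat /etc/os-release" run over SSH just above is how libmachine picks a provisioner: matching ID=buildroot selects the buildroot provisioner. A minimal sketch of the same branch (the non-buildroot arms are illustrative, not part of the minikube flow):

  # Source the os-release file and branch on the distro ID, as the detector does.
  . /etc/os-release
  case "$ID" in
    buildroot)      echo "buildroot provisioner" ;;
    ubuntu|debian)  echo "deb-based provisioner" ;;
    *)              echo "no compatible provisioner for ID=$ID" >&2 ;;
  esac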
	I0930 20:00:22.024160   26315 main.go:141] libmachine: Provisioning with buildroot...
	I0930 20:00:22.024170   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetMachineName
	I0930 20:00:22.024471   26315 buildroot.go:166] provisioning hostname "ha-805293-m02"
	I0930 20:00:22.024504   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetMachineName
	I0930 20:00:22.024708   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.027328   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.027816   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.027846   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.028043   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:22.028244   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.028415   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.028559   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:22.028711   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:00:22.028924   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0930 20:00:22.028951   26315 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-805293-m02 && echo "ha-805293-m02" | sudo tee /etc/hostname
	I0930 20:00:22.153517   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-805293-m02
	
	I0930 20:00:22.153558   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.156342   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.156867   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.156892   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.157066   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:22.157250   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.157398   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.157520   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:22.157658   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:00:22.157834   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0930 20:00:22.157856   26315 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-805293-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-805293-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-805293-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 20:00:22.280453   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
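
The provisioning steps above are shell snippets sent to the guest over SSH ("Using SSH client type: native"). A minimal sketch of that pattern, using golang.org/x/crypto/ssh directly rather than minikube's own ssh_runner, with the address, user and key path taken from the log:

package main

import (
    "fmt"
    "log"
    "os"
    "time"

    "golang.org/x/crypto/ssh"
)

// runOverSSH dials the guest and runs a single command, which is roughly
// what each "Run: ..." line in the log above corresponds to.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
    key, err := os.ReadFile(keyPath)
    if err != nil {
        return "", err
    }
    signer, err := ssh.ParsePrivateKey(key)
    if err != nil {
        return "", err
    }
    cfg := &ssh.ClientConfig{
        User:            user,
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        Timeout:         10 * time.Second,
    }
    client, err := ssh.Dial("tcp", addr, cfg)
    if err != nil {
        return "", err
    }
    defer client.Close()
    sess, err := client.NewSession()
    if err != nil {
        return "", err
    }
    defer sess.Close()
    out, err := sess.CombinedOutput(cmd)
    return string(out), err
}

func main() {
    out, err := runOverSSH("192.168.39.220:22", "docker",
        "/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa",
        "cat /etc/os-release")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Print(out)
}
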
	I0930 20:00:22.280490   26315 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 20:00:22.280513   26315 buildroot.go:174] setting up certificates
	I0930 20:00:22.280524   26315 provision.go:84] configureAuth start
	I0930 20:00:22.280537   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetMachineName
	I0930 20:00:22.280873   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetIP
	I0930 20:00:22.283731   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.284096   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.284121   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.284311   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.286698   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.287078   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.287108   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.287262   26315 provision.go:143] copyHostCerts
	I0930 20:00:22.287296   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:00:22.287337   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 20:00:22.287351   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:00:22.287424   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 20:00:22.287503   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:00:22.287521   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 20:00:22.287557   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:00:22.287594   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 20:00:22.287648   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:00:22.287664   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 20:00:22.287668   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:00:22.287689   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 20:00:22.287737   26315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.ha-805293-m02 san=[127.0.0.1 192.168.39.220 ha-805293-m02 localhost minikube]
	I0930 20:00:22.355076   26315 provision.go:177] copyRemoteCerts
	I0930 20:00:22.355131   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 20:00:22.355153   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.357993   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.358290   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.358317   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.358695   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:22.358872   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.358992   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:22.359090   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa Username:docker}
	I0930 20:00:22.445399   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 20:00:22.445470   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 20:00:22.469429   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 20:00:22.469516   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 20:00:22.492675   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 20:00:22.492763   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 20:00:22.515601   26315 provision.go:87] duration metric: took 235.062596ms to configureAuth
	I0930 20:00:22.515633   26315 buildroot.go:189] setting minikube options for container-runtime
	I0930 20:00:22.515833   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:00:22.515926   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.518627   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.519062   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.519101   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.519248   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:22.519447   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.519617   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.519768   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:22.519918   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:00:22.520077   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0930 20:00:22.520090   26315 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 20:00:22.744066   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 20:00:22.744092   26315 main.go:141] libmachine: Checking connection to Docker...
	I0930 20:00:22.744101   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetURL
	I0930 20:00:22.745446   26315 main.go:141] libmachine: (ha-805293-m02) DBG | Using libvirt version 6000000
	I0930 20:00:22.747635   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.748132   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.748161   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.748303   26315 main.go:141] libmachine: Docker is up and running!
	I0930 20:00:22.748319   26315 main.go:141] libmachine: Reticulating splines...
	I0930 20:00:22.748327   26315 client.go:171] duration metric: took 20.804148382s to LocalClient.Create
	I0930 20:00:22.748348   26315 start.go:167] duration metric: took 20.804213197s to libmachine.API.Create "ha-805293"
	I0930 20:00:22.748357   26315 start.go:293] postStartSetup for "ha-805293-m02" (driver="kvm2")
	I0930 20:00:22.748367   26315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 20:00:22.748386   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:22.748624   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 20:00:22.748654   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.750830   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.751166   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.751190   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.751299   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:22.751468   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.751612   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:22.751720   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa Username:docker}
	I0930 20:00:22.837496   26315 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 20:00:22.841510   26315 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 20:00:22.841546   26315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 20:00:22.841623   26315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 20:00:22.841717   26315 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 20:00:22.841730   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /etc/ssl/certs/148752.pem
	I0930 20:00:22.841843   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 20:00:22.851144   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:00:22.877058   26315 start.go:296] duration metric: took 128.687557ms for postStartSetup
	I0930 20:00:22.877104   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetConfigRaw
	I0930 20:00:22.877761   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetIP
	I0930 20:00:22.880570   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.880908   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.880931   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.881333   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:00:22.881547   26315 start.go:128] duration metric: took 20.956931205s to createHost
	I0930 20:00:22.881569   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:22.883882   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.884228   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:22.884246   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:22.884419   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:22.884601   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.884779   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:22.884913   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:22.885087   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:00:22.885252   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0930 20:00:22.885264   26315 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 20:00:23.000299   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727726422.960119850
	
	I0930 20:00:23.000326   26315 fix.go:216] guest clock: 1727726422.960119850
	I0930 20:00:23.000338   26315 fix.go:229] Guest: 2024-09-30 20:00:22.96011985 +0000 UTC Remote: 2024-09-30 20:00:22.881558413 +0000 UTC m=+66.452648359 (delta=78.561437ms)
	I0930 20:00:23.000357   26315 fix.go:200] guest clock delta is within tolerance: 78.561437ms
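
The clock check above runs date +%s.%N on the guest and compares the result with the host-side timestamp that fix.go prints as "Remote". A small sketch of that comparison, reusing the exact values from the log (the tolerance used here is a made-up placeholder, not minikube's actual limit):

package main

import (
    "fmt"
    "strconv"
    "strings"
    "time"
)

// clockDelta parses `date +%s.%N` output from the guest and returns the
// absolute difference from the given local reference time.
func clockDelta(guestOut string, local time.Time) (time.Duration, error) {
    parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
    sec, err := strconv.ParseInt(parts[0], 10, 64)
    if err != nil {
        return 0, err
    }
    var nsec int64
    if len(parts) == 2 {
        frac := (parts[1] + "000000000")[:9] // normalise the fraction to nanoseconds
        if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
            return 0, err
        }
    }
    d := time.Unix(sec, nsec).Sub(local)
    if d < 0 {
        d = -d
    }
    return d, nil
}

func main() {
    local := time.Date(2024, time.September, 30, 20, 0, 22, 881558413, time.UTC) // "Remote" from the log
    d, err := clockDelta("1727726422.960119850", local)
    if err != nil {
        panic(err)
    }
    fmt.Println(d, "within tolerance:", d < 2*time.Second) // prints 78.561437ms, matching the log
}
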
	I0930 20:00:23.000364   26315 start.go:83] releasing machines lock for "ha-805293-m02", held for 21.075876017s
	I0930 20:00:23.000382   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:23.000682   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetIP
	I0930 20:00:23.003439   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:23.003855   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:23.003882   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:23.006309   26315 out.go:177] * Found network options:
	I0930 20:00:23.008016   26315 out.go:177]   - NO_PROXY=192.168.39.3
	W0930 20:00:23.009484   26315 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 20:00:23.009519   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:23.010257   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:23.010450   26315 main.go:141] libmachine: (ha-805293-m02) Calling .DriverName
	I0930 20:00:23.010558   26315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 20:00:23.010606   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	W0930 20:00:23.010646   26315 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 20:00:23.010724   26315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 20:00:23.010747   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHHostname
	I0930 20:00:23.013581   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:23.013752   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:23.013960   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:23.013983   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:23.014161   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:23.014186   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:23.014187   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:23.014404   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:23.014410   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHPort
	I0930 20:00:23.014563   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:23.014595   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHKeyPath
	I0930 20:00:23.014659   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa Username:docker}
	I0930 20:00:23.014695   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetSSHUsername
	I0930 20:00:23.014791   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m02/id_rsa Username:docker}
	I0930 20:00:23.259199   26315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 20:00:23.264710   26315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 20:00:23.264772   26315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 20:00:23.281650   26315 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 20:00:23.281678   26315 start.go:495] detecting cgroup driver to use...
	I0930 20:00:23.281745   26315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 20:00:23.300954   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 20:00:23.318197   26315 docker.go:217] disabling cri-docker service (if available) ...
	I0930 20:00:23.318266   26315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 20:00:23.334729   26315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 20:00:23.351325   26315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 20:00:23.494840   26315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 20:00:23.659365   26315 docker.go:233] disabling docker service ...
	I0930 20:00:23.659442   26315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 20:00:23.673200   26315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 20:00:23.686244   26315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 20:00:23.816616   26315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 20:00:23.949421   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 20:00:23.963035   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 20:00:23.981793   26315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 20:00:23.981869   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:23.992506   26315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 20:00:23.992572   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:24.003215   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:24.013791   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:24.024890   26315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 20:00:24.036504   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:24.046845   26315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:24.063744   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:00:24.074710   26315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 20:00:24.084399   26315 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 20:00:24.084456   26315 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 20:00:24.097779   26315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 20:00:24.107679   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:00:24.245414   26315 ssh_runner.go:195] Run: sudo systemctl restart crio
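
The sed edits above pin the pause image to registry.k8s.io/pause:3.10, switch CRI-O's cgroup manager to cgroupfs and force conmon into the pod cgroup before crio is restarted. The same edits expressed as in-memory regexp replacements; the input below is a made-up minimal stand-in for 02-crio.conf, not the file actually shipped in the guest image:

package main

import (
    "fmt"
    "regexp"
)

// Illustrative stand-in for /etc/crio/crio.conf.d/02-crio.conf.
const confIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

func main() {
    conf := confIn
    // Mirror the sed commands from the log, in the same order.
    conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
        ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
    conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
        ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).
        ReplaceAllString(conf, "") // drop any existing conmon_cgroup line
    conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
        ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"") // re-add it right after cgroup_manager
    fmt.Print(conf)
}
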
	I0930 20:00:24.332691   26315 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 20:00:24.332763   26315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 20:00:24.337609   26315 start.go:563] Will wait 60s for crictl version
	I0930 20:00:24.337672   26315 ssh_runner.go:195] Run: which crictl
	I0930 20:00:24.341369   26315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 20:00:24.379294   26315 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 20:00:24.379384   26315 ssh_runner.go:195] Run: crio --version
	I0930 20:00:24.407964   26315 ssh_runner.go:195] Run: crio --version
	I0930 20:00:24.438040   26315 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 20:00:24.439799   26315 out.go:177]   - env NO_PROXY=192.168.39.3
	I0930 20:00:24.441127   26315 main.go:141] libmachine: (ha-805293-m02) Calling .GetIP
	I0930 20:00:24.443641   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:24.443999   26315 main.go:141] libmachine: (ha-805293-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:f4:56", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:00:16 +0000 UTC Type:0 Mac:52:54:00:fe:f4:56 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-805293-m02 Clientid:01:52:54:00:fe:f4:56}
	I0930 20:00:24.444023   26315 main.go:141] libmachine: (ha-805293-m02) DBG | domain ha-805293-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:fe:f4:56 in network mk-ha-805293
	I0930 20:00:24.444256   26315 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 20:00:24.448441   26315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
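
The bash one-liner above makes the host.minikube.internal entry idempotent: drop any existing line for the name, append a fresh "IP<TAB>name" line, then copy the result back over /etc/hosts. The same logic applied to a string, as a sketch rather than minikube's code:

package main

import (
    "fmt"
    "strings"
)

// ensureHostsEntry removes any stale line ending in "<TAB>name" and appends
// the desired "ip<TAB>name" entry, mirroring the grep -v / echo / cp pipeline.
func ensureHostsEntry(hosts, ip, name string) string {
    var kept []string
    for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
        if strings.HasSuffix(line, "\t"+name) {
            continue // drop any existing entry for this name
        }
        kept = append(kept, line)
    }
    kept = append(kept, ip+"\t"+name)
    return strings.Join(kept, "\n") + "\n"
}

func main() {
    const hosts = "127.0.0.1\tlocalhost\n10.0.2.2\thost.minikube.internal\n" // illustrative contents
    fmt.Print(ensureHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
}
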
	I0930 20:00:24.460479   26315 mustload.go:65] Loading cluster: ha-805293
	I0930 20:00:24.460673   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:00:24.460911   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:24.460946   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:24.475845   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41627
	I0930 20:00:24.476505   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:24.476991   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:24.477013   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:24.477336   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:24.477545   26315 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 20:00:24.479156   26315 host.go:66] Checking if "ha-805293" exists ...
	I0930 20:00:24.479566   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:24.479614   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:24.494163   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38137
	I0930 20:00:24.494690   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:24.495134   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:24.495156   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:24.495462   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:24.495672   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:00:24.495840   26315 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293 for IP: 192.168.39.220
	I0930 20:00:24.495854   26315 certs.go:194] generating shared ca certs ...
	I0930 20:00:24.495872   26315 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:00:24.495990   26315 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 20:00:24.496030   26315 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 20:00:24.496038   26315 certs.go:256] generating profile certs ...
	I0930 20:00:24.496099   26315 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key
	I0930 20:00:24.496121   26315 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.25883032
	I0930 20:00:24.496134   26315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.25883032 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.3 192.168.39.220 192.168.39.254]
	I0930 20:00:24.563341   26315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.25883032 ...
	I0930 20:00:24.563370   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.25883032: {Name:mk8534a0b1f65471035122400012ca9f075cb68b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:00:24.563553   26315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.25883032 ...
	I0930 20:00:24.563580   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.25883032: {Name:mkdff9b5cf02688bad7cef701430e9d45f427c09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:00:24.563669   26315 certs.go:381] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.25883032 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt
	I0930 20:00:24.563804   26315 certs.go:385] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.25883032 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key
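
The apiserver serving certificate generated above carries every address a client might use: the cluster service IP, loopback, the node addresses and the HA VIP, exactly the SAN list in the generation line. A compact sketch of building such a certificate with crypto/x509; it is self-signed here for brevity (in the log it is signed with the shared minikubeCA key) and the subject is purely illustrative:

package main

import (
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    // IP SANs copied from the log line above.
    ips := []net.IP{
        net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
        net.ParseIP("192.168.39.3"), net.ParseIP("192.168.39.220"), net.ParseIP("192.168.39.254"),
    }
    key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    if err != nil {
        panic(err)
    }
    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{CommonName: "minikube"}, // illustrative subject
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        IPAddresses:  ips,
        // DNS SANs (e.g. kubernetes.default.svc) would normally be added too;
        // the log line above lists only IP SANs, so they are omitted here.
    }
    der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    if err != nil {
        panic(err)
    }
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
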
	I0930 20:00:24.563922   26315 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key
	I0930 20:00:24.563935   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 20:00:24.563949   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 20:00:24.563961   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 20:00:24.563971   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 20:00:24.563981   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 20:00:24.563992   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 20:00:24.564001   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 20:00:24.564012   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 20:00:24.564058   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 20:00:24.564087   26315 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 20:00:24.564096   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 20:00:24.564116   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 20:00:24.564137   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 20:00:24.564157   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 20:00:24.564196   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:00:24.564221   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem -> /usr/share/ca-certificates/14875.pem
	I0930 20:00:24.564233   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /usr/share/ca-certificates/148752.pem
	I0930 20:00:24.564246   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:00:24.564276   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:00:24.567674   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:24.568209   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:00:24.568244   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:24.568458   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:00:24.568679   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:00:24.568859   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:00:24.569017   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:00:24.647988   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0930 20:00:24.652578   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0930 20:00:24.663570   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0930 20:00:24.667502   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0930 20:00:24.678300   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0930 20:00:24.682636   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0930 20:00:24.692556   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0930 20:00:24.697407   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0930 20:00:24.708600   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0930 20:00:24.716272   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0930 20:00:24.726239   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0930 20:00:24.730151   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0930 20:00:24.740007   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 20:00:24.764135   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 20:00:24.787511   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 20:00:24.811921   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 20:00:24.835050   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0930 20:00:24.858111   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 20:00:24.881164   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 20:00:24.905084   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 20:00:24.930204   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 20:00:24.954976   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 20:00:24.979893   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 20:00:25.004028   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0930 20:00:25.020509   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0930 20:00:25.037112   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0930 20:00:25.053614   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0930 20:00:25.069699   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0930 20:00:25.087062   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0930 20:00:25.103141   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0930 20:00:25.119089   26315 ssh_runner.go:195] Run: openssl version
	I0930 20:00:25.124587   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 20:00:25.135122   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 20:00:25.139645   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 20:00:25.139709   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 20:00:25.145556   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 20:00:25.156636   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 20:00:25.167339   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 20:00:25.171719   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 20:00:25.171780   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 20:00:25.177212   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 20:00:25.188055   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 20:00:25.199114   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:00:25.203444   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:00:25.203514   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:00:25.209227   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
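
The openssl/ln steps above create the <subject-hash>.0 symlinks that OpenSSL uses to look certificates up in /etc/ssl/certs. A sketch that shells out to openssl the same way; it assumes openssl is on PATH and uses the paths from the log:

package main

import (
    "fmt"
    "os"
    "os/exec"
    "path/filepath"
    "strings"
)

// linkByHash reproduces the `openssl x509 -hash` + `ln -fs` dance: compute the
// subject hash of the certificate and point <hash>.0 in certDir at it.
func linkByHash(certPath, certDir string) error {
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    if err != nil {
        return err
    }
    hash := strings.TrimSpace(string(out))
    link := filepath.Join(certDir, hash+".0")
    _ = os.Remove(link) // mimic ln -fs: replace any existing link
    return os.Symlink(certPath, link)
}

func main() {
    if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}
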
	I0930 20:00:25.220164   26315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 20:00:25.224532   26315 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 20:00:25.224591   26315 kubeadm.go:934] updating node {m02 192.168.39.220 8443 v1.31.1 crio true true} ...
	I0930 20:00:25.224694   26315 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-805293-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
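
The kubelet drop-in shown above is ordinary systemd override text: the empty ExecStart= clears the packaged command before the minikube-specific one is set. A sketch of rendering it with text/template, filled with the node name, node IP and binary directory from this run (not minikube's actual template code):

package main

import (
    "os"
    "text/template"
)

type kubeletOpts struct {
    BinDir, NodeName, NodeIP string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
    t := template.Must(template.New("kubelet").Parse(dropIn))
    _ = t.Execute(os.Stdout, kubeletOpts{
        BinDir:   "/var/lib/minikube/binaries/v1.31.1",
        NodeName: "ha-805293-m02",
        NodeIP:   "192.168.39.220",
    })
}
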
	I0930 20:00:25.224719   26315 kube-vip.go:115] generating kube-vip config ...
	I0930 20:00:25.224757   26315 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 20:00:25.242207   26315 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 20:00:25.242306   26315 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
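
The static pod manifest above runs kube-vip on each control-plane node; per its env block the elected leader answers ARP for 192.168.39.254 and load-balances port 8443 across the API servers. A quick stand-alone probe of that VIP, just a TLS handshake rather than a real health check:

package main

import (
    "crypto/tls"
    "fmt"
    "net"
    "time"
)

func main() {
    d := &net.Dialer{Timeout: 5 * time.Second}
    conn, err := tls.DialWithDialer(d, "tcp", "192.168.39.254:8443", &tls.Config{
        InsecureSkipVerify: true, // we only care that something answers on the VIP
    })
    if err != nil {
        fmt.Println("VIP not reachable:", err)
        return
    }
    defer conn.Close()
    state := conn.ConnectionState()
    if len(state.PeerCertificates) > 0 {
        fmt.Println("VIP is serving, peer cert subject:", state.PeerCertificates[0].Subject)
    } else {
        fmt.Println("VIP is serving")
    }
}
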
	I0930 20:00:25.242370   26315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 20:00:25.253224   26315 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0930 20:00:25.253326   26315 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0930 20:00:25.264511   26315 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0930 20:00:25.264547   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 20:00:25.264590   26315 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0930 20:00:25.264606   26315 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0930 20:00:25.264613   26315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 20:00:25.269385   26315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0930 20:00:25.269423   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0930 20:00:26.288255   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 20:00:26.288359   26315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 20:00:26.293355   26315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0930 20:00:26.293391   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0930 20:00:26.370842   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 20:00:26.408125   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 20:00:26.408233   26315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 20:00:26.414764   26315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0930 20:00:26.414804   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
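
Each binary above is fetched with a checksum=file: query pointing at the published .sha256 digest, so the download is verified before it is copied to the guest. A sketch of that verification step on the cached files; the real check is done by minikube's download code, this only illustrates the idea:

package main

import (
    "crypto/sha256"
    "encoding/hex"
    "fmt"
    "io"
    "os"
    "strings"
)

// verifySHA256 compares a file's SHA-256 digest against the published
// <name>.sha256 file (which contains the hex digest, optionally followed by a filename).
func verifySHA256(binPath, sumPath string) error {
    sumBytes, err := os.ReadFile(sumPath)
    if err != nil {
        return err
    }
    want := strings.Fields(strings.TrimSpace(string(sumBytes)))[0]

    f, err := os.Open(binPath)
    if err != nil {
        return err
    }
    defer f.Close()
    h := sha256.New()
    if _, err := io.Copy(h, f); err != nil {
        return err
    }
    got := hex.EncodeToString(h.Sum(nil))
    if got != want {
        return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
    }
    return nil
}

func main() {
    // Cache paths from the log; the .sha256 sidecar path is assumed for illustration.
    err := verifySHA256(
        "/home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubelet",
        "/home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubelet.sha256",
    )
    fmt.Println("verify:", err)
}
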
	I0930 20:00:26.848584   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0930 20:00:26.858015   26315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0930 20:00:26.874053   26315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 20:00:26.890616   26315 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 20:00:26.906680   26315 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 20:00:26.910431   26315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 20:00:26.921656   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:00:27.039123   26315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:00:27.056773   26315 host.go:66] Checking if "ha-805293" exists ...
	I0930 20:00:27.057124   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:00:27.057173   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:00:27.072237   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34135
	I0930 20:00:27.072852   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:00:27.073292   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:00:27.073321   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:00:27.073651   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:00:27.073859   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:00:27.073989   26315 start.go:317] joinCluster: &{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:00:27.074091   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0930 20:00:27.074108   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:00:27.076745   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:27.077111   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:00:27.077130   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:00:27.077207   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:00:27.077370   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:00:27.077633   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:00:27.077784   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:00:27.230308   26315 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:00:27.230355   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cnuzai.6xkseww2aia5hxhb --discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-805293-m02 --control-plane --apiserver-advertise-address=192.168.39.220 --apiserver-bind-port=8443"
	I0930 20:00:50.312960   26315 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cnuzai.6xkseww2aia5hxhb --discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-805293-m02 --control-plane --apiserver-advertise-address=192.168.39.220 --apiserver-bind-port=8443": (23.082567099s)
	I0930 20:00:50.313004   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0930 20:00:50.837990   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-805293-m02 minikube.k8s.io/updated_at=2024_09_30T20_00_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022 minikube.k8s.io/name=ha-805293 minikube.k8s.io/primary=false
	I0930 20:00:50.975697   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-805293-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0930 20:00:51.102316   26315 start.go:319] duration metric: took 24.028319202s to joinCluster
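
The joinCluster step above reduces to three shell commands: ask the existing control plane for a fresh join command (kubeadm token create --print-join-command --ttl=0), replay it on the new machine with the extra control-plane flags, then enable and start kubelet. A minimal Go sketch of that sequence follows; it is illustrative only, collapses onto one host the steps minikube really runs on two machines over SSH, drops the --cri-socket and --node-name flags, and assumes bash plus the minikube-provisioned kubeadm path from the log. The run helper is hypothetical.

// Illustrative sketch of the join sequence shown in the log, not minikube's code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a shell command and panics with its combined output on failure.
func run(cmd string) string {
	out, err := exec.Command("bash", "-c", cmd).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s: %v\n%s", cmd, err, out))
	}
	return strings.TrimSpace(string(out))
}

func main() {
	path := `PATH="/var/lib/minikube/binaries/v1.31.1:$PATH"`

	// 1. On the primary: print a join command with a non-expiring token.
	join := run(`sudo env ` + path + ` kubeadm token create --print-join-command --ttl=0`)

	// 2. On the joining node: replay it with the control-plane flags from the log.
	run(`sudo env ` + path + ` ` + join +
		` --ignore-preflight-errors=all --control-plane` +
		` --apiserver-advertise-address=192.168.39.220 --apiserver-bind-port=8443`)

	// 3. Make kubelet persistent across reboots and start it now.
	run(`sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet`)

	fmt.Println("control-plane join complete")
}

The label and taint calls that follow in the log are ordinary kubectl invocations against the freshly joined node and need nothing beyond this.
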
	I0930 20:00:51.102444   26315 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:00:51.102695   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:00:51.104462   26315 out.go:177] * Verifying Kubernetes components...
	I0930 20:00:51.105980   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:00:51.368169   26315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:00:51.414670   26315 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:00:51.415012   26315 kapi.go:59] client config for ha-805293: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key", CAFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 20:00:51.415098   26315 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.3:8443
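
The client config above is loaded from the Jenkins kubeconfig, which points at the HA virtual IP (192.168.39.254); because only the primary is known to be healthy at this point, the host is overridden to 192.168.39.3 before any polling starts. A minimal client-go sketch of the same pattern, assuming the kubeconfig path from the log and that client-go is available:

// Minimal sketch: load the kubeconfig, override the stale VIP host, build a clientset.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19736-7672/kubeconfig")
	if err != nil {
		panic(err)
	}
	// The kubeconfig targets the HA virtual IP; talk to the primary directly instead.
	cfg.Host = "https://192.168.39.3:8443"

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	node, err := client.CoreV1().Nodes().Get(context.Background(), "ha-805293-m02", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s: %d conditions\n", node.Name, len(node.Status.Conditions))
}
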
	I0930 20:00:51.415444   26315 node_ready.go:35] waiting up to 6m0s for node "ha-805293-m02" to be "Ready" ...
	I0930 20:00:51.415604   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:51.415616   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:51.415627   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:51.415634   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:51.426106   26315 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0930 20:00:51.915725   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:51.915750   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:51.915764   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:51.915771   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:51.920139   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:00:52.416072   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:52.416092   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:52.416100   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:52.416104   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:52.419738   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:52.915687   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:52.915720   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:52.915733   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:52.915739   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:52.920070   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:00:53.415992   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:53.416013   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:53.416021   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:53.416027   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:53.419709   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:53.420257   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:00:53.915641   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:53.915662   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:53.915670   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:53.915675   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:53.918936   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:54.415947   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:54.415969   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:54.415978   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:54.415983   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:54.419470   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:54.916559   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:54.916594   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:54.916604   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:54.916609   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:54.920769   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:00:55.415723   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:55.415749   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:55.415760   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:55.415767   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:55.419960   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:00:55.420655   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:00:55.915703   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:55.915725   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:55.915732   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:55.915737   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:55.918792   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:56.415726   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:56.415759   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:56.415768   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:56.415771   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:56.419845   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:00:56.915720   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:56.915749   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:56.915761   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:56.915768   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:56.919114   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:57.415890   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:57.415920   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:57.415930   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:57.415936   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:57.419326   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:57.916001   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:57.916024   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:57.916032   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:57.916036   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:57.919385   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:57.920066   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:00:58.416036   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:58.416058   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:58.416066   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:58.416071   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:58.444113   26315 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0930 20:00:58.915821   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:58.915851   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:58.915865   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:58.915872   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:58.919943   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:00:59.415861   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:59.415883   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:59.415892   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:59.415896   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:59.419554   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:59.916644   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:00:59.916665   26315 round_trippers.go:469] Request Headers:
	I0930 20:00:59.916673   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:00:59.916681   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:00:59.920228   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:00:59.920834   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:01:00.415729   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:00.415764   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:00.415772   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:00.415777   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:00.419232   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:00.915725   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:00.915748   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:00.915758   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:00.915764   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:00.920882   26315 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 20:01:01.416215   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:01.416240   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:01.416249   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:01.416252   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:01.419889   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:01.916651   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:01.916673   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:01.916680   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:01.916686   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:01.920422   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:01.920906   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:01:02.416417   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:02.416447   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:02.416458   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:02.416465   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:02.420384   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:02.916614   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:02.916639   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:02.916647   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:02.916651   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:02.920435   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:03.416222   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:03.416246   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:03.416255   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:03.416258   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:03.419787   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:03.915698   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:03.915726   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:03.915735   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:03.915739   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:03.919427   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:04.415764   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:04.415788   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:04.415797   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:04.415801   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:04.419012   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:04.419574   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:01:04.915824   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:04.915846   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:04.915855   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:04.915859   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:04.920091   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:01:05.415756   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:05.415780   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:05.415787   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:05.415791   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:05.421271   26315 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 20:01:05.915718   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:05.915739   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:05.915747   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:05.915751   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:05.919141   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:06.415741   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:06.415762   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:06.415770   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:06.415774   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:06.418886   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:06.419650   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:01:06.916104   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:06.916133   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:06.916144   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:06.916149   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:06.919406   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:07.416605   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:07.416630   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:07.416639   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:07.416646   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:07.419940   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:07.915753   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:07.915780   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:07.915790   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:07.915795   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:07.919449   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:08.416606   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:08.416630   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:08.416638   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:08.416643   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:08.420794   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:01:08.421339   26315 node_ready.go:53] node "ha-805293-m02" has status "Ready":"False"
	I0930 20:01:08.915715   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:08.915738   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:08.915746   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:08.915752   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:08.919389   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:09.416586   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:09.416611   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.416621   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.416628   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.419914   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:09.916640   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:09.916661   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.916669   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.916673   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.919743   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:09.920355   26315 node_ready.go:49] node "ha-805293-m02" has status "Ready":"True"
	I0930 20:01:09.920385   26315 node_ready.go:38] duration metric: took 18.504913608s for node "ha-805293-m02" to be "Ready" ...
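
The 18.5s of GETs above is a plain poll: fetch the node object every 500ms and stop once its Ready condition reports True. A sketch of that loop with client-go's wait helpers; waitNodeReady is a hypothetical helper and assumes a clientset built as in the earlier sketch.

// Sketch of the node readiness poll: every 500ms, up to the 6m budget from the log.
package hawait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(client kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API hiccups as "not yet" and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
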
	I0930 20:01:09.920395   26315 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 20:01:09.920461   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:01:09.920470   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.920477   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.920481   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.924944   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:01:09.930623   26315 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x7zjp" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.930723   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-x7zjp
	I0930 20:01:09.930731   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.930739   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.930743   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.933787   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:09.934467   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:09.934486   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.934497   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.934502   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.936935   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.937372   26315 pod_ready.go:93] pod "coredns-7c65d6cfc9-x7zjp" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:09.937389   26315 pod_ready.go:82] duration metric: took 6.738618ms for pod "coredns-7c65d6cfc9-x7zjp" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.937399   26315 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-z4bkv" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.937452   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-z4bkv
	I0930 20:01:09.937460   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.937467   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.937471   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.939718   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.940345   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:09.940360   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.940367   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.940372   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.942825   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.943347   26315 pod_ready.go:93] pod "coredns-7c65d6cfc9-z4bkv" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:09.943362   26315 pod_ready.go:82] duration metric: took 5.957941ms for pod "coredns-7c65d6cfc9-z4bkv" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.943374   26315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.943449   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-805293
	I0930 20:01:09.943477   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.943493   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.943502   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.946145   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.946815   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:09.946829   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.946837   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.946841   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.949619   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.950200   26315 pod_ready.go:93] pod "etcd-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:09.950222   26315 pod_ready.go:82] duration metric: took 6.836708ms for pod "etcd-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.950233   26315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.950305   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-805293-m02
	I0930 20:01:09.950326   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.950334   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.950340   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.953306   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.953792   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:09.953806   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:09.953813   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:09.953817   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:09.956400   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:01:09.956812   26315 pod_ready.go:93] pod "etcd-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:09.956829   26315 pod_ready.go:82] duration metric: took 6.588184ms for pod "etcd-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:09.956845   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:10.117233   26315 request.go:632] Waited for 160.320722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293
	I0930 20:01:10.117300   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293
	I0930 20:01:10.117306   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:10.117318   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:10.117324   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:10.120940   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:10.317057   26315 request.go:632] Waited for 195.415809ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:10.317127   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:10.317135   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:10.317156   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:10.317180   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:10.320648   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:10.321373   26315 pod_ready.go:93] pod "kube-apiserver-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:10.321392   26315 pod_ready.go:82] duration metric: took 364.537566ms for pod "kube-apiserver-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:10.321402   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:10.517507   26315 request.go:632] Waited for 196.023112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293-m02
	I0930 20:01:10.517576   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293-m02
	I0930 20:01:10.517583   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:10.517594   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:10.517601   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:10.521299   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:10.717299   26315 request.go:632] Waited for 195.382491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:10.717366   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:10.717372   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:10.717379   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:10.717384   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:10.720883   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:10.721468   26315 pod_ready.go:93] pod "kube-apiserver-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:10.721488   26315 pod_ready.go:82] duration metric: took 400.07752ms for pod "kube-apiserver-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:10.721497   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:10.917490   26315 request.go:632] Waited for 195.929177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293
	I0930 20:01:10.917554   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293
	I0930 20:01:10.917574   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:10.917606   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:10.917617   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:10.921610   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:11.116693   26315 request.go:632] Waited for 194.297174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:11.116753   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:11.116759   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:11.116766   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:11.116769   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:11.120537   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:11.121044   26315 pod_ready.go:93] pod "kube-controller-manager-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:11.121062   26315 pod_ready.go:82] duration metric: took 399.55959ms for pod "kube-controller-manager-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:11.121074   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:11.317266   26315 request.go:632] Waited for 196.133826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293-m02
	I0930 20:01:11.317335   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293-m02
	I0930 20:01:11.317342   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:11.317351   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:11.317358   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:11.321265   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:11.517020   26315 request.go:632] Waited for 195.154322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:11.517082   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:11.517089   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:11.517098   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:11.517103   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:11.520779   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:11.521296   26315 pod_ready.go:93] pod "kube-controller-manager-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:11.521319   26315 pod_ready.go:82] duration metric: took 400.238082ms for pod "kube-controller-manager-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:11.521335   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6gnt4" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:11.716800   26315 request.go:632] Waited for 195.390285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gnt4
	I0930 20:01:11.716888   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gnt4
	I0930 20:01:11.716896   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:11.716906   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:11.716911   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:11.720246   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:11.917422   26315 request.go:632] Waited for 196.372605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:11.917500   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:11.917508   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:11.917518   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:11.917526   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:11.921353   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:11.921887   26315 pod_ready.go:93] pod "kube-proxy-6gnt4" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:11.921912   26315 pod_ready.go:82] duration metric: took 400.568991ms for pod "kube-proxy-6gnt4" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:11.921925   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vptrg" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:12.116927   26315 request.go:632] Waited for 194.932043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vptrg
	I0930 20:01:12.117009   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vptrg
	I0930 20:01:12.117015   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:12.117022   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:12.117026   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:12.121372   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:01:12.317480   26315 request.go:632] Waited for 195.395103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:12.317541   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:12.317546   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:12.317553   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:12.317556   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:12.321223   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:12.321777   26315 pod_ready.go:93] pod "kube-proxy-vptrg" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:12.321796   26315 pod_ready.go:82] duration metric: took 399.864157ms for pod "kube-proxy-vptrg" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:12.321806   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:12.516927   26315 request.go:632] Waited for 195.058252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293
	I0930 20:01:12.517009   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293
	I0930 20:01:12.517015   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:12.517022   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:12.517029   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:12.520681   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:12.717635   26315 request.go:632] Waited for 196.390201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:12.717694   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:01:12.717698   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:12.717706   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:12.717714   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:12.721311   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:12.721886   26315 pod_ready.go:93] pod "kube-scheduler-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:12.721903   26315 pod_ready.go:82] duration metric: took 400.091381ms for pod "kube-scheduler-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:12.721913   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:12.917094   26315 request.go:632] Waited for 195.106579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293-m02
	I0930 20:01:12.917184   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293-m02
	I0930 20:01:12.917193   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:12.917203   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:12.917212   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:12.921090   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:13.117142   26315 request.go:632] Waited for 195.345819ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:13.117216   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:01:13.117221   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:13.117229   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:13.117232   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:13.120777   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:13.121215   26315 pod_ready.go:93] pod "kube-scheduler-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:01:13.121232   26315 pod_ready.go:82] duration metric: took 399.313081ms for pod "kube-scheduler-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:01:13.121242   26315 pod_ready.go:39] duration metric: took 3.200834368s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
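
The repeated "Waited for ... due to client-side throttling" messages above are produced by client-go itself, not by API Priority and Fairness on the server: the dumped rest.Config leaves QPS and Burst at 0, so the default client-side limiter (5 requests per second, burst 10) applies to the pod-by-pod checks. A sketch of how a caller could raise those limits before building the clientset; the values 50 and 100 are arbitrary illustrations.

// Sketch only: widen the client-side rate limiter to avoid the ~200ms waits above.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19736-7672/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // default is 5 requests/second when left at 0
	cfg.Burst = 100 // default burst is 10 when left at 0

	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}
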
	I0930 20:01:13.121266   26315 api_server.go:52] waiting for apiserver process to appear ...
	I0930 20:01:13.121324   26315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 20:01:13.137767   26315 api_server.go:72] duration metric: took 22.035280113s to wait for apiserver process to appear ...
	I0930 20:01:13.137797   26315 api_server.go:88] waiting for apiserver healthz status ...
	I0930 20:01:13.137828   26315 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I0930 20:01:13.141994   26315 api_server.go:279] https://192.168.39.3:8443/healthz returned 200:
	ok
	I0930 20:01:13.142067   26315 round_trippers.go:463] GET https://192.168.39.3:8443/version
	I0930 20:01:13.142074   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:13.142082   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:13.142090   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:13.142859   26315 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0930 20:01:13.142975   26315 api_server.go:141] control plane version: v1.31.1
	I0930 20:01:13.142993   26315 api_server.go:131] duration metric: took 5.190596ms to wait for apiserver health ...
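
The health probe above is an HTTPS GET of /healthz on the primary apiserver (the body is literally "ok"), followed by a GET of /version to read the control-plane version. A minimal net/http sketch of the same probe; it skips certificate verification and relies on the cluster's default anonymous access to /healthz, whereas the real client presents the cluster CA and client certificates.

// Minimal sketch of the /healthz probe; TLS verification is skipped for brevity only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.3:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
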
	I0930 20:01:13.143001   26315 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 20:01:13.317422   26315 request.go:632] Waited for 174.359049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:01:13.317472   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:01:13.317478   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:13.317484   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:13.317488   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:13.321962   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:01:13.326370   26315 system_pods.go:59] 17 kube-system pods found
	I0930 20:01:13.326406   26315 system_pods.go:61] "coredns-7c65d6cfc9-x7zjp" [b5b20ed2-1d94-49b9-ab9e-17e27d1012d0] Running
	I0930 20:01:13.326411   26315 system_pods.go:61] "coredns-7c65d6cfc9-z4bkv" [c6ba0288-138e-4690-a68d-6d6378e28deb] Running
	I0930 20:01:13.326415   26315 system_pods.go:61] "etcd-ha-805293" [399ae7f6-cec9-4e8d-bda2-6c85dbcc5613] Running
	I0930 20:01:13.326420   26315 system_pods.go:61] "etcd-ha-805293-m02" [06ff461f-0ed1-4010-bcf7-1e82e4a589eb] Running
	I0930 20:01:13.326425   26315 system_pods.go:61] "kindnet-lfldt" [62cfaae6-e635-4ba4-a0db-77d008d12706] Running
	I0930 20:01:13.326429   26315 system_pods.go:61] "kindnet-slhtm" [a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88] Running
	I0930 20:01:13.326432   26315 system_pods.go:61] "kube-apiserver-ha-805293" [e975ca94-0069-4dfc-bc42-fa14fff226d5] Running
	I0930 20:01:13.326435   26315 system_pods.go:61] "kube-apiserver-ha-805293-m02" [c0f6d06d-f2d3-4796-ba43-16db58da16f7] Running
	I0930 20:01:13.326438   26315 system_pods.go:61] "kube-controller-manager-ha-805293" [01616da3-61eb-494b-a55c-28acaa308938] Running
	I0930 20:01:13.326442   26315 system_pods.go:61] "kube-controller-manager-ha-805293-m02" [14e035c1-fd94-43ab-aa98-3f20108eba57] Running
	I0930 20:01:13.326445   26315 system_pods.go:61] "kube-proxy-6gnt4" [a90b0c3f-e9c3-4cb9-8773-8253bd72ab51] Running
	I0930 20:01:13.326448   26315 system_pods.go:61] "kube-proxy-vptrg" [324c92ea-b82f-4efa-b63c-4c590bbf214d] Running
	I0930 20:01:13.326451   26315 system_pods.go:61] "kube-scheduler-ha-805293" [fbff9dea-1599-43ab-bb92-df8c5231bb87] Running
	I0930 20:01:13.326454   26315 system_pods.go:61] "kube-scheduler-ha-805293-m02" [9e69f915-83ac-48de-9bd6-3d245a2e82be] Running
	I0930 20:01:13.326457   26315 system_pods.go:61] "kube-vip-ha-805293" [9c629f9e-1b42-4680-9fd8-2dae4cec07f8] Running
	I0930 20:01:13.326459   26315 system_pods.go:61] "kube-vip-ha-805293-m02" [ec99538b-4f84-4078-b64d-23086cbf2c45] Running
	I0930 20:01:13.326462   26315 system_pods.go:61] "storage-provisioner" [1912fdf8-d789-4ba9-99ff-c87ccbf330ec] Running
	I0930 20:01:13.326467   26315 system_pods.go:74] duration metric: took 183.46129ms to wait for pod list to return data ...
	I0930 20:01:13.326477   26315 default_sa.go:34] waiting for default service account to be created ...
	I0930 20:01:13.516843   26315 request.go:632] Waited for 190.295336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/default/serviceaccounts
	I0930 20:01:13.516914   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/default/serviceaccounts
	I0930 20:01:13.516919   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:13.516926   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:13.516929   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:13.520919   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:01:13.521167   26315 default_sa.go:45] found service account: "default"
	I0930 20:01:13.521184   26315 default_sa.go:55] duration metric: took 194.701824ms for default service account to be created ...
	I0930 20:01:13.521193   26315 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 20:01:13.717380   26315 request.go:632] Waited for 196.119354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:01:13.717451   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:01:13.717458   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:13.717467   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:13.717471   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:13.722690   26315 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 20:01:13.727139   26315 system_pods.go:86] 17 kube-system pods found
	I0930 20:01:13.727168   26315 system_pods.go:89] "coredns-7c65d6cfc9-x7zjp" [b5b20ed2-1d94-49b9-ab9e-17e27d1012d0] Running
	I0930 20:01:13.727174   26315 system_pods.go:89] "coredns-7c65d6cfc9-z4bkv" [c6ba0288-138e-4690-a68d-6d6378e28deb] Running
	I0930 20:01:13.727179   26315 system_pods.go:89] "etcd-ha-805293" [399ae7f6-cec9-4e8d-bda2-6c85dbcc5613] Running
	I0930 20:01:13.727184   26315 system_pods.go:89] "etcd-ha-805293-m02" [06ff461f-0ed1-4010-bcf7-1e82e4a589eb] Running
	I0930 20:01:13.727188   26315 system_pods.go:89] "kindnet-lfldt" [62cfaae6-e635-4ba4-a0db-77d008d12706] Running
	I0930 20:01:13.727193   26315 system_pods.go:89] "kindnet-slhtm" [a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88] Running
	I0930 20:01:13.727198   26315 system_pods.go:89] "kube-apiserver-ha-805293" [e975ca94-0069-4dfc-bc42-fa14fff226d5] Running
	I0930 20:01:13.727204   26315 system_pods.go:89] "kube-apiserver-ha-805293-m02" [c0f6d06d-f2d3-4796-ba43-16db58da16f7] Running
	I0930 20:01:13.727209   26315 system_pods.go:89] "kube-controller-manager-ha-805293" [01616da3-61eb-494b-a55c-28acaa308938] Running
	I0930 20:01:13.727217   26315 system_pods.go:89] "kube-controller-manager-ha-805293-m02" [14e035c1-fd94-43ab-aa98-3f20108eba57] Running
	I0930 20:01:13.727230   26315 system_pods.go:89] "kube-proxy-6gnt4" [a90b0c3f-e9c3-4cb9-8773-8253bd72ab51] Running
	I0930 20:01:13.727235   26315 system_pods.go:89] "kube-proxy-vptrg" [324c92ea-b82f-4efa-b63c-4c590bbf214d] Running
	I0930 20:01:13.727241   26315 system_pods.go:89] "kube-scheduler-ha-805293" [fbff9dea-1599-43ab-bb92-df8c5231bb87] Running
	I0930 20:01:13.727247   26315 system_pods.go:89] "kube-scheduler-ha-805293-m02" [9e69f915-83ac-48de-9bd6-3d245a2e82be] Running
	I0930 20:01:13.727252   26315 system_pods.go:89] "kube-vip-ha-805293" [9c629f9e-1b42-4680-9fd8-2dae4cec07f8] Running
	I0930 20:01:13.727257   26315 system_pods.go:89] "kube-vip-ha-805293-m02" [ec99538b-4f84-4078-b64d-23086cbf2c45] Running
	I0930 20:01:13.727261   26315 system_pods.go:89] "storage-provisioner" [1912fdf8-d789-4ba9-99ff-c87ccbf330ec] Running
	I0930 20:01:13.727270   26315 system_pods.go:126] duration metric: took 206.072644ms to wait for k8s-apps to be running ...
	I0930 20:01:13.727277   26315 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 20:01:13.727327   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 20:01:13.741981   26315 system_svc.go:56] duration metric: took 14.693769ms WaitForService to wait for kubelet
	I0930 20:01:13.742010   26315 kubeadm.go:582] duration metric: took 22.639532003s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
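
The "k8s-apps running" check above amounts to listing the kube-system pods and requiring each to report phase Running; the kubelet check is simply systemctl is-active over SSH. A sketch of the pod-side check; allSystemPodsRunning is a hypothetical helper and assumes a clientset built as in the earlier sketch.

// Sketch of the kube-system pod check: every pod must report phase Running.
package syspods

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func allSystemPodsRunning(client kubernetes.Interface) error {
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return fmt.Errorf("pod %q is %s, not Running", p.Name, p.Status.Phase)
		}
	}
	return nil
}
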
	I0930 20:01:13.742027   26315 node_conditions.go:102] verifying NodePressure condition ...
	I0930 20:01:13.917345   26315 request.go:632] Waited for 175.232926ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes
	I0930 20:01:13.917397   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes
	I0930 20:01:13.917402   26315 round_trippers.go:469] Request Headers:
	I0930 20:01:13.917410   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:01:13.917413   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:01:13.921853   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:01:13.922642   26315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:01:13.922674   26315 node_conditions.go:123] node cpu capacity is 2
	I0930 20:01:13.922690   26315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:01:13.922694   26315 node_conditions.go:123] node cpu capacity is 2
	I0930 20:01:13.922699   26315 node_conditions.go:105] duration metric: took 180.667513ms to run NodePressure ...
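The node_conditions check above issues a raw GET against /api/v1/nodes and reads each node's ephemeral-storage and CPU capacity. Below is a minimal, self-contained sketch of that parsing step in Go; the nodeList struct and the hard-coded sample (values mirror the log) stand in for the live API response and are not minikube's actual types.

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    // nodeList models just the fields a NodePressure-style check needs
    // from GET /api/v1/nodes.
    type nodeList struct {
        Items []struct {
            Metadata struct {
                Name string `json:"name"`
            } `json:"metadata"`
            Status struct {
                Capacity map[string]string `json:"capacity"`
            } `json:"status"`
        } `json:"items"`
    }

    func main() {
        // Stand-in for the API response body; a real caller would decode
        // resp.Body from GET https://<apiserver>:8443/api/v1/nodes instead.
        sample := []byte(`{"items":[
          {"metadata":{"name":"ha-805293"},
           "status":{"capacity":{"cpu":"2","ephemeral-storage":"17734596Ki"}}},
          {"metadata":{"name":"ha-805293-m02"},
           "status":{"capacity":{"cpu":"2","ephemeral-storage":"17734596Ki"}}}]}`)

        var nl nodeList
        if err := json.Unmarshal(sample, &nl); err != nil {
            log.Fatal(err)
        }
        for _, n := range nl.Items {
            fmt.Printf("node %s: ephemeral storage %s, cpu %s\n",
                n.Metadata.Name,
                n.Status.Capacity["ephemeral-storage"],
                n.Status.Capacity["cpu"])
        }
    }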
	I0930 20:01:13.922708   26315 start.go:241] waiting for startup goroutines ...
	I0930 20:01:13.922733   26315 start.go:255] writing updated cluster config ...
	I0930 20:01:13.925048   26315 out.go:201] 
	I0930 20:01:13.926843   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:01:13.926954   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:01:13.928893   26315 out.go:177] * Starting "ha-805293-m03" control-plane node in "ha-805293" cluster
	I0930 20:01:13.930308   26315 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 20:01:13.930336   26315 cache.go:56] Caching tarball of preloaded images
	I0930 20:01:13.930467   26315 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 20:01:13.930485   26315 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 20:01:13.930582   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:01:13.930765   26315 start.go:360] acquireMachinesLock for ha-805293-m03: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 20:01:13.930817   26315 start.go:364] duration metric: took 28.082µs to acquireMachinesLock for "ha-805293-m03"
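The acquireMachinesLock call above is configured with Delay:500ms and Timeout:13m0s. The sketch below illustrates that general pattern only: a one-slot lock polled at a fixed delay until an overall timeout expires. The machineLock type and its method names are invented for illustration and are not minikube's implementation.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // machineLock is a one-slot semaphore: whoever holds the token holds the lock.
    type machineLock struct{ slot chan struct{} }

    func newMachineLock() *machineLock {
        l := &machineLock{slot: make(chan struct{}, 1)}
        l.slot <- struct{}{} // the lock starts free
        return l
    }

    // acquire polls for the lock every delay, giving up after timeout,
    // mirroring the Delay/Timeout fields logged above.
    func (l *machineLock) acquire(name string, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            select {
            case <-l.slot:
                return nil // got the lock
            default:
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for lock " + name)
            }
            time.Sleep(delay)
        }
    }

    func (l *machineLock) release() { l.slot <- struct{}{} }

    func main() {
        lock := newMachineLock()
        start := time.Now()
        if err := lock.acquire("ha-805293-m03", 500*time.Millisecond, 13*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("took %s to acquire machines lock\n", time.Since(start))
        lock.release()
    }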
	I0930 20:01:13.930836   26315 start.go:93] Provisioning new machine with config: &{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:01:13.930923   26315 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0930 20:01:13.932766   26315 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 20:01:13.932890   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:01:13.932929   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:01:13.949248   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36881
	I0930 20:01:13.949763   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:01:13.950280   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:01:13.950304   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:01:13.950634   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:01:13.950970   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetMachineName
	I0930 20:01:13.951189   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:13.951448   26315 start.go:159] libmachine.API.Create for "ha-805293" (driver="kvm2")
	I0930 20:01:13.951489   26315 client.go:168] LocalClient.Create starting
	I0930 20:01:13.951565   26315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem
	I0930 20:01:13.951611   26315 main.go:141] libmachine: Decoding PEM data...
	I0930 20:01:13.951631   26315 main.go:141] libmachine: Parsing certificate...
	I0930 20:01:13.951696   26315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem
	I0930 20:01:13.951724   26315 main.go:141] libmachine: Decoding PEM data...
	I0930 20:01:13.951742   26315 main.go:141] libmachine: Parsing certificate...
	I0930 20:01:13.951770   26315 main.go:141] libmachine: Running pre-create checks...
	I0930 20:01:13.951780   26315 main.go:141] libmachine: (ha-805293-m03) Calling .PreCreateCheck
	I0930 20:01:13.951958   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetConfigRaw
	I0930 20:01:13.952389   26315 main.go:141] libmachine: Creating machine...
	I0930 20:01:13.952404   26315 main.go:141] libmachine: (ha-805293-m03) Calling .Create
	I0930 20:01:13.952539   26315 main.go:141] libmachine: (ha-805293-m03) Creating KVM machine...
	I0930 20:01:13.953896   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found existing default KVM network
	I0930 20:01:13.954082   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found existing private KVM network mk-ha-805293
	I0930 20:01:13.954276   26315 main.go:141] libmachine: (ha-805293-m03) Setting up store path in /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03 ...
	I0930 20:01:13.954303   26315 main.go:141] libmachine: (ha-805293-m03) Building disk image from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 20:01:13.954425   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:13.954267   27054 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:01:13.954521   26315 main.go:141] libmachine: (ha-805293-m03) Downloading /home/jenkins/minikube-integration/19736-7672/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 20:01:14.186819   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:14.186689   27054 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa...
	I0930 20:01:14.467265   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:14.467127   27054 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/ha-805293-m03.rawdisk...
	I0930 20:01:14.467311   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Writing magic tar header
	I0930 20:01:14.467327   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Writing SSH key tar header
	I0930 20:01:14.467340   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:14.467280   27054 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03 ...
	I0930 20:01:14.467434   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03
	I0930 20:01:14.467495   26315 main.go:141] libmachine: (ha-805293-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03 (perms=drwx------)
	I0930 20:01:14.467509   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines
	I0930 20:01:14.467520   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:01:14.467545   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672
	I0930 20:01:14.467563   26315 main.go:141] libmachine: (ha-805293-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines (perms=drwxr-xr-x)
	I0930 20:01:14.467577   26315 main.go:141] libmachine: (ha-805293-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube (perms=drwxr-xr-x)
	I0930 20:01:14.467590   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 20:01:14.467603   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home/jenkins
	I0930 20:01:14.467614   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Checking permissions on dir: /home
	I0930 20:01:14.467622   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Skipping /home - not owner
	I0930 20:01:14.467636   26315 main.go:141] libmachine: (ha-805293-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672 (perms=drwxrwxr-x)
	I0930 20:01:14.467659   26315 main.go:141] libmachine: (ha-805293-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 20:01:14.467677   26315 main.go:141] libmachine: (ha-805293-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 20:01:14.467702   26315 main.go:141] libmachine: (ha-805293-m03) Creating domain...
	I0930 20:01:14.468847   26315 main.go:141] libmachine: (ha-805293-m03) define libvirt domain using xml: 
	I0930 20:01:14.468871   26315 main.go:141] libmachine: (ha-805293-m03) <domain type='kvm'>
	I0930 20:01:14.468881   26315 main.go:141] libmachine: (ha-805293-m03)   <name>ha-805293-m03</name>
	I0930 20:01:14.468899   26315 main.go:141] libmachine: (ha-805293-m03)   <memory unit='MiB'>2200</memory>
	I0930 20:01:14.468932   26315 main.go:141] libmachine: (ha-805293-m03)   <vcpu>2</vcpu>
	I0930 20:01:14.468950   26315 main.go:141] libmachine: (ha-805293-m03)   <features>
	I0930 20:01:14.468968   26315 main.go:141] libmachine: (ha-805293-m03)     <acpi/>
	I0930 20:01:14.468978   26315 main.go:141] libmachine: (ha-805293-m03)     <apic/>
	I0930 20:01:14.469001   26315 main.go:141] libmachine: (ha-805293-m03)     <pae/>
	I0930 20:01:14.469014   26315 main.go:141] libmachine: (ha-805293-m03)     
	I0930 20:01:14.469041   26315 main.go:141] libmachine: (ha-805293-m03)   </features>
	I0930 20:01:14.469062   26315 main.go:141] libmachine: (ha-805293-m03)   <cpu mode='host-passthrough'>
	I0930 20:01:14.469074   26315 main.go:141] libmachine: (ha-805293-m03)   
	I0930 20:01:14.469080   26315 main.go:141] libmachine: (ha-805293-m03)   </cpu>
	I0930 20:01:14.469091   26315 main.go:141] libmachine: (ha-805293-m03)   <os>
	I0930 20:01:14.469107   26315 main.go:141] libmachine: (ha-805293-m03)     <type>hvm</type>
	I0930 20:01:14.469115   26315 main.go:141] libmachine: (ha-805293-m03)     <boot dev='cdrom'/>
	I0930 20:01:14.469124   26315 main.go:141] libmachine: (ha-805293-m03)     <boot dev='hd'/>
	I0930 20:01:14.469143   26315 main.go:141] libmachine: (ha-805293-m03)     <bootmenu enable='no'/>
	I0930 20:01:14.469154   26315 main.go:141] libmachine: (ha-805293-m03)   </os>
	I0930 20:01:14.469164   26315 main.go:141] libmachine: (ha-805293-m03)   <devices>
	I0930 20:01:14.469248   26315 main.go:141] libmachine: (ha-805293-m03)     <disk type='file' device='cdrom'>
	I0930 20:01:14.469284   26315 main.go:141] libmachine: (ha-805293-m03)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/boot2docker.iso'/>
	I0930 20:01:14.469299   26315 main.go:141] libmachine: (ha-805293-m03)       <target dev='hdc' bus='scsi'/>
	I0930 20:01:14.469305   26315 main.go:141] libmachine: (ha-805293-m03)       <readonly/>
	I0930 20:01:14.469314   26315 main.go:141] libmachine: (ha-805293-m03)     </disk>
	I0930 20:01:14.469321   26315 main.go:141] libmachine: (ha-805293-m03)     <disk type='file' device='disk'>
	I0930 20:01:14.469350   26315 main.go:141] libmachine: (ha-805293-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 20:01:14.469366   26315 main.go:141] libmachine: (ha-805293-m03)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/ha-805293-m03.rawdisk'/>
	I0930 20:01:14.469381   26315 main.go:141] libmachine: (ha-805293-m03)       <target dev='hda' bus='virtio'/>
	I0930 20:01:14.469387   26315 main.go:141] libmachine: (ha-805293-m03)     </disk>
	I0930 20:01:14.469400   26315 main.go:141] libmachine: (ha-805293-m03)     <interface type='network'>
	I0930 20:01:14.469410   26315 main.go:141] libmachine: (ha-805293-m03)       <source network='mk-ha-805293'/>
	I0930 20:01:14.469421   26315 main.go:141] libmachine: (ha-805293-m03)       <model type='virtio'/>
	I0930 20:01:14.469427   26315 main.go:141] libmachine: (ha-805293-m03)     </interface>
	I0930 20:01:14.469437   26315 main.go:141] libmachine: (ha-805293-m03)     <interface type='network'>
	I0930 20:01:14.469456   26315 main.go:141] libmachine: (ha-805293-m03)       <source network='default'/>
	I0930 20:01:14.469482   26315 main.go:141] libmachine: (ha-805293-m03)       <model type='virtio'/>
	I0930 20:01:14.469512   26315 main.go:141] libmachine: (ha-805293-m03)     </interface>
	I0930 20:01:14.469521   26315 main.go:141] libmachine: (ha-805293-m03)     <serial type='pty'>
	I0930 20:01:14.469540   26315 main.go:141] libmachine: (ha-805293-m03)       <target port='0'/>
	I0930 20:01:14.469572   26315 main.go:141] libmachine: (ha-805293-m03)     </serial>
	I0930 20:01:14.469589   26315 main.go:141] libmachine: (ha-805293-m03)     <console type='pty'>
	I0930 20:01:14.469603   26315 main.go:141] libmachine: (ha-805293-m03)       <target type='serial' port='0'/>
	I0930 20:01:14.469614   26315 main.go:141] libmachine: (ha-805293-m03)     </console>
	I0930 20:01:14.469623   26315 main.go:141] libmachine: (ha-805293-m03)     <rng model='virtio'>
	I0930 20:01:14.469631   26315 main.go:141] libmachine: (ha-805293-m03)       <backend model='random'>/dev/random</backend>
	I0930 20:01:14.469642   26315 main.go:141] libmachine: (ha-805293-m03)     </rng>
	I0930 20:01:14.469648   26315 main.go:141] libmachine: (ha-805293-m03)     
	I0930 20:01:14.469658   26315 main.go:141] libmachine: (ha-805293-m03)     
	I0930 20:01:14.469664   26315 main.go:141] libmachine: (ha-805293-m03)   </devices>
	I0930 20:01:14.469672   26315 main.go:141] libmachine: (ha-805293-m03) </domain>
	I0930 20:01:14.469677   26315 main.go:141] libmachine: (ha-805293-m03) 
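The kvm2 driver renders the domain definition above as XML before handing it to libvirt. Below is a trimmed sketch of producing a definition of that shape with text/template; the template text, struct fields, and disk path are illustrative placeholders rather than the driver's actual template.

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    // domainConfig carries the handful of values that vary per machine.
    type domainConfig struct {
        Name     string
        MemoryMB int
        CPUs     int
        DiskPath string
        Network  string
    }

    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
      <os><type>hvm</type><boot dev='hd'/></os>
      <devices>
        <disk type='file' device='disk'>
          <source file='{{.DiskPath}}'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='{{.Network}}'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>
    `

    func main() {
        cfg := domainConfig{
            Name:     "ha-805293-m03",
            MemoryMB: 2200,
            CPUs:     2,
            DiskPath: "/path/to/ha-805293-m03.rawdisk", // placeholder
            Network:  "mk-ha-805293",
        }
        t := template.Must(template.New("domain").Parse(domainTmpl))
        if err := t.Execute(os.Stdout, cfg); err != nil {
            log.Fatal(err)
        }
    }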
	I0930 20:01:14.476673   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:7e:5d:5f in network default
	I0930 20:01:14.477269   26315 main.go:141] libmachine: (ha-805293-m03) Ensuring networks are active...
	I0930 20:01:14.477295   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:14.478121   26315 main.go:141] libmachine: (ha-805293-m03) Ensuring network default is active
	I0930 20:01:14.478526   26315 main.go:141] libmachine: (ha-805293-m03) Ensuring network mk-ha-805293 is active
	I0930 20:01:14.478957   26315 main.go:141] libmachine: (ha-805293-m03) Getting domain xml...
	I0930 20:01:14.479718   26315 main.go:141] libmachine: (ha-805293-m03) Creating domain...
	I0930 20:01:15.747292   26315 main.go:141] libmachine: (ha-805293-m03) Waiting to get IP...
	I0930 20:01:15.748220   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:15.748679   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:15.748743   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:15.748666   27054 retry.go:31] will retry after 284.785124ms: waiting for machine to come up
	I0930 20:01:16.035256   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:16.035716   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:16.035831   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:16.035661   27054 retry.go:31] will retry after 335.488124ms: waiting for machine to come up
	I0930 20:01:16.373109   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:16.373683   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:16.373706   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:16.373645   27054 retry.go:31] will retry after 461.768045ms: waiting for machine to come up
	I0930 20:01:16.837400   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:16.837942   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:16.838002   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:16.837899   27054 retry.go:31] will retry after 451.939776ms: waiting for machine to come up
	I0930 20:01:17.291224   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:17.291638   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:17.291662   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:17.291600   27054 retry.go:31] will retry after 601.468058ms: waiting for machine to come up
	I0930 20:01:17.894045   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:17.894474   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:17.894502   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:17.894444   27054 retry.go:31] will retry after 685.014003ms: waiting for machine to come up
	I0930 20:01:18.581469   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:18.581905   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:18.581940   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:18.581886   27054 retry.go:31] will retry after 901.632295ms: waiting for machine to come up
	I0930 20:01:19.485606   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:19.486144   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:19.486174   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:19.486068   27054 retry.go:31] will retry after 1.002316049s: waiting for machine to come up
	I0930 20:01:20.489568   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:20.490064   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:20.490086   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:20.490017   27054 retry.go:31] will retry after 1.384559526s: waiting for machine to come up
	I0930 20:01:21.875542   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:21.875885   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:21.875904   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:21.875821   27054 retry.go:31] will retry after 1.560882287s: waiting for machine to come up
	I0930 20:01:23.438575   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:23.439019   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:23.439051   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:23.438971   27054 retry.go:31] will retry after 1.966635221s: waiting for machine to come up
	I0930 20:01:25.407626   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:25.408136   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:25.408170   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:25.408088   27054 retry.go:31] will retry after 2.861827785s: waiting for machine to come up
	I0930 20:01:28.272997   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:28.273395   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:28.273417   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:28.273357   27054 retry.go:31] will retry after 2.760760648s: waiting for machine to come up
	I0930 20:01:31.035244   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:31.035758   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find current IP address of domain ha-805293-m03 in network mk-ha-805293
	I0930 20:01:31.035806   26315 main.go:141] libmachine: (ha-805293-m03) DBG | I0930 20:01:31.035729   27054 retry.go:31] will retry after 3.889423891s: waiting for machine to come up
	I0930 20:01:34.927053   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:34.927650   26315 main.go:141] libmachine: (ha-805293-m03) Found IP for machine: 192.168.39.227
	I0930 20:01:34.927682   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has current primary IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
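The "waiting for machine to come up" lines above are a retry loop that re-queries the DHCP lease for the machine's MAC address with a growing, jittered delay until an address appears. Below is a generic sketch of that retry shape; waitForIP and the stand-in lookup function are hypothetical helpers, not minikube's retry code.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP retries lookup with a growing, slightly jittered delay until it
    // returns an address or the deadline passes, echoing the retry.go lines above.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for attempt := 1; ; attempt++ {
            ip, err := lookup()
            if err == nil && ip != "" {
                return ip, nil
            }
            if time.Now().After(deadline) {
                return "", errors.New("timed out waiting for machine to come up")
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("attempt %d: no IP yet, will retry after %s\n", attempt, wait)
            time.Sleep(wait)
            delay = delay * 3 / 2 // back off gradually, roughly the 0.3s..3.9s spread in the log
        }
    }

    func main() {
        // Stand-in lookup: pretends the DHCP lease shows up on the fifth try.
        calls := 0
        lookup := func() (string, error) {
            calls++
            if calls < 5 {
                return "", errors.New("no lease yet")
            }
            return "192.168.39.227", nil
        }
        ip, err := waitForIP(lookup, 2*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("Found IP for machine:", ip)
    }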
	I0930 20:01:34.927690   26315 main.go:141] libmachine: (ha-805293-m03) Reserving static IP address...
	I0930 20:01:34.928071   26315 main.go:141] libmachine: (ha-805293-m03) DBG | unable to find host DHCP lease matching {name: "ha-805293-m03", mac: "52:54:00:ce:66:df", ip: "192.168.39.227"} in network mk-ha-805293
	I0930 20:01:35.005095   26315 main.go:141] libmachine: (ha-805293-m03) Reserved static IP address: 192.168.39.227
	I0930 20:01:35.005128   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Getting to WaitForSSH function...
	I0930 20:01:35.005135   26315 main.go:141] libmachine: (ha-805293-m03) Waiting for SSH to be available...
	I0930 20:01:35.007521   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.008053   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.008080   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.008244   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Using SSH client type: external
	I0930 20:01:35.008262   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa (-rw-------)
	I0930 20:01:35.008294   26315 main.go:141] libmachine: (ha-805293-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 20:01:35.008309   26315 main.go:141] libmachine: (ha-805293-m03) DBG | About to run SSH command:
	I0930 20:01:35.008328   26315 main.go:141] libmachine: (ha-805293-m03) DBG | exit 0
	I0930 20:01:35.131490   26315 main.go:141] libmachine: (ha-805293-m03) DBG | SSH cmd err, output: <nil>: 
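For WaitForSSH, the log shows an external ssh client assembled with the option list above and used to run `exit 0` until it succeeds. A sketch of composing such a probe with os/exec follows; the key path is a placeholder and error handling is reduced to a single return value.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshProbe runs `exit 0` on the target with the same style of options the
    // log shows for the external client.
    func sshProbe(user, host, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            fmt.Sprintf("%s@%s", user, host),
            "exit 0",
        }
        out, err := exec.Command("ssh", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("ssh probe failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        err := sshProbe("docker", "192.168.39.227",
            "/path/to/machines/ha-805293-m03/id_rsa") // placeholder key path
        fmt.Println("SSH cmd err:", err)
    }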
	I0930 20:01:35.131786   26315 main.go:141] libmachine: (ha-805293-m03) KVM machine creation complete!
	I0930 20:01:35.132088   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetConfigRaw
	I0930 20:01:35.132882   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:35.133160   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:35.133330   26315 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 20:01:35.133343   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetState
	I0930 20:01:35.134758   26315 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 20:01:35.134778   26315 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 20:01:35.134789   26315 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 20:01:35.134797   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.137025   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.137368   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.137394   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.137501   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:35.137683   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.137839   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.137997   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:35.138162   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:01:35.138394   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0930 20:01:35.138405   26315 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 20:01:35.238733   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 20:01:35.238763   26315 main.go:141] libmachine: Detecting the provisioner...
	I0930 20:01:35.238775   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.242022   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.242527   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.242562   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.242839   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:35.243050   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.243235   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.243427   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:35.243630   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:01:35.243832   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0930 20:01:35.243850   26315 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 20:01:35.348183   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 20:01:35.348252   26315 main.go:141] libmachine: found compatible host: buildroot
	I0930 20:01:35.348261   26315 main.go:141] libmachine: Provisioning with buildroot...
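Provisioner detection above boils down to running `cat /etc/os-release` over SSH and reading the ID field. A small sketch of that parsing, fed with the output captured in the log:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseOSRelease turns `cat /etc/os-release` output into a key/value map,
    // which is enough to detect the provisioner (ID=buildroot above).
    func parseOSRelease(out string) map[string]string {
        info := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(out))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || !strings.Contains(line, "=") {
                continue
            }
            kv := strings.SplitN(line, "=", 2)
            info[kv[0]] = strings.Trim(kv[1], `"`)
        }
        return info
    }

    func main() {
        out := `NAME=Buildroot
    VERSION=2023.02.9-dirty
    ID=buildroot
    VERSION_ID=2023.02.9
    PRETTY_NAME="Buildroot 2023.02.9"`

        info := parseOSRelease(out)
        if info["ID"] == "buildroot" {
            fmt.Println("found compatible host:", info["ID"], info["VERSION_ID"])
        } else {
            fmt.Println("unsupported host:", info["ID"])
        }
    }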
	I0930 20:01:35.348268   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetMachineName
	I0930 20:01:35.348498   26315 buildroot.go:166] provisioning hostname "ha-805293-m03"
	I0930 20:01:35.348524   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetMachineName
	I0930 20:01:35.348749   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.351890   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.352398   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.352424   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.352577   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:35.352756   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.352894   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.353007   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:35.353167   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:01:35.353367   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0930 20:01:35.353384   26315 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-805293-m03 && echo "ha-805293-m03" | sudo tee /etc/hostname
	I0930 20:01:35.473967   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-805293-m03
	
	I0930 20:01:35.473997   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.476729   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.477054   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.477085   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.477369   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:35.477567   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.477748   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.477907   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:35.478077   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:01:35.478253   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0930 20:01:35.478270   26315 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-805293-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-805293-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-805293-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 20:01:35.591650   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 20:01:35.591680   26315 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 20:01:35.591697   26315 buildroot.go:174] setting up certificates
	I0930 20:01:35.591707   26315 provision.go:84] configureAuth start
	I0930 20:01:35.591715   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetMachineName
	I0930 20:01:35.591952   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetIP
	I0930 20:01:35.594901   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.595262   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.595286   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.595420   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.598100   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.598602   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.598626   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.598829   26315 provision.go:143] copyHostCerts
	I0930 20:01:35.598868   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:01:35.598917   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 20:01:35.598931   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:01:35.599012   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 20:01:35.599111   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:01:35.599134   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 20:01:35.599141   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:01:35.599179   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 20:01:35.599243   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:01:35.599270   26315 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 20:01:35.599279   26315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:01:35.599331   26315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 20:01:35.599408   26315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.ha-805293-m03 san=[127.0.0.1 192.168.39.227 ha-805293-m03 localhost minikube]
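provision.go above generates a server certificate whose SANs cover 127.0.0.1, the machine IP, the hostname, localhost, and minikube. A compressed sketch of that step with crypto/x509 follows; for brevity it self-signs instead of signing with the separate ca.pem/ca-key.pem pair used in the real flow.

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Key for the server cert; the real flow signs with the CA in .minikube/certs.
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            log.Fatal(err)
        }

        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-805293-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the log: IPs plus hostnames.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.227")},
            DNSNames:    []string{"ha-805293-m03", "localhost", "minikube"},
        }

        // Self-signed here; the real provisioner uses ca.pem/ca-key.pem as parent.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            log.Fatal(err)
        }
    }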
	I0930 20:01:35.796149   26315 provision.go:177] copyRemoteCerts
	I0930 20:01:35.796206   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 20:01:35.796242   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.798946   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.799340   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.799368   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.799648   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:35.799848   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.800023   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:35.800180   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa Username:docker}
	I0930 20:01:35.882427   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 20:01:35.882508   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 20:01:35.906794   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 20:01:35.906860   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 20:01:35.932049   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 20:01:35.932131   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 20:01:35.957426   26315 provision.go:87] duration metric: took 365.707269ms to configureAuth
	I0930 20:01:35.957459   26315 buildroot.go:189] setting minikube options for container-runtime
	I0930 20:01:35.957679   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:01:35.957795   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:35.960499   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.960961   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:35.960996   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:35.961176   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:35.961403   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.961575   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:35.961765   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:35.961966   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:01:35.962139   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0930 20:01:35.962153   26315 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 20:01:36.182253   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 20:01:36.182280   26315 main.go:141] libmachine: Checking connection to Docker...
	I0930 20:01:36.182288   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetURL
	I0930 20:01:36.183907   26315 main.go:141] libmachine: (ha-805293-m03) DBG | Using libvirt version 6000000
	I0930 20:01:36.186215   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.186549   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.186590   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.186762   26315 main.go:141] libmachine: Docker is up and running!
	I0930 20:01:36.186776   26315 main.go:141] libmachine: Reticulating splines...
	I0930 20:01:36.186783   26315 client.go:171] duration metric: took 22.235285837s to LocalClient.Create
	I0930 20:01:36.186801   26315 start.go:167] duration metric: took 22.235357522s to libmachine.API.Create "ha-805293"
	I0930 20:01:36.186810   26315 start.go:293] postStartSetup for "ha-805293-m03" (driver="kvm2")
	I0930 20:01:36.186826   26315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 20:01:36.186842   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:36.187054   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 20:01:36.187077   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:36.189228   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.189551   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.189577   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.189754   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:36.189932   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:36.190098   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:36.190211   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa Username:docker}
	I0930 20:01:36.269942   26315 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 20:01:36.274174   26315 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 20:01:36.274204   26315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 20:01:36.274281   26315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 20:01:36.274373   26315 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 20:01:36.274383   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /etc/ssl/certs/148752.pem
	I0930 20:01:36.274490   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 20:01:36.284037   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:01:36.308961   26315 start.go:296] duration metric: took 122.135978ms for postStartSetup
	I0930 20:01:36.309010   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetConfigRaw
	I0930 20:01:36.309613   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetIP
	I0930 20:01:36.312777   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.313257   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.313307   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.313687   26315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:01:36.313894   26315 start.go:128] duration metric: took 22.382961104s to createHost
	I0930 20:01:36.313917   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:36.316229   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.316599   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.316627   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.316783   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:36.316957   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:36.317109   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:36.317219   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:36.317366   26315 main.go:141] libmachine: Using SSH client type: native
	I0930 20:01:36.317526   26315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0930 20:01:36.317537   26315 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 20:01:36.419858   26315 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727726496.392744661
	
	I0930 20:01:36.419877   26315 fix.go:216] guest clock: 1727726496.392744661
	I0930 20:01:36.419884   26315 fix.go:229] Guest: 2024-09-30 20:01:36.392744661 +0000 UTC Remote: 2024-09-30 20:01:36.313905276 +0000 UTC m=+139.884995221 (delta=78.839385ms)
	I0930 20:01:36.419899   26315 fix.go:200] guest clock delta is within tolerance: 78.839385ms
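fix.go above reads the guest clock with `date +%s.%N` and compares it against the host-side timestamp, accepting the machine when the delta stays within tolerance. A sketch of that comparison using the values from the log; the 2s tolerance constant is illustrative, not the value minikube uses.

    package main

    import (
        "fmt"
        "log"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns `date +%s.%N` output ("1727726496.392744661")
    // into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
        if err != nil {
            return time.Time{}, err
        }
        sec := int64(secs)
        nsec := int64((secs - float64(sec)) * 1e9)
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1727726496.392744661")
        if err != nil {
            log.Fatal(err)
        }
        // Host-side reading taken from the log's "Remote:" timestamp.
        remote := time.Date(2024, 9, 30, 20, 1, 36, 313905276, time.UTC)

        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // illustrative threshold
        if delta <= tolerance {
            fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
        } else {
            fmt.Printf("guest clock delta %s exceeds tolerance, would resync\n", delta)
        }
    }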
	I0930 20:01:36.419904   26315 start.go:83] releasing machines lock for "ha-805293-m03", held for 22.489079696s
	I0930 20:01:36.419932   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:36.420201   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetIP
	I0930 20:01:36.422678   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.423024   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.423063   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.425360   26315 out.go:177] * Found network options:
	I0930 20:01:36.426711   26315 out.go:177]   - NO_PROXY=192.168.39.3,192.168.39.220
	W0930 20:01:36.427962   26315 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 20:01:36.427990   26315 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 20:01:36.428012   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:36.428657   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:36.428857   26315 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:01:36.428967   26315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 20:01:36.429007   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	W0930 20:01:36.429092   26315 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 20:01:36.429124   26315 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 20:01:36.429190   26315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 20:01:36.429211   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:01:36.431941   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.432202   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.432300   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.432322   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.432458   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:36.432598   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:36.432659   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:36.432683   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:36.432755   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:36.432845   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:01:36.432915   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa Username:docker}
	I0930 20:01:36.432995   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:01:36.433083   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:01:36.433164   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa Username:docker}
	I0930 20:01:36.661994   26315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 20:01:36.669285   26315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 20:01:36.669354   26315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 20:01:36.686879   26315 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 20:01:36.686911   26315 start.go:495] detecting cgroup driver to use...
	I0930 20:01:36.687008   26315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 20:01:36.703695   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 20:01:36.717831   26315 docker.go:217] disabling cri-docker service (if available) ...
	I0930 20:01:36.717898   26315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 20:01:36.732194   26315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 20:01:36.746205   26315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 20:01:36.873048   26315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 20:01:37.031067   26315 docker.go:233] disabling docker service ...
	I0930 20:01:37.031142   26315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 20:01:37.047034   26315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 20:01:37.059962   26315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 20:01:37.191501   26315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 20:01:37.302357   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 20:01:37.316910   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 20:01:37.336669   26315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 20:01:37.336739   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.347286   26315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 20:01:37.347361   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.357984   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.368059   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.379248   26315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 20:01:37.390460   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.401206   26315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:01:37.418758   26315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
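The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and switch CRI-O to the cgroupfs cgroup manager. The standard-library Go sketch below shows the same style of whole-line config rewrite; the file path and the pause_image / cgroup_manager keys come from the log, while the helper itself is illustrative rather than minikube's implementation.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfValue replaces the whole line defining key with `key = "value"`,
// mirroring the `sudo sed -i 's|^.*key = .*$|...|'` calls in the log.
func setConfValue(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.10"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	if err := setConfValue(conf, "cgroup_manager", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}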
	I0930 20:01:37.428841   26315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 20:01:37.438255   26315 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 20:01:37.438328   26315 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 20:01:37.451070   26315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 20:01:37.460818   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:01:37.578097   26315 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 20:01:37.670992   26315 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 20:01:37.671072   26315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 20:01:37.675792   26315 start.go:563] Will wait 60s for crictl version
	I0930 20:01:37.675847   26315 ssh_runner.go:195] Run: which crictl
	I0930 20:01:37.679190   26315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 20:01:37.718042   26315 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 20:01:37.718121   26315 ssh_runner.go:195] Run: crio --version
	I0930 20:01:37.745873   26315 ssh_runner.go:195] Run: crio --version
	I0930 20:01:37.774031   26315 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 20:01:37.775415   26315 out.go:177]   - env NO_PROXY=192.168.39.3
	I0930 20:01:37.776644   26315 out.go:177]   - env NO_PROXY=192.168.39.3,192.168.39.220
	I0930 20:01:37.777763   26315 main.go:141] libmachine: (ha-805293-m03) Calling .GetIP
	I0930 20:01:37.780596   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:37.780948   26315 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:01:37.780970   26315 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:01:37.781145   26315 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 20:01:37.785213   26315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
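The grep / bash one-liner above makes the host.minikube.internal entry in /etc/hosts idempotent: any stale line for that hostname is filtered out and a fresh one appended. A small Go sketch of the same idea follows, again illustrative rather than minikube's code; writing the real /etc/hosts requires root, so the path is a parameter and can point at a scratch file.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "<TAB>hostname" and
// appends a fresh "ip<TAB>hostname" entry, mirroring the grep -v / echo / cp
// pipeline in the log above.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // stale entry, drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}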
	I0930 20:01:37.797526   26315 mustload.go:65] Loading cluster: ha-805293
	I0930 20:01:37.797767   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:01:37.798120   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:01:37.798167   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:01:37.813162   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46385
	I0930 20:01:37.813567   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:01:37.814037   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:01:37.814052   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:01:37.814397   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:01:37.814604   26315 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 20:01:37.816041   26315 host.go:66] Checking if "ha-805293" exists ...
	I0930 20:01:37.816336   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:01:37.816371   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:01:37.831585   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37645
	I0930 20:01:37.832045   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:01:37.832532   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:01:37.832557   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:01:37.832860   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:01:37.833026   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:01:37.833192   26315 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293 for IP: 192.168.39.227
	I0930 20:01:37.833209   26315 certs.go:194] generating shared ca certs ...
	I0930 20:01:37.833229   26315 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:01:37.833416   26315 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 20:01:37.833471   26315 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 20:01:37.833484   26315 certs.go:256] generating profile certs ...
	I0930 20:01:37.833587   26315 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key
	I0930 20:01:37.833619   26315 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.07a59e55
	I0930 20:01:37.833638   26315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.07a59e55 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.3 192.168.39.220 192.168.39.227 192.168.39.254]
	I0930 20:01:38.116566   26315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.07a59e55 ...
	I0930 20:01:38.116596   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.07a59e55: {Name:mkc0cd033bb8a494a4cf8a08dfd67f55b67932e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:01:38.116763   26315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.07a59e55 ...
	I0930 20:01:38.116776   26315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.07a59e55: {Name:mk85317566d0a2f89680d96c44f0e865cd88a3f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:01:38.116847   26315 certs.go:381] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.07a59e55 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt
	I0930 20:01:38.116983   26315 certs.go:385] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.07a59e55 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key
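The certs.go lines above mint the apiserver serving certificate with the IP SANs of all three control-plane nodes plus the service and HA VIPs, then move it into place as apiserver.crt. The sketch below shows how such a certificate can be issued with Go's crypto/x509; the throwaway CA and the ECDSA keys are assumptions made so the example is self-contained, whereas minikube signs with its existing minikubeCA key. Error handling is elided for brevity.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA, purely for illustration.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the IP SANs listed in the log line above.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.3"), net.ParseIP("192.168.39.220"),
			net.ParseIP("192.168.39.227"), net.ParseIP("192.168.39.254"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}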
	I0930 20:01:38.117102   26315 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key
	I0930 20:01:38.117117   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 20:01:38.117131   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 20:01:38.117145   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 20:01:38.117158   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 20:01:38.117175   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 20:01:38.117187   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 20:01:38.117198   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 20:01:38.131699   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 20:01:38.131811   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 20:01:38.131856   26315 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 20:01:38.131870   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 20:01:38.131902   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 20:01:38.131926   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 20:01:38.131956   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 20:01:38.132010   26315 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:01:38.132045   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:01:38.132066   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem -> /usr/share/ca-certificates/14875.pem
	I0930 20:01:38.132084   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /usr/share/ca-certificates/148752.pem
	I0930 20:01:38.132129   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:01:38.135411   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:01:38.135848   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:01:38.135875   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:01:38.136103   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:01:38.136307   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:01:38.136477   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:01:38.136602   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:01:38.215899   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0930 20:01:38.221340   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0930 20:01:38.232045   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0930 20:01:38.236011   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0930 20:01:38.247009   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0930 20:01:38.250999   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0930 20:01:38.261524   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0930 20:01:38.265766   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0930 20:01:38.275973   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0930 20:01:38.279940   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0930 20:01:38.289617   26315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0930 20:01:38.293330   26315 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0930 20:01:38.303037   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 20:01:38.328067   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 20:01:38.353124   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 20:01:38.377109   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 20:01:38.402737   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0930 20:01:38.432128   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 20:01:38.459728   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 20:01:38.484047   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 20:01:38.508033   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 20:01:38.530855   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 20:01:38.554688   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 20:01:38.579730   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0930 20:01:38.595907   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0930 20:01:38.611657   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0930 20:01:38.627976   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0930 20:01:38.644290   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0930 20:01:38.662490   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0930 20:01:38.678795   26315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0930 20:01:38.694165   26315 ssh_runner.go:195] Run: openssl version
	I0930 20:01:38.699696   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 20:01:38.709850   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:01:38.714078   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:01:38.714128   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:01:38.719944   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 20:01:38.730979   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 20:01:38.741564   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 20:01:38.746132   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 20:01:38.746193   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 20:01:38.751872   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 20:01:38.763738   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 20:01:38.775831   26315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 20:01:38.780819   26315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 20:01:38.780877   26315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 20:01:38.786554   26315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 20:01:38.797347   26315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 20:01:38.801341   26315 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 20:01:38.801400   26315 kubeadm.go:934] updating node {m03 192.168.39.227 8443 v1.31.1 crio true true} ...
	I0930 20:01:38.801503   26315 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-805293-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 20:01:38.801529   26315 kube-vip.go:115] generating kube-vip config ...
	I0930 20:01:38.801578   26315 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 20:01:38.819903   26315 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 20:01:38.819976   26315 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
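The manifest above is the kube-vip static pod that advertises the control-plane VIP 192.168.39.254 on port 8443; it is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines further down. The trimmed text/template sketch below shows how such a manifest can be rendered from the few per-cluster values; the template fields and their names are illustrative, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// Only the values that vary per cluster are templated here; the full manifest
// is the one printed in the log above.
const kubeVIPTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVIPTmpl))
	_ = t.Execute(os.Stdout, struct {
		Image, VIP string
		Port       int
	}{Image: "ghcr.io/kube-vip/kube-vip:v0.8.3", VIP: "192.168.39.254", Port: 8443})
}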
	I0930 20:01:38.820036   26315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 20:01:38.830324   26315 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0930 20:01:38.830375   26315 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0930 20:01:38.842272   26315 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0930 20:01:38.842334   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 20:01:38.842272   26315 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0930 20:01:38.842272   26315 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0930 20:01:38.842419   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 20:01:38.842439   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 20:01:38.842489   26315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 20:01:38.842540   26315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 20:01:38.861520   26315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0930 20:01:38.861559   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0930 20:01:38.861581   26315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 20:01:38.861631   26315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0930 20:01:38.861657   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0930 20:01:38.861689   26315 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 20:01:38.875651   26315 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0930 20:01:38.875695   26315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
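The binary.go lines above download kubelet, kubeadm and kubectl from dl.k8s.io, with each URL carrying a checksum=file:...sha256 hint so the transfer can be verified against the published digest. Below is a standard-library sketch of that verification step; the local file names are placeholders.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// verifyChecksum compares a downloaded binary against the hex digest in the
// corresponding .sha256 file (the first whitespace-separated field).
func verifyChecksum(binaryPath, sumPath string) error {
	sum, err := os.ReadFile(sumPath)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(sum))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file %s", sumPath)
	}
	want := fields[0]

	f, err := os.Open(binaryPath)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	if err := verifyChecksum("kubelet", "kubelet.sha256"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}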
	I0930 20:01:39.808722   26315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0930 20:01:39.819615   26315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0930 20:01:39.836414   26315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 20:01:39.853331   26315 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 20:01:39.869585   26315 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 20:01:39.873243   26315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 20:01:39.884957   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:01:40.006850   26315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:01:40.022775   26315 host.go:66] Checking if "ha-805293" exists ...
	I0930 20:01:40.023225   26315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:01:40.023284   26315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:01:40.040829   26315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I0930 20:01:40.041301   26315 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:01:40.041861   26315 main.go:141] libmachine: Using API Version  1
	I0930 20:01:40.041890   26315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:01:40.042247   26315 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:01:40.042469   26315 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:01:40.042649   26315 start.go:317] joinCluster: &{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:01:40.042812   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0930 20:01:40.042834   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:01:40.046258   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:01:40.046800   26315 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:01:40.046821   26315 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:01:40.047017   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:01:40.047286   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:01:40.047660   26315 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:01:40.047833   26315 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:01:40.209323   26315 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:01:40.209377   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1eegwc.d3x1pf4onbzzskk3 --discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-805293-m03 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443"
	I0930 20:02:03.693864   26315 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1eegwc.d3x1pf4onbzzskk3 --discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-805293-m03 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443": (23.484455167s)
	I0930 20:02:03.693901   26315 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0930 20:02:04.227863   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-805293-m03 minikube.k8s.io/updated_at=2024_09_30T20_02_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022 minikube.k8s.io/name=ha-805293 minikube.k8s.io/primary=false
	I0930 20:02:04.356839   26315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-805293-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0930 20:02:04.460804   26315 start.go:319] duration metric: took 24.418151981s to joinCluster
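Joining the third control plane is a single kubeadm join run over SSH, and ssh_runner times it (23.48s for the command, 24.42s for the whole joinCluster step). The stdlib sketch below shows running a long command under a deadline and recording its duration; the five-minute timeout is an assumption, and the command here is a harmless stand-in rather than the full join line with its token and CA hash.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// runTimed runs a shell command under a deadline and reports how long it took,
// mirroring what ssh_runner.go logs as "Completed: ... (23.484455167s)".
func runTimed(ctx context.Context, script string) (time.Duration, []byte, error) {
	start := time.Now()
	out, err := exec.CommandContext(ctx, "/bin/bash", "-c", script).CombinedOutput()
	return time.Since(start), out, err
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute) // assumed timeout
	defer cancel()

	// Stand-in for the kubeadm join command in the log; the real invocation runs
	// on the new node over SSH with the bootstrap token printed above.
	script := `echo "kubeadm join control-plane.minikube.internal:8443 --control-plane ..."`
	elapsed, out, err := runTimed(ctx, script)
	fmt.Printf("completed in %s, err=%v\n%s", elapsed, err, out)
}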
	I0930 20:02:04.460890   26315 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:02:04.461213   26315 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:02:04.462900   26315 out.go:177] * Verifying Kubernetes components...
	I0930 20:02:04.464457   26315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:02:04.710029   26315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:02:04.776170   26315 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:02:04.776405   26315 kapi.go:59] client config for ha-805293: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key", CAFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 20:02:04.776460   26315 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.3:8443
	I0930 20:02:04.776741   26315 node_ready.go:35] waiting up to 6m0s for node "ha-805293-m03" to be "Ready" ...
	I0930 20:02:04.776826   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:04.776836   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:04.776843   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:04.776849   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:04.780756   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:05.277289   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:05.277316   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:05.277328   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:05.277336   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:05.280839   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:05.777768   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:05.777793   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:05.777802   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:05.777810   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:05.781540   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:06.277679   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:06.277703   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:06.277713   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:06.277719   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:06.281145   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:06.777911   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:06.777937   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:06.777949   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:06.777955   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:06.781669   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:06.782486   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:07.277405   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:07.277428   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:07.277435   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:07.277438   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:07.281074   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:07.776952   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:07.776984   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:07.777005   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:07.777010   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:07.780689   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:08.277555   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:08.277576   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:08.277583   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:08.277587   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:08.283539   26315 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 20:02:08.777360   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:08.777381   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:08.777390   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:08.777394   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:08.780937   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:09.277721   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:09.277758   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:09.277768   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:09.277772   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:09.285233   26315 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 20:02:09.285662   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:09.776955   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:09.776977   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:09.776987   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:09.776992   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:09.781593   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:10.277015   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:10.277033   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:10.277045   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:10.277049   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:10.281851   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:10.777471   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:10.777502   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:10.777513   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:10.777518   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:10.780948   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:11.277959   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:11.277977   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:11.277985   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:11.277989   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:11.401106   26315 round_trippers.go:574] Response Status: 200 OK in 123 milliseconds
	I0930 20:02:11.401822   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:11.777418   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:11.777439   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:11.777447   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:11.777451   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:11.780577   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:12.277563   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:12.277586   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:12.277594   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:12.277600   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:12.280508   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:12.777614   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:12.777635   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:12.777644   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:12.777649   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:12.780589   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:13.277609   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:13.277647   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:13.277658   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:13.277664   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:13.280727   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:13.777657   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:13.777684   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:13.777692   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:13.777699   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:13.781417   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:13.781894   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:14.277640   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:14.277665   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:14.277674   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:14.277678   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:14.281731   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:14.777599   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:14.777622   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:14.777633   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:14.777638   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:14.780768   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:15.277270   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:15.277293   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:15.277302   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:15.277308   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:15.281504   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:15.777339   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:15.777363   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:15.777374   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:15.777380   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:15.780737   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:16.277475   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:16.277500   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:16.277508   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:16.277513   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:16.281323   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:16.281879   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:16.777003   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:16.777026   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:16.777033   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:16.777038   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:16.780794   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:17.277324   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:17.277345   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:17.277353   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:17.277362   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:17.281320   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:17.777286   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:17.777313   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:17.777323   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:17.777329   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:17.781420   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:18.277338   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:18.277361   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:18.277369   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:18.277374   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:18.280798   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:18.777933   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:18.777955   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:18.777963   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:18.777967   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:18.781895   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:18.782295   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:19.277039   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:19.277062   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:19.277070   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:19.277074   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:19.280872   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:19.776906   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:19.776931   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:19.776941   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:19.776945   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:19.789070   26315 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0930 20:02:20.277619   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:20.277645   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:20.277657   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:20.277664   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:20.281050   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:20.777108   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:20.777132   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:20.777140   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:20.777145   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:20.780896   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:21.277715   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:21.277737   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:21.277746   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:21.277750   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:21.281198   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:21.281766   26315 node_ready.go:53] node "ha-805293-m03" has status "Ready":"False"
	I0930 20:02:21.777774   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:21.777798   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:21.777812   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:21.777818   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:21.781858   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:22.277699   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:22.277726   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.277737   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.277741   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.281520   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:22.777562   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:22.777588   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.777599   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.777606   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.781172   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:22.781900   26315 node_ready.go:49] node "ha-805293-m03" has status "Ready":"True"
	I0930 20:02:22.781919   26315 node_ready.go:38] duration metric: took 18.00516261s for node "ha-805293-m03" to be "Ready" ...
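The block above is the node_ready wait: the same GET against /api/v1/nodes/ha-805293-m03 is repeated roughly every 500ms until the node's Ready condition reports True, which here took about 18s. A minimal client-go sketch of that pattern (the kubeconfig path is a placeholder; this is an illustration of the polling idea, not minikube's actual node_ready implementation):

    // node_ready_sketch.go — minimal sketch of the node-Ready poll seen above.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Placeholder kubeconfig path; the node name comes from the log above.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-805293-m03", metav1.GetOptions{})
    		if err == nil && nodeReady(node) {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between GETs
    	}
    	fmt.Println("timed out waiting for node to become Ready")
    }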
	I0930 20:02:22.781930   26315 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 20:02:22.782018   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:02:22.782034   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.782045   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.782050   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.788078   26315 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 20:02:22.794707   26315 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x7zjp" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.794792   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-x7zjp
	I0930 20:02:22.794802   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.794843   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.794851   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.798283   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:22.799034   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:22.799049   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.799059   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.799063   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.802512   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:22.803017   26315 pod_ready.go:93] pod "coredns-7c65d6cfc9-x7zjp" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:22.803034   26315 pod_ready.go:82] duration metric: took 8.303758ms for pod "coredns-7c65d6cfc9-x7zjp" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.803043   26315 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-z4bkv" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.803100   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-z4bkv
	I0930 20:02:22.803108   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.803115   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.803120   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.805708   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:22.806288   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:22.806303   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.806309   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.806314   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.808794   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:22.809193   26315 pod_ready.go:93] pod "coredns-7c65d6cfc9-z4bkv" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:22.809210   26315 pod_ready.go:82] duration metric: took 6.159698ms for pod "coredns-7c65d6cfc9-z4bkv" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.809221   26315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.809280   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-805293
	I0930 20:02:22.809291   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.809302   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.809310   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.811844   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:22.812420   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:22.812435   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.812441   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.812443   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.814572   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:22.815425   26315 pod_ready.go:93] pod "etcd-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:22.815446   26315 pod_ready.go:82] duration metric: took 6.21739ms for pod "etcd-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.815467   26315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.815571   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-805293-m02
	I0930 20:02:22.815579   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.815589   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.815596   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.819297   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:22.820054   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:22.820071   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.820078   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.820082   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.822946   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:22.823362   26315 pod_ready.go:93] pod "etcd-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:22.823377   26315 pod_ready.go:82] duration metric: took 7.903457ms for pod "etcd-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.823386   26315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:22.977860   26315 request.go:632] Waited for 154.412889ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-805293-m03
	I0930 20:02:22.977929   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-805293-m03
	I0930 20:02:22.977936   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:22.977947   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:22.977956   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:22.981875   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:23.177702   26315 request.go:632] Waited for 195.197886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:23.177761   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:23.177766   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:23.177774   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:23.177779   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:23.180898   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:23.181332   26315 pod_ready.go:93] pod "etcd-ha-805293-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:23.181350   26315 pod_ready.go:82] duration metric: took 357.955948ms for pod "etcd-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
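The "Waited for ... due to client-side throttling, not priority and fairness" lines are emitted by client-go's own rate limiter once the burst of GETs exceeds the client's QPS/Burst budget; the server-side API Priority and Fairness layer is not involved. A sketch of where those knobs live (the raised values are illustrative, not what minikube configures):

    // throttle_sketch.go — client-go's client-side rate limiting, the source of
    // the "Waited for ... due to client-side throttling" messages above.
    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	// Defaults are roughly QPS=5, Burst=10; bursts of requests beyond that are
    	// delayed on the client and logged as throttling. Raising them (illustrative
    	// values) trades fewer client-side waits for more load on the apiserver.
    	cfg.QPS = 50
    	cfg.Burst = 100
    	_ = kubernetes.NewForConfigOrDie(cfg)
    }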
	I0930 20:02:23.181366   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:23.377609   26315 request.go:632] Waited for 196.161944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293
	I0930 20:02:23.377673   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293
	I0930 20:02:23.377681   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:23.377691   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:23.377697   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:23.381213   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:23.578424   26315 request.go:632] Waited for 196.368077ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:23.578500   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:23.578506   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:23.578514   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:23.578528   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:23.581799   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:23.582390   26315 pod_ready.go:93] pod "kube-apiserver-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:23.582406   26315 pod_ready.go:82] duration metric: took 401.034594ms for pod "kube-apiserver-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:23.582416   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:23.778543   26315 request.go:632] Waited for 196.052617ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293-m02
	I0930 20:02:23.778624   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293-m02
	I0930 20:02:23.778633   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:23.778643   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:23.778653   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:23.781828   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:23.977855   26315 request.go:632] Waited for 195.382083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:23.977924   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:23.977944   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:23.977959   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:23.977965   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:23.981372   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:23.982066   26315 pod_ready.go:93] pod "kube-apiserver-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:23.982087   26315 pod_ready.go:82] duration metric: took 399.664005ms for pod "kube-apiserver-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:23.982100   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:24.178123   26315 request.go:632] Waited for 195.960731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293-m03
	I0930 20:02:24.178196   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-805293-m03
	I0930 20:02:24.178203   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:24.178211   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:24.178236   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:24.182112   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:24.378558   26315 request.go:632] Waited for 195.433009ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:24.378638   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:24.378643   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:24.378650   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:24.378656   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:24.382291   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:24.382917   26315 pod_ready.go:93] pod "kube-apiserver-ha-805293-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:24.382938   26315 pod_ready.go:82] duration metric: took 400.829354ms for pod "kube-apiserver-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:24.382948   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:24.577887   26315 request.go:632] Waited for 194.863294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293
	I0930 20:02:24.577956   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293
	I0930 20:02:24.577963   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:24.577971   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:24.577978   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:24.581564   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:24.778150   26315 request.go:632] Waited for 195.36459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:24.778203   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:24.778208   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:24.778216   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:24.778221   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:24.781210   26315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 20:02:24.781808   26315 pod_ready.go:93] pod "kube-controller-manager-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:24.781826   26315 pod_ready.go:82] duration metric: took 398.871488ms for pod "kube-controller-manager-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:24.781839   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:24.977967   26315 request.go:632] Waited for 196.028192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293-m02
	I0930 20:02:24.978039   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293-m02
	I0930 20:02:24.978046   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:24.978055   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:24.978062   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:24.981635   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:25.177628   26315 request.go:632] Waited for 195.118197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:25.177702   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:25.177707   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:25.177715   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:25.177722   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:25.184032   26315 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 20:02:25.185117   26315 pod_ready.go:93] pod "kube-controller-manager-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:25.185151   26315 pod_ready.go:82] duration metric: took 403.303748ms for pod "kube-controller-manager-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:25.185168   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:25.378088   26315 request.go:632] Waited for 192.829504ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293-m03
	I0930 20:02:25.378247   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-805293-m03
	I0930 20:02:25.378262   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:25.378274   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:25.378284   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:25.382197   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:25.578183   26315 request.go:632] Waited for 195.374549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:25.578237   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:25.578241   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:25.578249   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:25.578273   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:25.581302   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:25.581967   26315 pod_ready.go:93] pod "kube-controller-manager-ha-805293-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:25.581990   26315 pod_ready.go:82] duration metric: took 396.812632ms for pod "kube-controller-manager-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:25.582004   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6gnt4" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:25.778066   26315 request.go:632] Waited for 195.961131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gnt4
	I0930 20:02:25.778120   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gnt4
	I0930 20:02:25.778125   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:25.778132   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:25.778136   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:25.781487   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:25.977671   26315 request.go:632] Waited for 195.30691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:25.977755   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:25.977762   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:25.977769   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:25.977775   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:25.981674   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:25.982338   26315 pod_ready.go:93] pod "kube-proxy-6gnt4" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:25.982360   26315 pod_ready.go:82] duration metric: took 400.349266ms for pod "kube-proxy-6gnt4" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:25.982370   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b9cpp" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:26.178400   26315 request.go:632] Waited for 195.958284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b9cpp
	I0930 20:02:26.178455   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b9cpp
	I0930 20:02:26.178460   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:26.178468   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:26.178474   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:26.181740   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:26.377643   26315 request.go:632] Waited for 195.301602ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:26.377715   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:26.377720   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:26.377730   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:26.377736   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:26.381534   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:26.382336   26315 pod_ready.go:93] pod "kube-proxy-b9cpp" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:26.382356   26315 pod_ready.go:82] duration metric: took 399.97947ms for pod "kube-proxy-b9cpp" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:26.382369   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vptrg" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:26.578135   26315 request.go:632] Waited for 195.696435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vptrg
	I0930 20:02:26.578222   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vptrg
	I0930 20:02:26.578231   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:26.578239   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:26.578246   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:26.581969   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:26.778092   26315 request.go:632] Waited for 195.270119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:26.778175   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:26.778183   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:26.778194   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:26.778204   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:26.781951   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:26.782497   26315 pod_ready.go:93] pod "kube-proxy-vptrg" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:26.782530   26315 pod_ready.go:82] duration metric: took 400.140578ms for pod "kube-proxy-vptrg" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:26.782542   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:26.978290   26315 request.go:632] Waited for 195.637761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293
	I0930 20:02:26.978361   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293
	I0930 20:02:26.978368   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:26.978377   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:26.978381   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:26.982459   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:27.178413   26315 request.go:632] Waited for 195.235139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:27.178464   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293
	I0930 20:02:27.178469   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:27.178476   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:27.178479   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:27.182089   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:27.182674   26315 pod_ready.go:93] pod "kube-scheduler-ha-805293" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:27.182695   26315 pod_ready.go:82] duration metric: took 400.147259ms for pod "kube-scheduler-ha-805293" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:27.182706   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:27.377673   26315 request.go:632] Waited for 194.89364ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293-m02
	I0930 20:02:27.377752   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293-m02
	I0930 20:02:27.377758   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:27.377765   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:27.377769   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:27.381356   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:27.578554   26315 request.go:632] Waited for 196.443432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:27.578622   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m02
	I0930 20:02:27.578630   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:27.578641   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:27.578647   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:27.582325   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:27.582942   26315 pod_ready.go:93] pod "kube-scheduler-ha-805293-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:27.582965   26315 pod_ready.go:82] duration metric: took 400.251961ms for pod "kube-scheduler-ha-805293-m02" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:27.582978   26315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:27.778055   26315 request.go:632] Waited for 195.008545ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293-m03
	I0930 20:02:27.778129   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-805293-m03
	I0930 20:02:27.778135   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:27.778142   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:27.778147   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:27.782023   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:27.977660   26315 request.go:632] Waited for 194.950522ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:27.977742   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/ha-805293-m03
	I0930 20:02:27.977752   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:27.977762   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:27.977769   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:27.981329   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:27.981878   26315 pod_ready.go:93] pod "kube-scheduler-ha-805293-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 20:02:27.981905   26315 pod_ready.go:82] duration metric: took 398.919132ms for pod "kube-scheduler-ha-805293-m03" in "kube-system" namespace to be "Ready" ...
	I0930 20:02:27.981920   26315 pod_ready.go:39] duration metric: took 5.199971217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 20:02:27.981939   26315 api_server.go:52] waiting for apiserver process to appear ...
	I0930 20:02:27.982009   26315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 20:02:27.999589   26315 api_server.go:72] duration metric: took 23.538667198s to wait for apiserver process to appear ...
	I0930 20:02:27.999616   26315 api_server.go:88] waiting for apiserver healthz status ...
	I0930 20:02:27.999635   26315 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I0930 20:02:28.006690   26315 api_server.go:279] https://192.168.39.3:8443/healthz returned 200:
	ok
	I0930 20:02:28.006768   26315 round_trippers.go:463] GET https://192.168.39.3:8443/version
	I0930 20:02:28.006788   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:28.006799   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:28.006804   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:28.008072   26315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0930 20:02:28.008144   26315 api_server.go:141] control plane version: v1.31.1
	I0930 20:02:28.008163   26315 api_server.go:131] duration metric: took 8.540356ms to wait for apiserver health ...
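The healthz probe and the follow-up GET /version correspond to two small client-go calls; a sketch, reusing the same placeholder kubeconfig as the earlier sketches:

    // healthz_sketch.go — the /healthz and /version checks logged above.
    package main

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Raw GET against /healthz; a healthy apiserver answers 200 with body "ok".
    	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("healthz: %s\n", body)

    	// Equivalent of the GET /version above (control plane version: v1.31.1).
    	v, err := client.Discovery().ServerVersion()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("control plane version:", v.GitVersion)
    }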
	I0930 20:02:28.008173   26315 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 20:02:28.178582   26315 request.go:632] Waited for 170.336703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:02:28.178653   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:02:28.178673   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:28.178683   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:28.178688   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:28.186196   26315 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 20:02:28.192615   26315 system_pods.go:59] 24 kube-system pods found
	I0930 20:02:28.192646   26315 system_pods.go:61] "coredns-7c65d6cfc9-x7zjp" [b5b20ed2-1d94-49b9-ab9e-17e27d1012d0] Running
	I0930 20:02:28.192651   26315 system_pods.go:61] "coredns-7c65d6cfc9-z4bkv" [c6ba0288-138e-4690-a68d-6d6378e28deb] Running
	I0930 20:02:28.192656   26315 system_pods.go:61] "etcd-ha-805293" [399ae7f6-cec9-4e8d-bda2-6c85dbcc5613] Running
	I0930 20:02:28.192661   26315 system_pods.go:61] "etcd-ha-805293-m02" [06ff461f-0ed1-4010-bcf7-1e82e4a589eb] Running
	I0930 20:02:28.192665   26315 system_pods.go:61] "etcd-ha-805293-m03" [c87078d8-ee99-4a5f-9258-cf5d7e658388] Running
	I0930 20:02:28.192668   26315 system_pods.go:61] "kindnet-lfldt" [62cfaae6-e635-4ba4-a0db-77d008d12706] Running
	I0930 20:02:28.192671   26315 system_pods.go:61] "kindnet-qrhb8" [852c4080-9210-47bb-a06a-d1b8bcff580d] Running
	I0930 20:02:28.192675   26315 system_pods.go:61] "kindnet-slhtm" [a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88] Running
	I0930 20:02:28.192679   26315 system_pods.go:61] "kube-apiserver-ha-805293" [e975ca94-0069-4dfc-bc42-fa14fff226d5] Running
	I0930 20:02:28.192682   26315 system_pods.go:61] "kube-apiserver-ha-805293-m02" [c0f6d06d-f2d3-4796-ba43-16db58da16f7] Running
	I0930 20:02:28.192687   26315 system_pods.go:61] "kube-apiserver-ha-805293-m03" [6fb5a285-7f35-4eb2-b028-6bd9fcfd21fe] Running
	I0930 20:02:28.192691   26315 system_pods.go:61] "kube-controller-manager-ha-805293" [01616da3-61eb-494b-a55c-28acaa308938] Running
	I0930 20:02:28.192695   26315 system_pods.go:61] "kube-controller-manager-ha-805293-m02" [14e035c1-fd94-43ab-aa98-3f20108eba57] Running
	I0930 20:02:28.192698   26315 system_pods.go:61] "kube-controller-manager-ha-805293-m03" [35d67e4a-f434-49df-8fb9-c6fcc725d8ff] Running
	I0930 20:02:28.192702   26315 system_pods.go:61] "kube-proxy-6gnt4" [a90b0c3f-e9c3-4cb9-8773-8253bd72ab51] Running
	I0930 20:02:28.192706   26315 system_pods.go:61] "kube-proxy-b9cpp" [c828ff6a-6cbb-4a29-84bc-118522687da8] Running
	I0930 20:02:28.192710   26315 system_pods.go:61] "kube-proxy-vptrg" [324c92ea-b82f-4efa-b63c-4c590bbf214d] Running
	I0930 20:02:28.192714   26315 system_pods.go:61] "kube-scheduler-ha-805293" [fbff9dea-1599-43ab-bb92-df8c5231bb87] Running
	I0930 20:02:28.192720   26315 system_pods.go:61] "kube-scheduler-ha-805293-m02" [9e69f915-83ac-48de-9bd6-3d245a2e82be] Running
	I0930 20:02:28.192723   26315 system_pods.go:61] "kube-scheduler-ha-805293-m03" [34e2edf8-ca25-4a7c-a626-ac037b40b905] Running
	I0930 20:02:28.192729   26315 system_pods.go:61] "kube-vip-ha-805293" [9c629f9e-1b42-4680-9fd8-2dae4cec07f8] Running
	I0930 20:02:28.192732   26315 system_pods.go:61] "kube-vip-ha-805293-m02" [ec99538b-4f84-4078-b64d-23086cbf2c45] Running
	I0930 20:02:28.192735   26315 system_pods.go:61] "kube-vip-ha-805293-m03" [fcc5a165-5430-45d3-8ec7-fbdf5adc7e20] Running
	I0930 20:02:28.192738   26315 system_pods.go:61] "storage-provisioner" [1912fdf8-d789-4ba9-99ff-c87ccbf330ec] Running
	I0930 20:02:28.192747   26315 system_pods.go:74] duration metric: took 184.564973ms to wait for pod list to return data ...
	I0930 20:02:28.192756   26315 default_sa.go:34] waiting for default service account to be created ...
	I0930 20:02:28.378324   26315 request.go:632] Waited for 185.488908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/default/serviceaccounts
	I0930 20:02:28.378382   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/default/serviceaccounts
	I0930 20:02:28.378387   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:28.378394   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:28.378398   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:28.382352   26315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 20:02:28.382515   26315 default_sa.go:45] found service account: "default"
	I0930 20:02:28.382532   26315 default_sa.go:55] duration metric: took 189.767008ms for default service account to be created ...
	I0930 20:02:28.382546   26315 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 20:02:28.578010   26315 request.go:632] Waited for 195.370903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:02:28.578070   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I0930 20:02:28.578076   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:28.578083   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:28.578087   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:28.584177   26315 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 20:02:28.592272   26315 system_pods.go:86] 24 kube-system pods found
	I0930 20:02:28.592310   26315 system_pods.go:89] "coredns-7c65d6cfc9-x7zjp" [b5b20ed2-1d94-49b9-ab9e-17e27d1012d0] Running
	I0930 20:02:28.592319   26315 system_pods.go:89] "coredns-7c65d6cfc9-z4bkv" [c6ba0288-138e-4690-a68d-6d6378e28deb] Running
	I0930 20:02:28.592330   26315 system_pods.go:89] "etcd-ha-805293" [399ae7f6-cec9-4e8d-bda2-6c85dbcc5613] Running
	I0930 20:02:28.592336   26315 system_pods.go:89] "etcd-ha-805293-m02" [06ff461f-0ed1-4010-bcf7-1e82e4a589eb] Running
	I0930 20:02:28.592341   26315 system_pods.go:89] "etcd-ha-805293-m03" [c87078d8-ee99-4a5f-9258-cf5d7e658388] Running
	I0930 20:02:28.592346   26315 system_pods.go:89] "kindnet-lfldt" [62cfaae6-e635-4ba4-a0db-77d008d12706] Running
	I0930 20:02:28.592351   26315 system_pods.go:89] "kindnet-qrhb8" [852c4080-9210-47bb-a06a-d1b8bcff580d] Running
	I0930 20:02:28.592357   26315 system_pods.go:89] "kindnet-slhtm" [a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88] Running
	I0930 20:02:28.592363   26315 system_pods.go:89] "kube-apiserver-ha-805293" [e975ca94-0069-4dfc-bc42-fa14fff226d5] Running
	I0930 20:02:28.592368   26315 system_pods.go:89] "kube-apiserver-ha-805293-m02" [c0f6d06d-f2d3-4796-ba43-16db58da16f7] Running
	I0930 20:02:28.592374   26315 system_pods.go:89] "kube-apiserver-ha-805293-m03" [6fb5a285-7f35-4eb2-b028-6bd9fcfd21fe] Running
	I0930 20:02:28.592381   26315 system_pods.go:89] "kube-controller-manager-ha-805293" [01616da3-61eb-494b-a55c-28acaa308938] Running
	I0930 20:02:28.592388   26315 system_pods.go:89] "kube-controller-manager-ha-805293-m02" [14e035c1-fd94-43ab-aa98-3f20108eba57] Running
	I0930 20:02:28.592397   26315 system_pods.go:89] "kube-controller-manager-ha-805293-m03" [35d67e4a-f434-49df-8fb9-c6fcc725d8ff] Running
	I0930 20:02:28.592404   26315 system_pods.go:89] "kube-proxy-6gnt4" [a90b0c3f-e9c3-4cb9-8773-8253bd72ab51] Running
	I0930 20:02:28.592410   26315 system_pods.go:89] "kube-proxy-b9cpp" [c828ff6a-6cbb-4a29-84bc-118522687da8] Running
	I0930 20:02:28.592416   26315 system_pods.go:89] "kube-proxy-vptrg" [324c92ea-b82f-4efa-b63c-4c590bbf214d] Running
	I0930 20:02:28.592422   26315 system_pods.go:89] "kube-scheduler-ha-805293" [fbff9dea-1599-43ab-bb92-df8c5231bb87] Running
	I0930 20:02:28.592430   26315 system_pods.go:89] "kube-scheduler-ha-805293-m02" [9e69f915-83ac-48de-9bd6-3d245a2e82be] Running
	I0930 20:02:28.592436   26315 system_pods.go:89] "kube-scheduler-ha-805293-m03" [34e2edf8-ca25-4a7c-a626-ac037b40b905] Running
	I0930 20:02:28.592442   26315 system_pods.go:89] "kube-vip-ha-805293" [9c629f9e-1b42-4680-9fd8-2dae4cec07f8] Running
	I0930 20:02:28.592450   26315 system_pods.go:89] "kube-vip-ha-805293-m02" [ec99538b-4f84-4078-b64d-23086cbf2c45] Running
	I0930 20:02:28.592455   26315 system_pods.go:89] "kube-vip-ha-805293-m03" [fcc5a165-5430-45d3-8ec7-fbdf5adc7e20] Running
	I0930 20:02:28.592461   26315 system_pods.go:89] "storage-provisioner" [1912fdf8-d789-4ba9-99ff-c87ccbf330ec] Running
	I0930 20:02:28.592472   26315 system_pods.go:126] duration metric: took 209.917591ms to wait for k8s-apps to be running ...
	I0930 20:02:28.592485   26315 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 20:02:28.592534   26315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 20:02:28.608637   26315 system_svc.go:56] duration metric: took 16.145321ms WaitForService to wait for kubelet
	I0930 20:02:28.608674   26315 kubeadm.go:582] duration metric: took 24.147753749s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 20:02:28.608696   26315 node_conditions.go:102] verifying NodePressure condition ...
	I0930 20:02:28.778132   26315 request.go:632] Waited for 169.34168ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes
	I0930 20:02:28.778186   26315 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes
	I0930 20:02:28.778191   26315 round_trippers.go:469] Request Headers:
	I0930 20:02:28.778198   26315 round_trippers.go:473]     Accept: application/json, */*
	I0930 20:02:28.778202   26315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 20:02:28.782435   26315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 20:02:28.783582   26315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:02:28.783605   26315 node_conditions.go:123] node cpu capacity is 2
	I0930 20:02:28.783617   26315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:02:28.783621   26315 node_conditions.go:123] node cpu capacity is 2
	I0930 20:02:28.783625   26315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:02:28.783628   26315 node_conditions.go:123] node cpu capacity is 2
	I0930 20:02:28.783633   26315 node_conditions.go:105] duration metric: took 174.931399ms to run NodePressure ...
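The NodePressure verification simply lists the nodes and reads their reported capacity, which is where the 17734596Ki ephemeral storage and 2-CPU figures above come from. A sketch of reading the same fields (placeholder kubeconfig as before):

    // node_capacity_sketch.go — reads the per-node capacity fields summarized above.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    	}
    }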
	I0930 20:02:28.783649   26315 start.go:241] waiting for startup goroutines ...
	I0930 20:02:28.783678   26315 start.go:255] writing updated cluster config ...
	I0930 20:02:28.783989   26315 ssh_runner.go:195] Run: rm -f paused
	I0930 20:02:28.838018   26315 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 20:02:28.840509   26315 out.go:177] * Done! kubectl is now configured to use "ha-805293" cluster and "default" namespace by default
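The closing line means kubectl's current context now points at the "ha-805293" cluster. A sketch of resolving that context programmatically with clientcmd (the context name is from the log; kubeconfig discovery follows the default loading rules):

    // context_sketch.go — builds a rest.Config for the "ha-805293" context that
    // the log says is now kubectl's default.
    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	rules := clientcmd.NewDefaultClientConfigLoadingRules() // honors $KUBECONFIG / ~/.kube/config
    	overrides := &clientcmd.ConfigOverrides{CurrentContext: "ha-805293"}
    	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("API server:", cfg.Host)
    }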
	
	
	==> CRI-O <==
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.061665475Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726787061632418,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1025a29c-0325-4134-9a02-484b9261334c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.062466394Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55259a91-dbf9-44c7-959c-24c830687297 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.062532498Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55259a91-dbf9-44c7-959c-24c830687297 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.062775936Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10ee59c77c769b646a6f94ef88076d89d99a5138229c27ab2ecd6eedc1ea0137,PodSandboxId:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727726553788768842,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b,PodSandboxId:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414310017018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d01ed71d852eed61bb80348ffe7fb51d168d95e1306c1563c1f48e5dbbf8f2c,PodSandboxId:2a39bd6449f5ae769d104fbeb8e59e2f8144520dfc21ce04f986400da9c5cf45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727726414272318094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c,PodSandboxId:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414250119749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-13
8e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa,PodSandboxId:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17277264
02286671649,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088,PodSandboxId:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727726402007379257,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8e1f537ce941dd5174a539d9c52bcdc043499fbf92875cdf6ed4fc819c4dbe,PodSandboxId:1fd2dbf5f5af033b5a3e52b79c474bc1a4f59060eca81c998f7ec1a08b0bd020,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727726392774120477,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ab114a2582827f884939bc3a1a2f15f,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463,PodSandboxId:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727726390313369486,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9fbbe2017dac31afa6b99397b35147479d921bd1c28368d0863e7deba96963,PodSandboxId:6fc84ff2f4f9e09491da5bb8f4fa755e40a60c0bec559ecff99973cd8d2fbbf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727726390327177630,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c,PodSandboxId:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727726390230461135,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994c927aa147aaacb19c3dc9b54178374731ce435295e01ceb9dbb1854a78f78,PodSandboxId:ec25e9867db7c44002a733caaf53a3e32f3ab4c28faa3767e1bca353d80692e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727726390173703617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55259a91-dbf9-44c7-959c-24c830687297 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.100045619Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fe96b0b1-88d2-448a-bd65-572d73d31ff7 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.100134858Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe96b0b1-88d2-448a-bd65-572d73d31ff7 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.101269930Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4eff58f0-f5c6-46df-9b05-9f3848b142b3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.101791761Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726787101769255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4eff58f0-f5c6-46df-9b05-9f3848b142b3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.102197709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4d8efbfe-8f3f-4dd4-9d4c-dc642bb2b8dc name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.102249709Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4d8efbfe-8f3f-4dd4-9d4c-dc642bb2b8dc name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.102536807Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10ee59c77c769b646a6f94ef88076d89d99a5138229c27ab2ecd6eedc1ea0137,PodSandboxId:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727726553788768842,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b,PodSandboxId:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414310017018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d01ed71d852eed61bb80348ffe7fb51d168d95e1306c1563c1f48e5dbbf8f2c,PodSandboxId:2a39bd6449f5ae769d104fbeb8e59e2f8144520dfc21ce04f986400da9c5cf45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727726414272318094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c,PodSandboxId:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414250119749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-13
8e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa,PodSandboxId:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17277264
02286671649,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088,PodSandboxId:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727726402007379257,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8e1f537ce941dd5174a539d9c52bcdc043499fbf92875cdf6ed4fc819c4dbe,PodSandboxId:1fd2dbf5f5af033b5a3e52b79c474bc1a4f59060eca81c998f7ec1a08b0bd020,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727726392774120477,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ab114a2582827f884939bc3a1a2f15f,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463,PodSandboxId:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727726390313369486,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9fbbe2017dac31afa6b99397b35147479d921bd1c28368d0863e7deba96963,PodSandboxId:6fc84ff2f4f9e09491da5bb8f4fa755e40a60c0bec559ecff99973cd8d2fbbf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727726390327177630,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c,PodSandboxId:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727726390230461135,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994c927aa147aaacb19c3dc9b54178374731ce435295e01ceb9dbb1854a78f78,PodSandboxId:ec25e9867db7c44002a733caaf53a3e32f3ab4c28faa3767e1bca353d80692e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727726390173703617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4d8efbfe-8f3f-4dd4-9d4c-dc642bb2b8dc name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.140184864Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc62406f-9466-45bf-a404-35ecc259970e name=/runtime.v1.RuntimeService/Version
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.140268696Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc62406f-9466-45bf-a404-35ecc259970e name=/runtime.v1.RuntimeService/Version
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.141222229Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9c0419c9-1005-4e98-ad02-fc33ef87c980 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.141700610Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726787141679692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c0419c9-1005-4e98-ad02-fc33ef87c980 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.142185906Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d9024a8-6fe9-49a7-a05b-0854520f9eca name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.142253804Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d9024a8-6fe9-49a7-a05b-0854520f9eca name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.142569268Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10ee59c77c769b646a6f94ef88076d89d99a5138229c27ab2ecd6eedc1ea0137,PodSandboxId:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727726553788768842,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b,PodSandboxId:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414310017018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d01ed71d852eed61bb80348ffe7fb51d168d95e1306c1563c1f48e5dbbf8f2c,PodSandboxId:2a39bd6449f5ae769d104fbeb8e59e2f8144520dfc21ce04f986400da9c5cf45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727726414272318094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c,PodSandboxId:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414250119749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-13
8e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa,PodSandboxId:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17277264
02286671649,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088,PodSandboxId:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727726402007379257,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8e1f537ce941dd5174a539d9c52bcdc043499fbf92875cdf6ed4fc819c4dbe,PodSandboxId:1fd2dbf5f5af033b5a3e52b79c474bc1a4f59060eca81c998f7ec1a08b0bd020,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727726392774120477,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ab114a2582827f884939bc3a1a2f15f,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463,PodSandboxId:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727726390313369486,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9fbbe2017dac31afa6b99397b35147479d921bd1c28368d0863e7deba96963,PodSandboxId:6fc84ff2f4f9e09491da5bb8f4fa755e40a60c0bec559ecff99973cd8d2fbbf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727726390327177630,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c,PodSandboxId:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727726390230461135,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994c927aa147aaacb19c3dc9b54178374731ce435295e01ceb9dbb1854a78f78,PodSandboxId:ec25e9867db7c44002a733caaf53a3e32f3ab4c28faa3767e1bca353d80692e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727726390173703617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d9024a8-6fe9-49a7-a05b-0854520f9eca name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.181064547Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fc7a6a63-51c0-4bdf-9355-903f48e05431 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.181156086Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fc7a6a63-51c0-4bdf-9355-903f48e05431 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.182187804Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac18de8c-abee-4521-b65a-de7a80b02c17 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.182751620Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726787182727321,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac18de8c-abee-4521-b65a-de7a80b02c17 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.183245071Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=88417dea-a473-4723-a38d-9c3d6ead49aa name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.183381554Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=88417dea-a473-4723-a38d-9c3d6ead49aa name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:06:27 ha-805293 crio[655]: time="2024-09-30 20:06:27.183630405Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10ee59c77c769b646a6f94ef88076d89d99a5138229c27ab2ecd6eedc1ea0137,PodSandboxId:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727726553788768842,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b,PodSandboxId:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414310017018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d01ed71d852eed61bb80348ffe7fb51d168d95e1306c1563c1f48e5dbbf8f2c,PodSandboxId:2a39bd6449f5ae769d104fbeb8e59e2f8144520dfc21ce04f986400da9c5cf45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727726414272318094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c,PodSandboxId:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727726414250119749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-13
8e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa,PodSandboxId:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17277264
02286671649,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088,PodSandboxId:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727726402007379257,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8e1f537ce941dd5174a539d9c52bcdc043499fbf92875cdf6ed4fc819c4dbe,PodSandboxId:1fd2dbf5f5af033b5a3e52b79c474bc1a4f59060eca81c998f7ec1a08b0bd020,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727726392774120477,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ab114a2582827f884939bc3a1a2f15f,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463,PodSandboxId:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727726390313369486,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9fbbe2017dac31afa6b99397b35147479d921bd1c28368d0863e7deba96963,PodSandboxId:6fc84ff2f4f9e09491da5bb8f4fa755e40a60c0bec559ecff99973cd8d2fbbf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727726390327177630,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c,PodSandboxId:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727726390230461135,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994c927aa147aaacb19c3dc9b54178374731ce435295e01ceb9dbb1854a78f78,PodSandboxId:ec25e9867db7c44002a733caaf53a3e32f3ab4c28faa3767e1bca353d80692e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727726390173703617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=88417dea-a473-4723-a38d-9c3d6ead49aa name=/runtime.v1.RuntimeService/ListContainers
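
	The Version/ImageFsInfoRequest/ListContainers exchanges above are routine CRI polling of the crio socket, repeated every few tens of milliseconds with identical payloads. For manual debugging the same data can be pulled from inside the guest with crictl; the following is only a sketch, assuming the minikube profile name matches the node name ha-805293 seen in the log, that crictl is available in the VM, and using the runtime endpoint from the cri-socket annotation shown further below:
	
	  # runtime name/version, equivalent to the RuntimeService/Version responses above
	  minikube -p ha-805293 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version"
	  # image filesystem usage, equivalent to the ImageService/ImageFsInfo responses
	  minikube -p ha-805293 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo"
	  # unfiltered container list, equivalent to the ListContainers responses
	  minikube -p ha-805293 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a"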
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	10ee59c77c769       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   a8d4349f6e0b0       busybox-7dff88458-r27jf
	8c540e4668f99       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   f95d30afc0491       coredns-7c65d6cfc9-x7zjp
	6d01ed71d852e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   2a39bd6449f5a       storage-provisioner
	beba42a2bf035       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   626fdaeb1b142       coredns-7c65d6cfc9-z4bkv
	e28b6781ed449       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   36a3293339cae       kindnet-slhtm
	cd73b6dc43348       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   27a0913ae182a       kube-proxy-6gnt4
	5e8e1f537ce94       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   1fd2dbf5f5af0       kube-vip-ha-805293
	0e9fbbe2017da       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   6fc84ff2f4f9e       kube-controller-manager-ha-805293
	9b8d5baa6998a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   73733467afdd9       kube-scheduler-ha-805293
	219dff1c43cd4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   bff718c807eb7       etcd-ha-805293
	994c927aa147a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   ec25e9867db7c       kube-apiserver-ha-805293
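
	Each row in this table corresponds to a pod on this control-plane node, and the truncated IMAGE values are prefixes of the image IDs in the ListContainers payloads above. A hedged cross-check from the host, assuming the kubectl context carries the same profile name ha-805293:
	
	  # pods across all namespaces with node placement, to compare against the POD column
	  kubectl --context ha-805293 get pods -A -o wide
	  # full image IDs and tags known to crio, to expand the truncated IMAGE values
	  minikube -p ha-805293 ssh "sudo crictl images"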
	
	
	==> coredns [8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b] <==
	[INFO] 10.244.0.4:54656 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002122445s
	[INFO] 10.244.1.2:43325 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000298961s
	[INFO] 10.244.1.2:50368 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000261008s
	[INFO] 10.244.1.2:34858 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000270623s
	[INFO] 10.244.1.2:59975 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000192447s
	[INFO] 10.244.2.2:37486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233576s
	[INFO] 10.244.2.2:40647 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002177996s
	[INFO] 10.244.2.2:39989 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000196915s
	[INFO] 10.244.2.2:42105 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001612348s
	[INFO] 10.244.2.2:42498 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180331s
	[INFO] 10.244.2.2:34873 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000262642s
	[INFO] 10.244.0.4:55282 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002337707s
	[INFO] 10.244.0.4:52721 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082276s
	[INFO] 10.244.0.4:33773 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001975703s
	[INFO] 10.244.0.4:44087 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095899s
	[INFO] 10.244.1.2:44456 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189431s
	[INFO] 10.244.1.2:52532 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112979s
	[INFO] 10.244.1.2:39707 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095712s
	[INFO] 10.244.2.2:42900 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101241s
	[INFO] 10.244.0.4:56608 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134276s
	[INFO] 10.244.1.2:35939 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00031266s
	[INFO] 10.244.1.2:48131 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000196792s
	[INFO] 10.244.2.2:40732 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000154649s
	[INFO] 10.244.0.4:51180 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000206094s
	[INFO] 10.244.0.4:36921 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000118718s
	
	
	==> coredns [beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c] <==
	[INFO] 10.244.0.4:43879 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000219235s
	[INFO] 10.244.1.2:54557 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005324153s
	[INFO] 10.244.1.2:59221 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00021778s
	[INFO] 10.244.1.2:56069 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0044481s
	[INFO] 10.244.1.2:50386 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00023413s
	[INFO] 10.244.2.2:46506 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103313s
	[INFO] 10.244.2.2:41909 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000177677s
	[INFO] 10.244.0.4:57981 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180642s
	[INFO] 10.244.0.4:42071 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100781s
	[INFO] 10.244.0.4:53066 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079995s
	[INFO] 10.244.0.4:54192 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095317s
	[INFO] 10.244.1.2:42705 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147435s
	[INFO] 10.244.2.2:42448 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014108s
	[INFO] 10.244.2.2:58687 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152745s
	[INFO] 10.244.2.2:59433 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159734s
	[INFO] 10.244.0.4:34822 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086009s
	[INFO] 10.244.0.4:46188 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067594s
	[INFO] 10.244.0.4:33829 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130532s
	[INFO] 10.244.1.2:56575 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000557946s
	[INFO] 10.244.1.2:41726 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145733s
	[INFO] 10.244.2.2:56116 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108892s
	[INFO] 10.244.2.2:58958 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000075413s
	[INFO] 10.244.2.2:42001 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077659s
	[INFO] 10.244.0.4:53905 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091303s
	[INFO] 10.244.0.4:41906 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000098967s
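
	The coredns entries in both replicas are ordinary in-cluster lookups: A/AAAA queries for kubernetes.default and its search-path expansions, the minikube-specific host.minikube.internal record, and reverse PTR checks. They can be reproduced with a throwaway pod; a minimal sketch reusing the busybox image already present in this log (the pod name dns-probe is made up for illustration, and the context name is assumed to match the profile):
	
	  # forward lookup that produces the "A IN kubernetes.default.svc.cluster.local" lines
	  kubectl --context ha-805293 run dns-probe --rm -it --restart=Never \
	    --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default.svc.cluster.local
	  # minikube host record seen in the log
	  kubectl --context ha-805293 run dns-probe --rm -it --restart=Never \
	    --image=gcr.io/k8s-minikube/busybox -- nslookup host.minikube.internal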
	
	
	==> describe nodes <==
	Name:               ha-805293
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-805293
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=ha-805293
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T19_59_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 19:59:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-805293
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:06:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:03:01 +0000   Mon, 30 Sep 2024 19:59:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:03:01 +0000   Mon, 30 Sep 2024 19:59:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:03:01 +0000   Mon, 30 Sep 2024 19:59:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:03:01 +0000   Mon, 30 Sep 2024 20:00:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    ha-805293
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 866f17ca2f8945bb8c8d7336ea64bab7
	  System UUID:                866f17ca-2f89-45bb-8c8d-7336ea64bab7
	  Boot ID:                    688ba3e5-bec7-403a-8a14-d517107abdf5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-r27jf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 coredns-7c65d6cfc9-x7zjp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m27s
	  kube-system                 coredns-7c65d6cfc9-z4bkv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m27s
	  kube-system                 etcd-ha-805293                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m31s
	  kube-system                 kindnet-slhtm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m27s
	  kube-system                 kube-apiserver-ha-805293             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-controller-manager-ha-805293    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-proxy-6gnt4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-scheduler-ha-805293             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-vip-ha-805293                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m25s  kube-proxy       
	  Normal  Starting                 6m31s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m31s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m31s  kubelet          Node ha-805293 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m31s  kubelet          Node ha-805293 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m31s  kubelet          Node ha-805293 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m27s  node-controller  Node ha-805293 event: Registered Node ha-805293 in Controller
	  Normal  NodeReady                6m14s  kubelet          Node ha-805293 status is now: NodeReady
	  Normal  RegisteredNode           5m31s  node-controller  Node ha-805293 event: Registered Node ha-805293 in Controller
	  Normal  RegisteredNode           4m17s  node-controller  Node ha-805293 event: Registered Node ha-805293 in Controller
	
	
	Name:               ha-805293-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-805293-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=ha-805293
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T20_00_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:00:48 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-805293-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:03:41 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 30 Sep 2024 20:02:51 +0000   Mon, 30 Sep 2024 20:04:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 30 Sep 2024 20:02:51 +0000   Mon, 30 Sep 2024 20:04:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 30 Sep 2024 20:02:51 +0000   Mon, 30 Sep 2024 20:04:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 30 Sep 2024 20:02:51 +0000   Mon, 30 Sep 2024 20:04:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    ha-805293-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d0700264de549a1be3f1020308847ab
	  System UUID:                4d070026-4de5-49a1-be3f-1020308847ab
	  Boot ID:                    6a7fa1c9-5f0b-4080-a967-4e6a9eb2c122
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lshpm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 etcd-ha-805293-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m37s
	  kube-system                 kindnet-lfldt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m39s
	  kube-system                 kube-apiserver-ha-805293-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-controller-manager-ha-805293-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-proxy-vptrg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-scheduler-ha-805293-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-vip-ha-805293-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m35s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m39s (x8 over 5m40s)  kubelet          Node ha-805293-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m39s (x8 over 5m40s)  kubelet          Node ha-805293-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m39s (x7 over 5m40s)  kubelet          Node ha-805293-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-805293-m02 event: Registered Node ha-805293-m02 in Controller
	  Normal  RegisteredNode           5m31s                  node-controller  Node ha-805293-m02 event: Registered Node ha-805293-m02 in Controller
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-805293-m02 event: Registered Node ha-805293-m02 in Controller
	  Normal  NodeNotReady             2m2s                   node-controller  Node ha-805293-m02 status is now: NodeNotReady
	
	
	Name:               ha-805293-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-805293-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=ha-805293
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T20_02_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:02:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-805293-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:06:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:03:02 +0000   Mon, 30 Sep 2024 20:02:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:03:02 +0000   Mon, 30 Sep 2024 20:02:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:03:02 +0000   Mon, 30 Sep 2024 20:02:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:03:02 +0000   Mon, 30 Sep 2024 20:02:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-805293-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d290a9661d284f5abbb0966111b1ff62
	  System UUID:                d290a966-1d28-4f5a-bbb0-966111b1ff62
	  Boot ID:                    4480564e-4012-421d-8e2a-ef45c5701e0e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nfncv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 etcd-ha-805293-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m24s
	  kube-system                 kindnet-qrhb8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m26s
	  kube-system                 kube-apiserver-ha-805293-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-controller-manager-ha-805293-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-b9cpp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-scheduler-ha-805293-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-vip-ha-805293-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m21s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m26s (x8 over 4m26s)  kubelet          Node ha-805293-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s (x8 over 4m26s)  kubelet          Node ha-805293-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s (x7 over 4m26s)  kubelet          Node ha-805293-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m22s                  node-controller  Node ha-805293-m03 event: Registered Node ha-805293-m03 in Controller
	  Normal  RegisteredNode           4m21s                  node-controller  Node ha-805293-m03 event: Registered Node ha-805293-m03 in Controller
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-805293-m03 event: Registered Node ha-805293-m03 in Controller
	
	
	Name:               ha-805293-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-805293-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=ha-805293
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T20_03_07_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:03:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-805293-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:06:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:03:37 +0000   Mon, 30 Sep 2024 20:03:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:03:37 +0000   Mon, 30 Sep 2024 20:03:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:03:37 +0000   Mon, 30 Sep 2024 20:03:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:03:37 +0000   Mon, 30 Sep 2024 20:03:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    ha-805293-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 66e464978dbd400d9e13327c67f50978
	  System UUID:                66e46497-8dbd-400d-9e13-327c67f50978
	  Boot ID:                    e58b57f2-9a1b-47d7-b35d-6de7e20bd5ad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pk4z9       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m20s
	  kube-system                 kube-proxy-7hn94    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m20s (x2 over 3m21s)  kubelet          Node ha-805293-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m20s (x2 over 3m21s)  kubelet          Node ha-805293-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m20s (x2 over 3m21s)  kubelet          Node ha-805293-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m17s                  node-controller  Node ha-805293-m04 event: Registered Node ha-805293-m04 in Controller
	  Normal  RegisteredNode           3m17s                  node-controller  Node ha-805293-m04 event: Registered Node ha-805293-m04 in Controller
	  Normal  RegisteredNode           3m16s                  node-controller  Node ha-805293-m04 event: Registered Node ha-805293-m04 in Controller
	  Normal  NodeReady                2m59s                  kubelet          Node ha-805293-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep30 19:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051498] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038050] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.756373] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.910183] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.882465] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.789974] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.062566] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063093] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.202518] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.124623] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.268552] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +3.977529] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +4.564932] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.062130] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.342874] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.088317] kauditd_printk_skb: 79 callbacks suppressed
	[Sep30 20:00] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.197664] kauditd_printk_skb: 38 callbacks suppressed
	[ +40.392588] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c] <==
	{"level":"warn","ts":"2024-09-30T20:06:27.436884Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:27.440624Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:27.442819Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:27.445375Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:27.451105Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:27.457355Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:27.463457Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:27.467810Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:27.471654Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:27.481262Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:27.487907Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:27.493956Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:27.497557Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:27.501490Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:27.508948Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:27.515046Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:27.521360Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:27.525490Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:27.528903Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:27.534441Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:27.536731Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:27.541567Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:27.542069Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:27.552720Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:06:27.560467Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:06:27 up 7 min,  0 users,  load average: 0.25, 0.25, 0.12
	Linux ha-805293 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa] <==
	I0930 20:05:53.361841       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	I0930 20:06:03.353152       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:06:03.353232       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:06:03.353604       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0930 20:06:03.353656       1 main.go:322] Node ha-805293-m03 has CIDR [10.244.2.0/24] 
	I0930 20:06:03.353788       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:06:03.353817       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	I0930 20:06:03.353915       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:06:03.353945       1 main.go:299] handling current node
	I0930 20:06:13.352401       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:06:13.352462       1 main.go:299] handling current node
	I0930 20:06:13.352487       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:06:13.352493       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:06:13.352648       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0930 20:06:13.352669       1 main.go:322] Node ha-805293-m03 has CIDR [10.244.2.0/24] 
	I0930 20:06:13.352727       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:06:13.352744       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	I0930 20:06:23.354476       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:06:23.354578       1 main.go:299] handling current node
	I0930 20:06:23.354620       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:06:23.354628       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:06:23.354796       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0930 20:06:23.354819       1 main.go:322] Node ha-805293-m03 has CIDR [10.244.2.0/24] 
	I0930 20:06:23.354869       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:06:23.354875       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [994c927aa147aaacb19c3dc9b54178374731ce435295e01ceb9dbb1854a78f78] <==
	I0930 19:59:55.232483       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0930 19:59:55.241927       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.3]
	I0930 19:59:55.242751       1 controller.go:615] quota admission added evaluator for: endpoints
	I0930 19:59:55.248161       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0930 19:59:56.585015       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0930 19:59:56.606454       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0930 19:59:56.717747       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0930 20:00:00.619178       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0930 20:00:00.866886       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0930 20:02:35.103260       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54756: use of closed network connection
	E0930 20:02:35.310204       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54774: use of closed network connection
	E0930 20:02:35.528451       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54798: use of closed network connection
	E0930 20:02:35.718056       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54824: use of closed network connection
	E0930 20:02:35.905602       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54834: use of closed network connection
	E0930 20:02:36.095718       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54846: use of closed network connection
	E0930 20:02:36.292842       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54870: use of closed network connection
	E0930 20:02:36.507445       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54880: use of closed network connection
	E0930 20:02:36.711017       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54890: use of closed network connection
	E0930 20:02:37.027891       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54906: use of closed network connection
	E0930 20:02:37.211934       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54928: use of closed network connection
	E0930 20:02:37.400557       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54946: use of closed network connection
	E0930 20:02:37.592034       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54964: use of closed network connection
	E0930 20:02:37.769244       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54968: use of closed network connection
	E0930 20:02:37.945689       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54986: use of closed network connection
	W0930 20:04:05.250494       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.227 192.168.39.3]
	
	
	==> kube-controller-manager [0e9fbbe2017dac31afa6b99397b35147479d921bd1c28368d0863e7deba96963] <==
	I0930 20:03:07.394951       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-805293-m04" podCIDRs=["10.244.3.0/24"]
	I0930 20:03:07.395481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:07.396749       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:07.436135       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:07.684943       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:08.073414       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:10.185795       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-805293-m04"
	I0930 20:03:10.251142       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:10.326069       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:10.383451       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:11.395780       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:11.488119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:17.639978       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:28.022240       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:28.023330       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-805293-m04"
	I0930 20:03:28.045054       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:30.206023       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:03:37.957274       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:04:25.230773       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-805293-m04"
	I0930 20:04:25.230955       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m02"
	I0930 20:04:25.255656       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m02"
	I0930 20:04:25.398159       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m02"
	I0930 20:04:25.408524       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="30.658854ms"
	I0930 20:04:25.408627       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.436µs"
	I0930 20:04:30.476044       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m02"
	
	
	==> kube-proxy [cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 20:00:02.260002       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 20:00:02.292313       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.3"]
	E0930 20:00:02.293761       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 20:00:02.331058       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 20:00:02.331111       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 20:00:02.331136       1 server_linux.go:169] "Using iptables Proxier"
	I0930 20:00:02.334264       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 20:00:02.334706       1 server.go:483] "Version info" version="v1.31.1"
	I0930 20:00:02.334732       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:00:02.338075       1 config.go:199] "Starting service config controller"
	I0930 20:00:02.338115       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 20:00:02.338141       1 config.go:105] "Starting endpoint slice config controller"
	I0930 20:00:02.338146       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 20:00:02.340129       1 config.go:328] "Starting node config controller"
	I0930 20:00:02.340159       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 20:00:02.438958       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 20:00:02.439119       1 shared_informer.go:320] Caches are synced for service config
	I0930 20:00:02.440633       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463] <==
	W0930 19:59:54.471920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0930 19:59:54.472044       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.522920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0930 19:59:54.524738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.525008       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 19:59:54.525097       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0930 19:59:54.570077       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0930 19:59:54.570416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.573175       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0930 19:59:54.573222       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.611352       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0930 19:59:54.611460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.614509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0930 19:59:54.614660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.659257       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0930 19:59:54.659351       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 19:59:54.769876       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0930 19:59:54.770087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0930 19:59:56.900381       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0930 20:02:01.539050       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-h6pvg\": pod kube-proxy-h6pvg is already assigned to node \"ha-805293-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-h6pvg" node="ha-805293-m03"
	E0930 20:02:01.539424       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9860392c-eca6-4200-9b6e-f0a6f51b523b(kube-system/kube-proxy-h6pvg) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-h6pvg"
	E0930 20:02:01.539482       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-h6pvg\": pod kube-proxy-h6pvg is already assigned to node \"ha-805293-m03\"" pod="kube-system/kube-proxy-h6pvg"
	I0930 20:02:01.539558       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-h6pvg" node="ha-805293-m03"
	E0930 20:02:29.833811       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lshpm\": pod busybox-7dff88458-lshpm is already assigned to node \"ha-805293-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-lshpm" node="ha-805293-m02"
	E0930 20:02:29.833910       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lshpm\": pod busybox-7dff88458-lshpm is already assigned to node \"ha-805293-m02\"" pod="default/busybox-7dff88458-lshpm"
	
	
	==> kubelet <==
	Sep 30 20:04:56 ha-805293 kubelet[1307]: E0930 20:04:56.831137    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726696830908263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:04:56 ha-805293 kubelet[1307]: E0930 20:04:56.831174    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726696830908263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:06 ha-805293 kubelet[1307]: E0930 20:05:06.833436    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726706832581949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:06 ha-805293 kubelet[1307]: E0930 20:05:06.834135    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726706832581949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:16 ha-805293 kubelet[1307]: E0930 20:05:16.840697    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726716835840638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:16 ha-805293 kubelet[1307]: E0930 20:05:16.841087    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726716835840638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:26 ha-805293 kubelet[1307]: E0930 20:05:26.843795    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726726842473695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:26 ha-805293 kubelet[1307]: E0930 20:05:26.843820    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726726842473695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:36 ha-805293 kubelet[1307]: E0930 20:05:36.846940    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726736846123824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:36 ha-805293 kubelet[1307]: E0930 20:05:36.847349    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726736846123824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:46 ha-805293 kubelet[1307]: E0930 20:05:46.849818    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726746849247125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:46 ha-805293 kubelet[1307]: E0930 20:05:46.850141    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726746849247125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:56 ha-805293 kubelet[1307]: E0930 20:05:56.740673    1307 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 20:05:56 ha-805293 kubelet[1307]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 20:05:56 ha-805293 kubelet[1307]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 20:05:56 ha-805293 kubelet[1307]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 20:05:56 ha-805293 kubelet[1307]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 20:05:56 ha-805293 kubelet[1307]: E0930 20:05:56.852143    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726756851671468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:05:56 ha-805293 kubelet[1307]: E0930 20:05:56.852175    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726756851671468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:06:06 ha-805293 kubelet[1307]: E0930 20:06:06.854020    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726766853679089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:06:06 ha-805293 kubelet[1307]: E0930 20:06:06.854344    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726766853679089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:06:16 ha-805293 kubelet[1307]: E0930 20:06:16.857032    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726776856545104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:06:16 ha-805293 kubelet[1307]: E0930 20:06:16.857507    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726776856545104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:06:26 ha-805293 kubelet[1307]: E0930 20:06:26.859587    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726786859112579,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:06:26 ha-805293 kubelet[1307]: E0930 20:06:26.859631    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727726786859112579,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-805293 -n ha-805293
helpers_test.go:261: (dbg) Run:  kubectl --context ha-805293 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.42s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (399.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-805293 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-805293 -v=7 --alsologtostderr
E0930 20:08:28.936380   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-805293 -v=7 --alsologtostderr: exit status 82 (2m1.799464008s)

                                                
                                                
-- stdout --
	* Stopping node "ha-805293-m04"  ...
	* Stopping node "ha-805293-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 20:06:28.615613   31502 out.go:345] Setting OutFile to fd 1 ...
	I0930 20:06:28.615752   31502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:06:28.615765   31502 out.go:358] Setting ErrFile to fd 2...
	I0930 20:06:28.615772   31502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:06:28.615965   31502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 20:06:28.616188   31502 out.go:352] Setting JSON to false
	I0930 20:06:28.616309   31502 mustload.go:65] Loading cluster: ha-805293
	I0930 20:06:28.616728   31502 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:06:28.616828   31502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:06:28.617005   31502 mustload.go:65] Loading cluster: ha-805293
	I0930 20:06:28.617130   31502 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:06:28.617183   31502 stop.go:39] StopHost: ha-805293-m04
	I0930 20:06:28.617579   31502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:06:28.617620   31502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:06:28.632315   31502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36883
	I0930 20:06:28.632755   31502 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:06:28.633344   31502 main.go:141] libmachine: Using API Version  1
	I0930 20:06:28.633368   31502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:06:28.633773   31502 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:06:28.636546   31502 out.go:177] * Stopping node "ha-805293-m04"  ...
	I0930 20:06:28.638129   31502 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0930 20:06:28.638164   31502 main.go:141] libmachine: (ha-805293-m04) Calling .DriverName
	I0930 20:06:28.638457   31502 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0930 20:06:28.638493   31502 main.go:141] libmachine: (ha-805293-m04) Calling .GetSSHHostname
	I0930 20:06:28.641745   31502 main.go:141] libmachine: (ha-805293-m04) DBG | domain ha-805293-m04 has defined MAC address 52:54:00:fb:22:e7 in network mk-ha-805293
	I0930 20:06:28.642219   31502 main.go:141] libmachine: (ha-805293-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:22:e7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:02:52 +0000 UTC Type:0 Mac:52:54:00:fb:22:e7 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-805293-m04 Clientid:01:52:54:00:fb:22:e7}
	I0930 20:06:28.642282   31502 main.go:141] libmachine: (ha-805293-m04) DBG | domain ha-805293-m04 has defined IP address 192.168.39.92 and MAC address 52:54:00:fb:22:e7 in network mk-ha-805293
	I0930 20:06:28.642476   31502 main.go:141] libmachine: (ha-805293-m04) Calling .GetSSHPort
	I0930 20:06:28.642684   31502 main.go:141] libmachine: (ha-805293-m04) Calling .GetSSHKeyPath
	I0930 20:06:28.642831   31502 main.go:141] libmachine: (ha-805293-m04) Calling .GetSSHUsername
	I0930 20:06:28.642979   31502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m04/id_rsa Username:docker}
	I0930 20:06:28.727992   31502 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0930 20:06:28.782054   31502 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0930 20:06:28.835299   31502 main.go:141] libmachine: Stopping "ha-805293-m04"...
	I0930 20:06:28.835330   31502 main.go:141] libmachine: (ha-805293-m04) Calling .GetState
	I0930 20:06:28.836834   31502 main.go:141] libmachine: (ha-805293-m04) Calling .Stop
	I0930 20:06:28.839842   31502 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 0/120
	I0930 20:06:29.932819   31502 main.go:141] libmachine: (ha-805293-m04) Calling .GetState
	I0930 20:06:29.934199   31502 main.go:141] libmachine: Machine "ha-805293-m04" was stopped.
	I0930 20:06:29.934218   31502 stop.go:75] duration metric: took 1.296093004s to stop
	I0930 20:06:29.934247   31502 stop.go:39] StopHost: ha-805293-m03
	I0930 20:06:29.934645   31502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:06:29.934689   31502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:06:29.950077   31502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46845
	I0930 20:06:29.950662   31502 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:06:29.951192   31502 main.go:141] libmachine: Using API Version  1
	I0930 20:06:29.951229   31502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:06:29.951645   31502 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:06:29.954583   31502 out.go:177] * Stopping node "ha-805293-m03"  ...
	I0930 20:06:29.955884   31502 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0930 20:06:29.955916   31502 main.go:141] libmachine: (ha-805293-m03) Calling .DriverName
	I0930 20:06:29.956178   31502 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0930 20:06:29.956199   31502 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHHostname
	I0930 20:06:29.959575   31502 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:06:29.960028   31502 main.go:141] libmachine: (ha-805293-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:66:df", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:01:28 +0000 UTC Type:0 Mac:52:54:00:ce:66:df Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-805293-m03 Clientid:01:52:54:00:ce:66:df}
	I0930 20:06:29.960051   31502 main.go:141] libmachine: (ha-805293-m03) DBG | domain ha-805293-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:ce:66:df in network mk-ha-805293
	I0930 20:06:29.960209   31502 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHPort
	I0930 20:06:29.960386   31502 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHKeyPath
	I0930 20:06:29.960523   31502 main.go:141] libmachine: (ha-805293-m03) Calling .GetSSHUsername
	I0930 20:06:29.960668   31502 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m03/id_rsa Username:docker}
	I0930 20:06:30.053217   31502 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0930 20:06:30.106870   31502 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0930 20:06:30.162100   31502 main.go:141] libmachine: Stopping "ha-805293-m03"...
	I0930 20:06:30.162123   31502 main.go:141] libmachine: (ha-805293-m03) Calling .GetState
	I0930 20:06:30.163731   31502 main.go:141] libmachine: (ha-805293-m03) Calling .Stop
	I0930 20:06:30.167317   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 0/120
	I0930 20:06:31.168671   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 1/120
	I0930 20:06:32.169970   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 2/120
	I0930 20:06:33.171405   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 3/120
	I0930 20:06:34.172611   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 4/120
	I0930 20:06:35.174600   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 5/120
	I0930 20:06:36.175908   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 6/120
	I0930 20:06:37.178120   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 7/120
	I0930 20:06:38.179784   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 8/120
	I0930 20:06:39.181585   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 9/120
	I0930 20:06:40.183670   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 10/120
	I0930 20:06:41.185259   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 11/120
	I0930 20:06:42.186513   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 12/120
	I0930 20:06:43.188974   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 13/120
	I0930 20:06:44.190307   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 14/120
	I0930 20:06:45.192301   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 15/120
	I0930 20:06:46.194295   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 16/120
	I0930 20:06:47.195558   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 17/120
	I0930 20:06:48.197308   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 18/120
	I0930 20:06:49.198649   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 19/120
	I0930 20:06:50.200407   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 20/120
	I0930 20:06:51.202120   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 21/120
	I0930 20:06:52.203925   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 22/120
	I0930 20:06:53.205887   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 23/120
	I0930 20:06:54.207589   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 24/120
	I0930 20:06:55.209635   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 25/120
	I0930 20:06:56.211459   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 26/120
	I0930 20:06:57.213093   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 27/120
	I0930 20:06:58.214757   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 28/120
	I0930 20:06:59.217039   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 29/120
	I0930 20:07:00.219007   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 30/120
	I0930 20:07:01.220768   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 31/120
	I0930 20:07:02.222293   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 32/120
	I0930 20:07:03.223964   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 33/120
	I0930 20:07:04.225474   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 34/120
	I0930 20:07:05.227546   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 35/120
	I0930 20:07:06.229003   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 36/120
	I0930 20:07:07.230439   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 37/120
	I0930 20:07:08.231918   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 38/120
	I0930 20:07:09.233391   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 39/120
	I0930 20:07:10.235268   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 40/120
	I0930 20:07:11.236957   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 41/120
	I0930 20:07:12.238321   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 42/120
	I0930 20:07:13.239664   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 43/120
	I0930 20:07:14.240877   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 44/120
	I0930 20:07:15.242896   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 45/120
	I0930 20:07:16.244535   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 46/120
	I0930 20:07:17.245907   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 47/120
	I0930 20:07:18.247443   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 48/120
	I0930 20:07:19.248795   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 49/120
	I0930 20:07:20.250972   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 50/120
	I0930 20:07:21.252430   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 51/120
	I0930 20:07:22.253787   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 52/120
	I0930 20:07:23.255265   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 53/120
	I0930 20:07:24.256929   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 54/120
	I0930 20:07:25.258875   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 55/120
	I0930 20:07:26.260045   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 56/120
	I0930 20:07:27.261904   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 57/120
	I0930 20:07:28.263333   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 58/120
	I0930 20:07:29.264726   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 59/120
	I0930 20:07:30.266648   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 60/120
	I0930 20:07:31.268070   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 61/120
	I0930 20:07:32.269767   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 62/120
	I0930 20:07:33.271108   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 63/120
	I0930 20:07:34.272640   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 64/120
	I0930 20:07:35.274484   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 65/120
	I0930 20:07:36.275776   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 66/120
	I0930 20:07:37.277329   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 67/120
	I0930 20:07:38.278627   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 68/120
	I0930 20:07:39.279976   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 69/120
	I0930 20:07:40.281963   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 70/120
	I0930 20:07:41.283706   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 71/120
	I0930 20:07:42.285083   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 72/120
	I0930 20:07:43.286596   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 73/120
	I0930 20:07:44.288754   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 74/120
	I0930 20:07:45.290419   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 75/120
	I0930 20:07:46.292106   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 76/120
	I0930 20:07:47.293652   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 77/120
	I0930 20:07:48.295241   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 78/120
	I0930 20:07:49.296734   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 79/120
	I0930 20:07:50.298720   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 80/120
	I0930 20:07:51.300688   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 81/120
	I0930 20:07:52.302102   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 82/120
	I0930 20:07:53.303882   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 83/120
	I0930 20:07:54.305582   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 84/120
	I0930 20:07:55.307008   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 85/120
	I0930 20:07:56.308645   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 86/120
	I0930 20:07:57.310478   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 87/120
	I0930 20:07:58.312419   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 88/120
	I0930 20:07:59.313814   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 89/120
	I0930 20:08:00.315614   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 90/120
	I0930 20:08:01.317454   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 91/120
	I0930 20:08:02.318968   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 92/120
	I0930 20:08:03.320477   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 93/120
	I0930 20:08:04.321992   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 94/120
	I0930 20:08:05.324280   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 95/120
	I0930 20:08:06.325705   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 96/120
	I0930 20:08:07.327181   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 97/120
	I0930 20:08:08.328793   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 98/120
	I0930 20:08:09.330383   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 99/120
	I0930 20:08:10.331951   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 100/120
	I0930 20:08:11.333783   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 101/120
	I0930 20:08:12.335364   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 102/120
	I0930 20:08:13.336891   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 103/120
	I0930 20:08:14.338373   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 104/120
	I0930 20:08:15.340750   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 105/120
	I0930 20:08:16.342037   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 106/120
	I0930 20:08:17.343961   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 107/120
	I0930 20:08:18.345507   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 108/120
	I0930 20:08:19.347984   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 109/120
	I0930 20:08:20.349800   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 110/120
	I0930 20:08:21.351269   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 111/120
	I0930 20:08:22.352709   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 112/120
	I0930 20:08:23.354128   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 113/120
	I0930 20:08:24.355355   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 114/120
	I0930 20:08:25.357421   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 115/120
	I0930 20:08:26.358831   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 116/120
	I0930 20:08:27.360062   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 117/120
	I0930 20:08:28.361451   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 118/120
	I0930 20:08:29.362806   31502 main.go:141] libmachine: (ha-805293-m03) Waiting for machine to stop 119/120
	I0930 20:08:30.363744   31502 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0930 20:08:30.363802   31502 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0930 20:08:30.365994   31502 out.go:201] 
	W0930 20:08:30.367553   31502 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0930 20:08:30.367579   31502 out.go:270] * 
	* 
	W0930 20:08:30.370247   31502 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 20:08:30.371612   31502 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 stop -p ha-805293 -v=7 --alsologtostderr" : exit status 82
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-805293 --wait=true -v=7 --alsologtostderr
E0930 20:08:56.638025   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:10:55.311242   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:12:18.376251   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-805293 --wait=true -v=7 --alsologtostderr: (4m35.42324721s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-805293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-805293 -n ha-805293
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-805293 logs -n 25: (1.736900877s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-805293 cp ha-805293-m03:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m02:/home/docker/cp-test_ha-805293-m03_ha-805293-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293-m02 sudo cat                                          | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m03_ha-805293-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m03:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04:/home/docker/cp-test_ha-805293-m03_ha-805293-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293-m04 sudo cat                                          | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m03_ha-805293-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-805293 cp testdata/cp-test.txt                                                | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3144947660/001/cp-test_ha-805293-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293:/home/docker/cp-test_ha-805293-m04_ha-805293.txt                       |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293 sudo cat                                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m04_ha-805293.txt                                 |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m02:/home/docker/cp-test_ha-805293-m04_ha-805293-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293-m02 sudo cat                                          | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m04_ha-805293-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03:/home/docker/cp-test_ha-805293-m04_ha-805293-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293-m03 sudo cat                                          | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m04_ha-805293-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-805293 node stop m02 -v=7                                                     | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-805293 node start m02 -v=7                                                    | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-805293 -v=7                                                           | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-805293 -v=7                                                                | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-805293 --wait=true -v=7                                                    | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:08 UTC | 30 Sep 24 20:13 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-805293                                                                | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:13 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 20:08:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 20:08:30.418253   32024 out.go:345] Setting OutFile to fd 1 ...
	I0930 20:08:30.418464   32024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:08:30.418472   32024 out.go:358] Setting ErrFile to fd 2...
	I0930 20:08:30.418476   32024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:08:30.418682   32024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 20:08:30.419207   32024 out.go:352] Setting JSON to false
	I0930 20:08:30.420095   32024 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3053,"bootTime":1727723857,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 20:08:30.420187   32024 start.go:139] virtualization: kvm guest
	I0930 20:08:30.422949   32024 out.go:177] * [ha-805293] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 20:08:30.424884   32024 notify.go:220] Checking for updates...
	I0930 20:08:30.424943   32024 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 20:08:30.426796   32024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 20:08:30.428229   32024 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:08:30.429602   32024 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:08:30.430777   32024 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 20:08:30.432201   32024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 20:08:30.434290   32024 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:08:30.434444   32024 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 20:08:30.435145   32024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:08:30.435205   32024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:08:30.450636   32024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33767
	I0930 20:08:30.451136   32024 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:08:30.451747   32024 main.go:141] libmachine: Using API Version  1
	I0930 20:08:30.451770   32024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:08:30.452071   32024 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:08:30.452248   32024 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:08:30.492997   32024 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 20:08:30.494236   32024 start.go:297] selected driver: kvm2
	I0930 20:08:30.494249   32024 start.go:901] validating driver "kvm2" against &{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.92 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:08:30.494410   32024 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 20:08:30.494805   32024 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 20:08:30.494892   32024 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 20:08:30.510418   32024 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 20:08:30.511136   32024 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 20:08:30.511169   32024 cni.go:84] Creating CNI manager for ""
	I0930 20:08:30.511226   32024 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0930 20:08:30.511296   32024 start.go:340] cluster config:
	{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.92 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:08:30.511444   32024 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 20:08:30.513900   32024 out.go:177] * Starting "ha-805293" primary control-plane node in "ha-805293" cluster
	I0930 20:08:30.515215   32024 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 20:08:30.515255   32024 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 20:08:30.515262   32024 cache.go:56] Caching tarball of preloaded images
	I0930 20:08:30.515346   32024 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 20:08:30.515357   32024 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 20:08:30.515497   32024 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:08:30.515752   32024 start.go:360] acquireMachinesLock for ha-805293: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 20:08:30.515795   32024 start.go:364] duration metric: took 22.459µs to acquireMachinesLock for "ha-805293"
	I0930 20:08:30.515809   32024 start.go:96] Skipping create...Using existing machine configuration
	I0930 20:08:30.515820   32024 fix.go:54] fixHost starting: 
	I0930 20:08:30.516119   32024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:08:30.516149   32024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:08:30.531066   32024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34215
	I0930 20:08:30.531581   32024 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:08:30.532122   32024 main.go:141] libmachine: Using API Version  1
	I0930 20:08:30.532144   32024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:08:30.532477   32024 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:08:30.532668   32024 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:08:30.532840   32024 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 20:08:30.534680   32024 fix.go:112] recreateIfNeeded on ha-805293: state=Running err=<nil>
	W0930 20:08:30.534697   32024 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 20:08:30.537614   32024 out.go:177] * Updating the running kvm2 "ha-805293" VM ...
	I0930 20:08:30.538929   32024 machine.go:93] provisionDockerMachine start ...
	I0930 20:08:30.538949   32024 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:08:30.539155   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:08:30.541855   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:30.542282   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:08:30.542317   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:30.542475   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:08:30.542630   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:08:30.542771   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:08:30.542918   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:08:30.543058   32024 main.go:141] libmachine: Using SSH client type: native
	I0930 20:08:30.543244   32024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 20:08:30.543257   32024 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 20:08:30.656762   32024 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-805293
	
	I0930 20:08:30.656796   32024 main.go:141] libmachine: (ha-805293) Calling .GetMachineName
	I0930 20:08:30.657046   32024 buildroot.go:166] provisioning hostname "ha-805293"
	I0930 20:08:30.657072   32024 main.go:141] libmachine: (ha-805293) Calling .GetMachineName
	I0930 20:08:30.657248   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:08:30.660420   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:30.660872   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:08:30.660894   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:30.661136   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:08:30.661353   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:08:30.661545   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:08:30.661717   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:08:30.661925   32024 main.go:141] libmachine: Using SSH client type: native
	I0930 20:08:30.662108   32024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 20:08:30.662122   32024 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-805293 && echo "ha-805293" | sudo tee /etc/hostname
	I0930 20:08:30.791961   32024 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-805293
	
	I0930 20:08:30.791987   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:08:30.794822   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:30.795340   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:08:30.795377   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:30.795573   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:08:30.795765   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:08:30.795931   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:08:30.796149   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:08:30.796313   32024 main.go:141] libmachine: Using SSH client type: native
	I0930 20:08:30.796515   32024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 20:08:30.796531   32024 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-805293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-805293/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-805293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 20:08:30.912570   32024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
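The shell snippet above is the whole /etc/hosts fix-up: if no line already ends in the hostname, an existing 127.0.1.1 entry is rewritten, otherwise one is appended. A minimal standalone sketch of the same logic in Go (not minikube's implementation; the hostname and path are taken from the log, error handling is kept minimal):

package main

import (
	"log"
	"os"
	"regexp"
	"strings"
)

func main() {
	const hostname = "ha-805293" // hostname from the log
	const path = "/etc/hosts"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	text := string(data)

	// Nothing to do if some line already ends in the hostname.
	if regexp.MustCompile(`(?m)^.*\s` + hostname + `$`).MatchString(text) {
		return
	}

	// Replace an existing 127.0.1.1 line, or append one.
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(text) {
		text = loopback.ReplaceAllString(text, "127.0.1.1 "+hostname)
	} else {
		text = strings.TrimRight(text, "\n") + "\n127.0.1.1 " + hostname + "\n"
	}
	if err := os.WriteFile(path, []byte(text), 0o644); err != nil {
		log.Fatal(err)
	}
}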
	I0930 20:08:30.912600   32024 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 20:08:30.912630   32024 buildroot.go:174] setting up certificates
	I0930 20:08:30.912639   32024 provision.go:84] configureAuth start
	I0930 20:08:30.912651   32024 main.go:141] libmachine: (ha-805293) Calling .GetMachineName
	I0930 20:08:30.912937   32024 main.go:141] libmachine: (ha-805293) Calling .GetIP
	I0930 20:08:30.915955   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:30.916436   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:08:30.916458   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:30.916664   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:08:30.919166   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:30.919734   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:08:30.919754   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:30.919967   32024 provision.go:143] copyHostCerts
	I0930 20:08:30.919995   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:08:30.920034   32024 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 20:08:30.920047   32024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:08:30.920115   32024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 20:08:30.920233   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:08:30.920255   32024 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 20:08:30.920262   32024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:08:30.920288   32024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 20:08:30.920339   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:08:30.920362   32024 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 20:08:30.920366   32024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:08:30.920388   32024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 20:08:30.920433   32024 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.ha-805293 san=[127.0.0.1 192.168.39.3 ha-805293 localhost minikube]
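provision.go issues a server certificate whose SANs cover the loopback address, the VM IP and the host names listed above. The following is only a self-contained sketch of that kind of issuance with the Go standard library; the throwaway CA, key sizes and validity period are assumptions, not minikube's actual values or code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA standing in for minikube's ca.pem / ca-key.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate with the SANs from the provision.go line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-805293"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.3")},
		DNSNames:     []string{"ha-805293", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}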
	I0930 20:08:31.201525   32024 provision.go:177] copyRemoteCerts
	I0930 20:08:31.201574   32024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 20:08:31.201595   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:08:31.204548   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:31.204916   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:08:31.204941   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:31.205106   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:08:31.205300   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:08:31.205461   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:08:31.205605   32024 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:08:31.294616   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 20:08:31.294694   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0930 20:08:31.318931   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 20:08:31.319001   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 20:08:31.344679   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 20:08:31.344753   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 20:08:31.371672   32024 provision.go:87] duration metric: took 459.016865ms to configureAuth
	I0930 20:08:31.371710   32024 buildroot.go:189] setting minikube options for container-runtime
	I0930 20:08:31.371993   32024 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:08:31.372113   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:08:31.374783   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:31.375292   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:08:31.375320   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:31.375489   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:08:31.375703   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:08:31.375874   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:08:31.376022   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:08:31.376198   32024 main.go:141] libmachine: Using SSH client type: native
	I0930 20:08:31.376434   32024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 20:08:31.376457   32024 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 20:10:02.281536   32024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 20:10:02.281573   32024 machine.go:96] duration metric: took 1m31.742628586s to provisionDockerMachine
	I0930 20:10:02.281588   32024 start.go:293] postStartSetup for "ha-805293" (driver="kvm2")
	I0930 20:10:02.281603   32024 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 20:10:02.281625   32024 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:10:02.282026   32024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 20:10:02.282056   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:10:02.285598   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:02.286082   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:10:02.286105   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:02.286329   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:10:02.286528   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:10:02.286672   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:10:02.286865   32024 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:10:02.375728   32024 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 20:10:02.379946   32024 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 20:10:02.379971   32024 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 20:10:02.380045   32024 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 20:10:02.380144   32024 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 20:10:02.380159   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /etc/ssl/certs/148752.pem
	I0930 20:10:02.380362   32024 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 20:10:02.390415   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:10:02.416705   32024 start.go:296] duration metric: took 135.10016ms for postStartSetup
	I0930 20:10:02.416758   32024 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:10:02.417057   32024 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0930 20:10:02.417083   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:10:02.420067   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:02.420528   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:10:02.420553   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:02.420759   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:10:02.420977   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:10:02.421145   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:10:02.421360   32024 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	W0930 20:10:02.509993   32024 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0930 20:10:02.510023   32024 fix.go:56] duration metric: took 1m31.994203409s for fixHost
	I0930 20:10:02.510049   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:10:02.513222   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:02.513651   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:10:02.513677   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:02.513915   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:10:02.514100   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:10:02.514356   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:10:02.514500   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:10:02.514648   32024 main.go:141] libmachine: Using SSH client type: native
	I0930 20:10:02.514811   32024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 20:10:02.514821   32024 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 20:10:02.628437   32024 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727727002.596415726
	
	I0930 20:10:02.628462   32024 fix.go:216] guest clock: 1727727002.596415726
	I0930 20:10:02.628472   32024 fix.go:229] Guest: 2024-09-30 20:10:02.596415726 +0000 UTC Remote: 2024-09-30 20:10:02.510031868 +0000 UTC m=+92.127739919 (delta=86.383858ms)
	I0930 20:10:02.628537   32024 fix.go:200] guest clock delta is within tolerance: 86.383858ms
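The guest clock check above runs `date +%s.%N` on the VM and compares the result against the host-side timestamp; here the ~86ms delta is accepted. A rough sketch of that comparison (the 2s tolerance is an assumed value, not a minikube constant):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch turns `date +%s.%N` output (e.g. "1727727002.596415726") into a time.Time.
func parseEpoch(out string) time.Time {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec)
}

func main() {
	guest := parseEpoch("1727727002.596415726") // guest clock reading from the log
	host := time.Now()
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; the clock would be resynced\n", delta)
	}
}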
	I0930 20:10:02.628544   32024 start.go:83] releasing machines lock for "ha-805293", held for 1m32.112740535s
	I0930 20:10:02.628572   32024 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:10:02.628881   32024 main.go:141] libmachine: (ha-805293) Calling .GetIP
	I0930 20:10:02.631570   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:02.632018   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:10:02.632041   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:02.632360   32024 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:10:02.633056   32024 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:10:02.633246   32024 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:10:02.633344   32024 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 20:10:02.633381   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:10:02.633509   32024 ssh_runner.go:195] Run: cat /version.json
	I0930 20:10:02.633536   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:10:02.636273   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:02.636367   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:02.636658   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:10:02.636685   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:02.636714   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:10:02.636733   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:02.636836   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:10:02.636981   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:10:02.636998   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:10:02.637115   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:10:02.637185   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:10:02.637240   32024 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:10:02.637287   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:10:02.637416   32024 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:10:02.756965   32024 ssh_runner.go:195] Run: systemctl --version
	I0930 20:10:02.763357   32024 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 20:10:02.925524   32024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 20:10:02.933613   32024 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 20:10:02.933678   32024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 20:10:02.943265   32024 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0930 20:10:02.943298   32024 start.go:495] detecting cgroup driver to use...
	I0930 20:10:02.943374   32024 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 20:10:02.962359   32024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 20:10:02.978052   32024 docker.go:217] disabling cri-docker service (if available) ...
	I0930 20:10:02.978105   32024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 20:10:02.992673   32024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 20:10:03.006860   32024 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 20:10:03.157020   32024 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 20:10:03.312406   32024 docker.go:233] disabling docker service ...
	I0930 20:10:03.312477   32024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 20:10:03.330601   32024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 20:10:03.346087   32024 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 20:10:03.513429   32024 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 20:10:03.669065   32024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 20:10:03.684160   32024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 20:10:03.702667   32024 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 20:10:03.702729   32024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:10:03.713687   32024 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 20:10:03.713752   32024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:10:03.724817   32024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:10:03.735499   32024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:10:03.746372   32024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 20:10:03.757539   32024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:10:03.768261   32024 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:10:03.779792   32024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:10:03.790851   32024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 20:10:03.801592   32024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 20:10:03.811688   32024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:10:03.958683   32024 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 20:10:07.162069   32024 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.203347092s)
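All of the CRI-O tuning above is done with sed over SSH against /etc/crio/crio.conf.d/02-crio.conf, followed by a daemon-reload and a crio restart (about 3.2s here). The two central substitutions, pause image and cgroup manager, look roughly like this when done natively on a local copy of the file (illustrative only; the local file name is an assumption):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	// Assumed local copy of /etc/crio/crio.conf.d/02-crio.conf.
	const path = "02-crio.conf"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// pause_image = "registry.k8s.io/pause:3.10"
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// cgroup_manager = "cgroupfs"
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
	// A real run would follow this with `systemctl daemon-reload && systemctl restart crio`.
}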
	I0930 20:10:07.162099   32024 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 20:10:07.162144   32024 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 20:10:07.172536   32024 start.go:563] Will wait 60s for crictl version
	I0930 20:10:07.172608   32024 ssh_runner.go:195] Run: which crictl
	I0930 20:10:07.176490   32024 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 20:10:07.214938   32024 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 20:10:07.215018   32024 ssh_runner.go:195] Run: crio --version
	I0930 20:10:07.247042   32024 ssh_runner.go:195] Run: crio --version
	I0930 20:10:07.277714   32024 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 20:10:07.279546   32024 main.go:141] libmachine: (ha-805293) Calling .GetIP
	I0930 20:10:07.282463   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:07.282896   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:10:07.282925   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:07.283156   32024 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 20:10:07.288124   32024 kubeadm.go:883] updating cluster {Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.92 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 20:10:07.288294   32024 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 20:10:07.288367   32024 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 20:10:07.331573   32024 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 20:10:07.331599   32024 crio.go:433] Images already preloaded, skipping extraction
	I0930 20:10:07.331650   32024 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 20:10:07.366799   32024 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 20:10:07.366825   32024 cache_images.go:84] Images are preloaded, skipping loading
	I0930 20:10:07.366836   32024 kubeadm.go:934] updating node { 192.168.39.3 8443 v1.31.1 crio true true} ...
	I0930 20:10:07.366940   32024 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-805293 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 20:10:07.367022   32024 ssh_runner.go:195] Run: crio config
	I0930 20:10:07.415231   32024 cni.go:84] Creating CNI manager for ""
	I0930 20:10:07.415255   32024 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0930 20:10:07.415264   32024 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 20:10:07.415293   32024 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-805293 NodeName:ha-805293 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 20:10:07.415481   32024 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-805293"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
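One property the generated kubeadm config has to satisfy is that podSubnet (10.244.0.0/16) and serviceSubnet (10.96.0.0/12) stay disjoint. A quick, assumed sanity check (not part of minikube) using net/netip:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	pod := netip.MustParsePrefix("10.244.0.0/16") // podSubnet from the config above
	svc := netip.MustParsePrefix("10.96.0.0/12")  // serviceSubnet from the config above
	if pod.Overlaps(svc) {
		fmt.Println("podSubnet and serviceSubnet overlap; the kubeadm config is inconsistent")
	} else {
		fmt.Println("podSubnet and serviceSubnet are disjoint")
	}
}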
	I0930 20:10:07.415504   32024 kube-vip.go:115] generating kube-vip config ...
	I0930 20:10:07.415560   32024 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 20:10:07.427100   32024 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 20:10:07.427231   32024 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0930 20:10:07.427299   32024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 20:10:07.437361   32024 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 20:10:07.437422   32024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0930 20:10:07.447137   32024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0930 20:10:07.463643   32024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 20:10:07.480909   32024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0930 20:10:07.497151   32024 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 20:10:07.513129   32024 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 20:10:07.517543   32024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:10:07.663227   32024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:10:07.677878   32024 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293 for IP: 192.168.39.3
	I0930 20:10:07.677900   32024 certs.go:194] generating shared ca certs ...
	I0930 20:10:07.677919   32024 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:10:07.678091   32024 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 20:10:07.678147   32024 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 20:10:07.678156   32024 certs.go:256] generating profile certs ...
	I0930 20:10:07.678262   32024 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key
	I0930 20:10:07.678300   32024 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.1490b8e9
	I0930 20:10:07.678329   32024 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.1490b8e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.3 192.168.39.220 192.168.39.227 192.168.39.254]
	I0930 20:10:07.791960   32024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.1490b8e9 ...
	I0930 20:10:07.791995   32024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.1490b8e9: {Name:mk874f676f601a9161261dbafeec607626035cbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:10:07.792155   32024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.1490b8e9 ...
	I0930 20:10:07.792166   32024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.1490b8e9: {Name:mk6f1737ee8f44359c97ed002ae5fcd3f62cda77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:10:07.792233   32024 certs.go:381] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.1490b8e9 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt
	I0930 20:10:07.792392   32024 certs.go:385] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.1490b8e9 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key
	I0930 20:10:07.792518   32024 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key
	I0930 20:10:07.792532   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 20:10:07.792551   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 20:10:07.792570   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 20:10:07.792583   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 20:10:07.792596   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 20:10:07.792608   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 20:10:07.792620   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 20:10:07.792632   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 20:10:07.792677   32024 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 20:10:07.792704   32024 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 20:10:07.792710   32024 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 20:10:07.792733   32024 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 20:10:07.792754   32024 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 20:10:07.792777   32024 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 20:10:07.792815   32024 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:10:07.792840   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem -> /usr/share/ca-certificates/14875.pem
	I0930 20:10:07.792854   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /usr/share/ca-certificates/148752.pem
	I0930 20:10:07.792866   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:10:07.793423   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 20:10:07.818870   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 20:10:07.843434   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 20:10:07.868173   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 20:10:07.891992   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0930 20:10:07.916550   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 20:10:07.942281   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 20:10:07.967426   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 20:10:07.991808   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 20:10:08.016250   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 20:10:08.040767   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 20:10:08.065245   32024 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 20:10:08.081730   32024 ssh_runner.go:195] Run: openssl version
	I0930 20:10:08.087602   32024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 20:10:08.098310   32024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 20:10:08.102714   32024 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 20:10:08.102774   32024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 20:10:08.108094   32024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 20:10:08.117034   32024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 20:10:08.127515   32024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:10:08.131784   32024 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:10:08.131843   32024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:10:08.137306   32024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 20:10:08.147812   32024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 20:10:08.158599   32024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 20:10:08.163420   32024 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 20:10:08.163486   32024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 20:10:08.169078   32024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
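Each certificate copied to /usr/share/ca-certificates is made trusted by linking <subject-hash>.0 in /etc/ssl/certs to it, with the hash taken from `openssl x509 -hash -noout`. A hypothetical helper doing the same for one PEM (paths are examples from the log; requires openssl on PATH and write access to the certs directory):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // example cert from the log
	certsDir := "/etc/ssl/certs"

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")

	// Create the <hash>.0 trust link only if it does not exist yet.
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pemPath, link); err != nil {
			log.Fatal(err)
		}
	}
	fmt.Println("trust link:", link, "->", pemPath)
}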
	I0930 20:10:08.179749   32024 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 20:10:08.184378   32024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 20:10:08.190191   32024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 20:10:08.195744   32024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 20:10:08.201181   32024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 20:10:08.206851   32024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 20:10:08.212087   32024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
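The `-checkend 86400` probes above ask whether each control-plane certificate expires within the next 24 hours. The same check can be done natively with crypto/x509; the path below is just one of the certificates from the log, used as an example:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// One of the certificates probed above; any PEM-encoded cert path works.
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in ", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h and would be regenerated")
	} else {
		fmt.Println("certificate is valid for more than 24h")
	}
}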
	I0930 20:10:08.217419   32024 kubeadm.go:392] StartCluster: {Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.92 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:10:08.217521   32024 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 20:10:08.217563   32024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 20:10:08.261130   32024 cri.go:89] found id: "a09228d49f4ad068623e6315524f56bf1711bcc27f73dc0878d7dc879947bb89"
	I0930 20:10:08.261163   32024 cri.go:89] found id: "587b1ad4b8191a4014e26828a32606215b3377cd45b366d4de0ed03ffb0b7837"
	I0930 20:10:08.261168   32024 cri.go:89] found id: "2d358322f532c68b803989835b3e2521f53c29d7958667ceeeaaca809b61ce74"
	I0930 20:10:08.261171   32024 cri.go:89] found id: "bcfa6f22eace82338bca9d52207525aa6bff9130f092366621e59b71f8225240"
	I0930 20:10:08.261174   32024 cri.go:89] found id: "8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b"
	I0930 20:10:08.261178   32024 cri.go:89] found id: "beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c"
	I0930 20:10:08.261180   32024 cri.go:89] found id: "e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa"
	I0930 20:10:08.261183   32024 cri.go:89] found id: "cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088"
	I0930 20:10:08.261185   32024 cri.go:89] found id: "5e8e1f537ce941dd5174a539d9c52bcdc043499fbf92875cdf6ed4fc819c4dbe"
	I0930 20:10:08.261191   32024 cri.go:89] found id: "0e9fbbe2017dac31afa6b99397b35147479d921bd1c28368d0863e7deba96963"
	I0930 20:10:08.261195   32024 cri.go:89] found id: "9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463"
	I0930 20:10:08.261198   32024 cri.go:89] found id: "219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c"
	I0930 20:10:08.261200   32024 cri.go:89] found id: "994c927aa147aaacb19c3dc9b54178374731ce435295e01ceb9dbb1854a78f78"
	I0930 20:10:08.261203   32024 cri.go:89] found id: ""
	I0930 20:10:08.261251   32024 ssh_runner.go:195] Run: sudo runc list -f json
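The block above is the CRI container inventory taken at the start of StartCluster: `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` prints one container ID per line, and each line is reported as a `found id:` entry. The sketch below reproduces that listing step locally (minikube runs the same command over SSH on the node); it is illustrative only, not the project's actual code:

// crilist_ids.go: mirror the crictl invocation from the log:
//   crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
// --quiet prints one container ID per line; splitting on newlines yields
// the IDs shown as "found id:" above. A local sketch, not minikube's code;
// assumes crictl and sudo are available on the host.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps: %w", err)
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}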
	
	
	==> CRI-O <==
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.600148784Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e938ea2-44f6-4ddb-a016-6642d8458d1c name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.600695620Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:799d0bb0c993d0ffde3eefbcc05bcb611d96d352cb0ea83e7022f8fbd550dd95,PodSandboxId:b6eca5d34d418c3897c2f1c73b8bdee9c01ec8e773f446bf95450a7d920e70da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727727185719881527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a985f5a2a7c076eb4cf77a8b507f759819a444134f93b1df5e5932da65c1270e,PodSandboxId:8da7e73e0b2fd4d2dd3548bf5624b712504a6e2ffa74d3126fecba092f15c571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727727057730351169,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a945cf678b444c95ced3c0655fedd7e24a271a0269cf64af94ee977600d79ad,PodSandboxId:0351a72258f94e7a77ca9f6c12c179269acb125d6b92774ff9c683b58b75c355,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727727056734933159,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc285523fdce19c14d398147b8793713be6f6d52049dd8b29d2844a668b82645,PodSandboxId:0f134ad7b95b1f2e96670573b8bb737db2ee057af15552e2fb9e2d5f4e25e29f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727727047994332438,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744a1c20ed6c3fe15442d117e472f677d759a07b2075fef70341de56d798d14b,PodSandboxId:bb6065f83dadf08926cabdd5d9999f932c0d8a6d5782ca9efd3b6f505284a827,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727727024835063908,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0b24a252ad7163810aa1bbddc4bc981,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a30cbd3eb0f4ef05c7391f3280d861cd10d0fe8ba20335ae73fcbe214e80a9e,PodSandboxId:ed86ec584c49134727b6ee9b95d6ebf6f92cc75139119cf0ea2b4be83d6df838,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727727014815519285,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:380bd6e347263a3f2a049efae6b9b292d5c95a687b571ed3542ef7673141a92f,PodSandboxId:4709331fb79f41392654d87d0cbba6850b4edafe1c7c72a0b9cffa363d1c2fb3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727727014834391279,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:6fed1262e64394560fbc057ea4f9f851d03675b41610f8834ec91e719fc78857,PodSandboxId:907a40f61fd35c956014f9d913d24ffce1e777898650629dce7c4a64c1a75eed,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727727014680588124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f45850bfc7eb9db0b4c4a227b97d9fe0d1f99e266d77e9b66fc2797453326c,PodSandboxId:0351a72258f94e7a77ca9f6c12c179269acb125d6b92774ff9c683b58b75c355,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727727014612742689,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b7eae086adfa13129e0ee64055dbf5ecef59b6cbb57e8c3f82ec0b37998f6d8,PodSandboxId:8da7e73e0b2fd4d2dd3548bf5624b712504a6e2ffa74d3126fecba092f15c571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727727014583572877,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2953f6dc095a37336d7b0b5d17fb8ae04ee08ce04f58060304fa5031e60041cc,PodSandboxId:b6eca5d34d418c3897c2f1c73b8bdee9c01ec8e773f446bf95450a7d920e70da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727727014507776681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b4f5919856e7020e2eb736700dcc60faf49bb3549a20d86cecc06833256227d,PodSandboxId:8754efd58ac6fd709d308dbfc7dd062dbaebb39928b4645d4af510e8e3cfbb07,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727727014457184858,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9458794f1a510009238ae84c24f002abcd8dd8cfe472470a8cefb49c2d1d1ff,PodSandboxId:d6d05abaafe65ae0bf04bf51aef7e61d0aabc4fbc70b020c0d74daa5f0100475,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727727014414000533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77443ff4394cea6f0d035877e1e1513cab12a1648c096fad857654ededda1936,PodSandboxId:fd5726427f3e1d9295403eb0289cc84ce04bd43f38db9bd9ff5c93937cb4bad9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727727011060782331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-138e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ee59c77c769b646a6f94ef88076d89d99a5138229c27ab2ecd6eedc1ea0137,PodSandboxId:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727726553788930169,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b,PodSandboxId:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727726414317132948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c,PodSandboxId:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727726414250226322,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-138e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa,PodSandboxId:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727726402286751491,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088,PodSandboxId:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727726402007394795,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463,PodSandboxId:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727726390313458555,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c,PodSandboxId:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727726390230834509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8e938ea2-44f6-4ddb-a016-6642d8458d1c name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.601686753Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=b60c1f80-92b5-43a3-9962-a36edeca328a name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.602101935Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0f134ad7b95b1f2e96670573b8bb737db2ee057af15552e2fb9e2d5f4e25e29f,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-r27jf,Uid:8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727727047864193334,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T20:02:29.829247076Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bb6065f83dadf08926cabdd5d9999f932c0d8a6d5782ca9efd3b6f505284a827,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-805293,Uid:f0b24a252ad7163810aa1bbddc4bc981,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1727727024747165922,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0b24a252ad7163810aa1bbddc4bc981,},Annotations:map[string]string{kubernetes.io/config.hash: f0b24a252ad7163810aa1bbddc4bc981,kubernetes.io/config.seen: 2024-09-30T20:10:07.482457112Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0351a72258f94e7a77ca9f6c12c179269acb125d6b92774ff9c683b58b75c355,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-805293,Uid:0e187d2ff3fb002e09fae92363c4994b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727727014193929023,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.adverti
se-address.endpoint: 192.168.39.3:8443,kubernetes.io/config.hash: 0e187d2ff3fb002e09fae92363c4994b,kubernetes.io/config.seen: 2024-09-30T19:59:56.668271563Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ed86ec584c49134727b6ee9b95d6ebf6f92cc75139119cf0ea2b4be83d6df838,Metadata:&PodSandboxMetadata{Name:kube-proxy-6gnt4,Uid:a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727727014170019786,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T20:00:00.921254096Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:907a40f61fd35c956014f9d913d24ffce1e777898650629dce7c4a64c1a75eed,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6c
fc9-x7zjp,Uid:b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727727014166965989,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T20:00:13.706232430Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8da7e73e0b2fd4d2dd3548bf5624b712504a6e2ffa74d3126fecba092f15c571,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-805293,Uid:91de2f71b33d8668e0d24248c5ba505a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727727014157479847,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de
2f71b33d8668e0d24248c5ba505a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 91de2f71b33d8668e0d24248c5ba505a,kubernetes.io/config.seen: 2024-09-30T19:59:56.668273090Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4709331fb79f41392654d87d0cbba6850b4edafe1c7c72a0b9cffa363d1c2fb3,Metadata:&PodSandboxMetadata{Name:kindnet-slhtm,Uid:a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727727014119949886,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T20:00:00.924871676Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8754efd58ac6fd709d308dbfc7dd062dbaebb39928b4645d4af510e8e
3cfbb07,Metadata:&PodSandboxMetadata{Name:etcd-ha-805293,Uid:0dc042ef6adb6bb0f327bb59cec9a57d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727727014097389837,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.3:2379,kubernetes.io/config.hash: 0dc042ef6adb6bb0f327bb59cec9a57d,kubernetes.io/config.seen: 2024-09-30T19:59:56.668268315Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b6eca5d34d418c3897c2f1c73b8bdee9c01ec8e773f446bf95450a7d920e70da,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1912fdf8-d789-4ba9-99ff-c87ccbf330ec,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727727014095891148,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integrat
ion-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-30T20:00:13.713726371Z,kubernetes.io
/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d6d05abaafe65ae0bf04bf51aef7e61d0aabc4fbc70b020c0d74daa5f0100475,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-805293,Uid:f33fa137f85dfeea3a67cdcccdd92a29,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727727014047194333,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f33fa137f85dfeea3a67cdcccdd92a29,kubernetes.io/config.seen: 2024-09-30T19:59:56.668274279Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fd5726427f3e1d9295403eb0289cc84ce04bd43f38db9bd9ff5c93937cb4bad9,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-z4bkv,Uid:c6ba0288-138e-4690-a68d-6d6378e28deb,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727727010916440724,Label
s:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-138e-4690-a68d-6d6378e28deb,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T20:00:13.716538200Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-r27jf,Uid:8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727726550148683985,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T20:02:29.829247076Z,kubernetes.io/config.source: ap
i,},RuntimeHandler:,},&PodSandbox{Id:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-z4bkv,Uid:c6ba0288-138e-4690-a68d-6d6378e28deb,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727726414032844879,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-138e-4690-a68d-6d6378e28deb,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T20:00:13.716538200Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-x7zjp,Uid:b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727726414018743460,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubern
etes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T20:00:13.706232430Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&PodSandboxMetadata{Name:kube-proxy-6gnt4,Uid:a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727726401875351517,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T20:00:00.921254096Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodS
andbox{Id:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&PodSandboxMetadata{Name:kindnet-slhtm,Uid:a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727726401840963162,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T20:00:00.924871676Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&PodSandboxMetadata{Name:etcd-ha-805293,Uid:0dc042ef6adb6bb0f327bb59cec9a57d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727726390010803949,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD
,io.kubernetes.pod.name: etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.3:2379,kubernetes.io/config.hash: 0dc042ef6adb6bb0f327bb59cec9a57d,kubernetes.io/config.seen: 2024-09-30T19:59:49.539868097Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-805293,Uid:f33fa137f85dfeea3a67cdcccdd92a29,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727726389993391808,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f33fa137f85dfe
ea3a67cdcccdd92a29,kubernetes.io/config.seen: 2024-09-30T19:59:49.539875185Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=b60c1f80-92b5-43a3-9962-a36edeca328a name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.607917204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46f35d94-4e10-4343-8199-f2adb7f3a426 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.608019084Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46f35d94-4e10-4343-8199-f2adb7f3a426 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.608589113Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:799d0bb0c993d0ffde3eefbcc05bcb611d96d352cb0ea83e7022f8fbd550dd95,PodSandboxId:b6eca5d34d418c3897c2f1c73b8bdee9c01ec8e773f446bf95450a7d920e70da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727727185719881527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a985f5a2a7c076eb4cf77a8b507f759819a444134f93b1df5e5932da65c1270e,PodSandboxId:8da7e73e0b2fd4d2dd3548bf5624b712504a6e2ffa74d3126fecba092f15c571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727727057730351169,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a945cf678b444c95ced3c0655fedd7e24a271a0269cf64af94ee977600d79ad,PodSandboxId:0351a72258f94e7a77ca9f6c12c179269acb125d6b92774ff9c683b58b75c355,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727727056734933159,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc285523fdce19c14d398147b8793713be6f6d52049dd8b29d2844a668b82645,PodSandboxId:0f134ad7b95b1f2e96670573b8bb737db2ee057af15552e2fb9e2d5f4e25e29f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727727047994332438,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744a1c20ed6c3fe15442d117e472f677d759a07b2075fef70341de56d798d14b,PodSandboxId:bb6065f83dadf08926cabdd5d9999f932c0d8a6d5782ca9efd3b6f505284a827,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727727024835063908,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0b24a252ad7163810aa1bbddc4bc981,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a30cbd3eb0f4ef05c7391f3280d861cd10d0fe8ba20335ae73fcbe214e80a9e,PodSandboxId:ed86ec584c49134727b6ee9b95d6ebf6f92cc75139119cf0ea2b4be83d6df838,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727727014815519285,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:380bd6e347263a3f2a049efae6b9b292d5c95a687b571ed3542ef7673141a92f,PodSandboxId:4709331fb79f41392654d87d0cbba6850b4edafe1c7c72a0b9cffa363d1c2fb3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727727014834391279,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:6fed1262e64394560fbc057ea4f9f851d03675b41610f8834ec91e719fc78857,PodSandboxId:907a40f61fd35c956014f9d913d24ffce1e777898650629dce7c4a64c1a75eed,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727727014680588124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f45850bfc7eb9db0b4c4a227b97d9fe0d1f99e266d77e9b66fc2797453326c,PodSandboxId:0351a72258f94e7a77ca9f6c12c179269acb125d6b92774ff9c683b58b75c355,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727727014612742689,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b7eae086adfa13129e0ee64055dbf5ecef59b6cbb57e8c3f82ec0b37998f6d8,PodSandboxId:8da7e73e0b2fd4d2dd3548bf5624b712504a6e2ffa74d3126fecba092f15c571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727727014583572877,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2953f6dc095a37336d7b0b5d17fb8ae04ee08ce04f58060304fa5031e60041cc,PodSandboxId:b6eca5d34d418c3897c2f1c73b8bdee9c01ec8e773f446bf95450a7d920e70da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727727014507776681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b4f5919856e7020e2eb736700dcc60faf49bb3549a20d86cecc06833256227d,PodSandboxId:8754efd58ac6fd709d308dbfc7dd062dbaebb39928b4645d4af510e8e3cfbb07,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727727014457184858,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9458794f1a510009238ae84c24f002abcd8dd8cfe472470a8cefb49c2d1d1ff,PodSandboxId:d6d05abaafe65ae0bf04bf51aef7e61d0aabc4fbc70b020c0d74daa5f0100475,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727727014414000533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77443ff4394cea6f0d035877e1e1513cab12a1648c096fad857654ededda1936,PodSandboxId:fd5726427f3e1d9295403eb0289cc84ce04bd43f38db9bd9ff5c93937cb4bad9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727727011060782331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-138e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ee59c77c769b646a6f94ef88076d89d99a5138229c27ab2ecd6eedc1ea0137,PodSandboxId:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727726553788930169,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b,PodSandboxId:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727726414317132948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c,PodSandboxId:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727726414250226322,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-138e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa,PodSandboxId:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727726402286751491,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088,PodSandboxId:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727726402007394795,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463,PodSandboxId:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727726390313458555,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c,PodSandboxId:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727726390230834509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=46f35d94-4e10-4343-8199-f2adb7f3a426 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.611078130Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},},}" file="otel-collector/interceptors.go:62" id=fc718548-d2d3-4928-9768-1c92fd0e38be name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.611404699Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b6eca5d34d418c3897c2f1c73b8bdee9c01ec8e773f446bf95450a7d920e70da,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1912fdf8-d789-4ba9-99ff-c87ccbf330ec,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727727014095891148,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\
"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-30T20:00:13.713726371Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=fc718548-d2d3-4928-9768-1c92fd0e38be name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.613782214Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:b6eca5d34d418c3897c2f1c73b8bdee9c01ec8e773f446bf95450a7d920e70da,Verbose:false,}" file="otel-collector/interceptors.go:62" id=76f2c23d-c58c-4b18-b3de-0efc47fc05f9 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.613904418Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:b6eca5d34d418c3897c2f1c73b8bdee9c01ec8e773f446bf95450a7d920e70da,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1912fdf8-d789-4ba9-99ff-c87ccbf330ec,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727727014095891148,Network:&PodSandboxNetworkStatus{Ip:,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:NODE,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},},},Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\
",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-30T20:00:13.713726371Z,kubernetes.io/config.source: api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=76f2c23d-c58c-4b18-b3de-0efc47fc05f9 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.614395058Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},},}" file="otel-collector/interceptors.go:62" id=04ef51f5-556c-4182-bef2-21aea1912736 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.614485635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04ef51f5-556c-4182-bef2-21aea1912736 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.614608728Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:799d0bb0c993d0ffde3eefbcc05bcb611d96d352cb0ea83e7022f8fbd550dd95,PodSandboxId:b6eca5d34d418c3897c2f1c73b8bdee9c01ec8e773f446bf95450a7d920e70da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727727185719881527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2953f6dc095a37336d7b0b5d17fb8ae04ee08ce04f58060304fa5031e60041cc,PodSandboxId:b6eca5d34d418c3897c2f1c73b8bdee9c01ec8e773f446bf95450a7d920e70da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727727014507776681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04ef51f5-556c-4182-bef2-21aea1912736 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.615010024Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:799d0bb0c993d0ffde3eefbcc05bcb611d96d352cb0ea83e7022f8fbd550dd95,Verbose:false,}" file="otel-collector/interceptors.go:62" id=4fbcaa3a-da0b-484d-ac95-4678bbcebf6b name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.615127519Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:799d0bb0c993d0ffde3eefbcc05bcb611d96d352cb0ea83e7022f8fbd550dd95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},State:CONTAINER_RUNNING,CreatedAt:1727727185767425726,StartedAt:1727727185811798647,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination
-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/1912fdf8-d789-4ba9-99ff-c87ccbf330ec/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/1912fdf8-d789-4ba9-99ff-c87ccbf330ec/containers/storage-provisioner/ab574871,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/1912fdf8-d789-4ba9-99ff-c87ccbf330ec/volumes/kubernetes.io~projected/kube-api-access-bzwqd,Readonly:true,SelinuxRelabel:false,Propa
gation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_storage-provisioner_1912fdf8-d789-4ba9-99ff-c87ccbf330ec/storage-provisioner/6.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=4fbcaa3a-da0b-484d-ac95-4678bbcebf6b name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.616636576Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:2953f6dc095a37336d7b0b5d17fb8ae04ee08ce04f58060304fa5031e60041cc,Verbose:false,}" file="otel-collector/interceptors.go:62" id=e02111e0-c710-4751-b75a-87efac8e75ab name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.616923358Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:2953f6dc095a37336d7b0b5d17fb8ae04ee08ce04f58060304fa5031e60041cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},State:CONTAINER_EXITED,CreatedAt:1727727014695398892,StartedAt:1727727014843091467,FinishedAt:1727727015053900216,ExitCode:1,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Reason:Error,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/1912fdf8-d789-4ba9-99ff-c87ccbf330ec/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/1912fdf8-d789-4ba9-99ff-c87ccbf330ec/containers/storage-provisioner/2c7d3fdf,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/1912fdf8-d789-4ba9-99ff-c87ccbf330ec/volumes/kubernetes.io~projected/kube-api-access-bzwqd,Readonly:true,Seli
nuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_storage-provisioner_1912fdf8-d789-4ba9-99ff-c87ccbf330ec/storage-provisioner/5.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=e02111e0-c710-4751-b75a-87efac8e75ab name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.662485883Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8394aef1-18b1-47ca-ac55-575815122d70 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.662564860Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8394aef1-18b1-47ca-ac55-575815122d70 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.663955328Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=afffd52b-aca4-48f8-bbf0-0657d030bfeb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.664449988Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727186664424948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=afffd52b-aca4-48f8-bbf0-0657d030bfeb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.665134011Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1dc2b93-7de8-4f02-8923-4dc411fe77c8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.665206962Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1dc2b93-7de8-4f02-8923-4dc411fe77c8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:13:06 ha-805293 crio[3907]: time="2024-09-30 20:13:06.665676645Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:799d0bb0c993d0ffde3eefbcc05bcb611d96d352cb0ea83e7022f8fbd550dd95,PodSandboxId:b6eca5d34d418c3897c2f1c73b8bdee9c01ec8e773f446bf95450a7d920e70da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727727185719881527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a985f5a2a7c076eb4cf77a8b507f759819a444134f93b1df5e5932da65c1270e,PodSandboxId:8da7e73e0b2fd4d2dd3548bf5624b712504a6e2ffa74d3126fecba092f15c571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727727057730351169,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a945cf678b444c95ced3c0655fedd7e24a271a0269cf64af94ee977600d79ad,PodSandboxId:0351a72258f94e7a77ca9f6c12c179269acb125d6b92774ff9c683b58b75c355,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727727056734933159,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc285523fdce19c14d398147b8793713be6f6d52049dd8b29d2844a668b82645,PodSandboxId:0f134ad7b95b1f2e96670573b8bb737db2ee057af15552e2fb9e2d5f4e25e29f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727727047994332438,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744a1c20ed6c3fe15442d117e472f677d759a07b2075fef70341de56d798d14b,PodSandboxId:bb6065f83dadf08926cabdd5d9999f932c0d8a6d5782ca9efd3b6f505284a827,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727727024835063908,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0b24a252ad7163810aa1bbddc4bc981,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a30cbd3eb0f4ef05c7391f3280d861cd10d0fe8ba20335ae73fcbe214e80a9e,PodSandboxId:ed86ec584c49134727b6ee9b95d6ebf6f92cc75139119cf0ea2b4be83d6df838,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727727014815519285,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:380bd6e347263a3f2a049efae6b9b292d5c95a687b571ed3542ef7673141a92f,PodSandboxId:4709331fb79f41392654d87d0cbba6850b4edafe1c7c72a0b9cffa363d1c2fb3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727727014834391279,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:6fed1262e64394560fbc057ea4f9f851d03675b41610f8834ec91e719fc78857,PodSandboxId:907a40f61fd35c956014f9d913d24ffce1e777898650629dce7c4a64c1a75eed,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727727014680588124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f45850bfc7eb9db0b4c4a227b97d9fe0d1f99e266d77e9b66fc2797453326c,PodSandboxId:0351a72258f94e7a77ca9f6c12c179269acb125d6b92774ff9c683b58b75c355,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727727014612742689,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b7eae086adfa13129e0ee64055dbf5ecef59b6cbb57e8c3f82ec0b37998f6d8,PodSandboxId:8da7e73e0b2fd4d2dd3548bf5624b712504a6e2ffa74d3126fecba092f15c571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727727014583572877,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2953f6dc095a37336d7b0b5d17fb8ae04ee08ce04f58060304fa5031e60041cc,PodSandboxId:b6eca5d34d418c3897c2f1c73b8bdee9c01ec8e773f446bf95450a7d920e70da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727727014507776681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b4f5919856e7020e2eb736700dcc60faf49bb3549a20d86cecc06833256227d,PodSandboxId:8754efd58ac6fd709d308dbfc7dd062dbaebb39928b4645d4af510e8e3cfbb07,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727727014457184858,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9458794f1a510009238ae84c24f002abcd8dd8cfe472470a8cefb49c2d1d1ff,PodSandboxId:d6d05abaafe65ae0bf04bf51aef7e61d0aabc4fbc70b020c0d74daa5f0100475,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727727014414000533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77443ff4394cea6f0d035877e1e1513cab12a1648c096fad857654ededda1936,PodSandboxId:fd5726427f3e1d9295403eb0289cc84ce04bd43f38db9bd9ff5c93937cb4bad9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727727011060782331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-138e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ee59c77c769b646a6f94ef88076d89d99a5138229c27ab2ecd6eedc1ea0137,PodSandboxId:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727726553788930169,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b,PodSandboxId:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727726414317132948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c,PodSandboxId:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727726414250226322,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-138e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa,PodSandboxId:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727726402286751491,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088,PodSandboxId:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727726402007394795,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463,PodSandboxId:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727726390313458555,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c,PodSandboxId:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727726390230834509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1dc2b93-7de8-4f02-8923-4dc411fe77c8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	799d0bb0c993d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      1 second ago        Running             storage-provisioner       6                   b6eca5d34d418       storage-provisioner
	a985f5a2a7c07       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago       Running             kube-controller-manager   2                   8da7e73e0b2fd       kube-controller-manager-ha-805293
	9a945cf678b44       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago       Running             kube-apiserver            3                   0351a72258f94       kube-apiserver-ha-805293
	dc285523fdce1       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago       Running             busybox                   1                   0f134ad7b95b1       busybox-7dff88458-r27jf
	744a1c20ed6c3       18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460                                      2 minutes ago       Running             kube-vip                  0                   bb6065f83dadf       kube-vip-ha-805293
	380bd6e347263       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago       Running             kindnet-cni               1                   4709331fb79f4       kindnet-slhtm
	5a30cbd3eb0f4       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      2 minutes ago       Running             kube-proxy                1                   ed86ec584c491       kube-proxy-6gnt4
	6fed1262e6439       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago       Running             coredns                   1                   907a40f61fd35       coredns-7c65d6cfc9-x7zjp
	e6f45850bfc7e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago       Exited              kube-apiserver            2                   0351a72258f94       kube-apiserver-ha-805293
	5b7eae086adfa       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago       Exited              kube-controller-manager   1                   8da7e73e0b2fd       kube-controller-manager-ha-805293
	2953f6dc095a3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Exited              storage-provisioner       5                   b6eca5d34d418       storage-provisioner
	3b4f5919856e7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago       Running             etcd                      1                   8754efd58ac6f       etcd-ha-805293
	d9458794f1a51       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      2 minutes ago       Running             kube-scheduler            1                   d6d05abaafe65       kube-scheduler-ha-805293
	77443ff4394ce       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago       Running             coredns                   1                   fd5726427f3e1       coredns-7c65d6cfc9-z4bkv
	10ee59c77c769       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago      Exited              busybox                   0                   a8d4349f6e0b0       busybox-7dff88458-r27jf
	8c540e4668f99       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      12 minutes ago      Exited              coredns                   0                   f95d30afc0491       coredns-7c65d6cfc9-x7zjp
	beba42a2bf035       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      12 minutes ago      Exited              coredns                   0                   626fdaeb1b142       coredns-7c65d6cfc9-z4bkv
	e28b6781ed449       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      13 minutes ago      Exited              kindnet-cni               0                   36a3293339cae       kindnet-slhtm
	cd73b6dc43348       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Exited              kube-proxy                0                   27a0913ae182a       kube-proxy-6gnt4
	9b8d5baa6998a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Exited              kube-scheduler            0                   73733467afdd9       kube-scheduler-ha-805293
	219dff1c43cd4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Exited              etcd                      0                   bff718c807eb7       etcd-ha-805293
	
	
	==> coredns [6fed1262e64394560fbc057ea4f9f851d03675b41610f8834ec91e719fc78857] <==
	[INFO] plugin/kubernetes: Trace[1977302319]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 20:10:19.661) (total time: 10001ms):
	Trace[1977302319]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:10:29.662)
	Trace[1977302319]: [10.001378695s] [10.001378695s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[2032279084]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 20:10:19.885) (total time: 10001ms):
	Trace[2032279084]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:10:29.887)
	Trace[2032279084]: [10.001633399s] [10.001633399s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58874->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58874->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [77443ff4394cea6f0d035877e1e1513cab12a1648c096fad857654ededda1936] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[941344975]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 20:10:18.972) (total time: 10001ms):
	Trace[941344975]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:10:28.973)
	Trace[941344975]: [10.001074456s] [10.001074456s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:55510->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:55510->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
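
	The reflector failures above are the coredns kubernetes plugin trying to list/watch Services, Namespaces and EndpointSlices on the apiserver Service VIP (https://10.96.0.1:443). For reference only, a minimal client-go sketch of that kind of list call; the in-cluster config and the 10s timeout are assumptions, not taken from the captured logs:

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// rest.InClusterConfig points at the same https://10.96.0.1:443 VIP the
		// plugin is failing to reach (assumes this runs inside a pod).
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cfg.Timeout = 10 * time.Second

		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Same shape of request as the reflector's initial list.
		svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{Limit: 500})
		if err != nil {
			// "no route to host" or a TLS handshake timeout would surface here.
			fmt.Println("list services failed:", err)
			return
		}
		fmt.Println("services visible:", len(svcs.Items))
	}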
	
	
	==> coredns [8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b] <==
	[INFO] 10.244.1.2:50368 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000261008s
	[INFO] 10.244.1.2:34858 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000270623s
	[INFO] 10.244.1.2:59975 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000192447s
	[INFO] 10.244.2.2:37486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233576s
	[INFO] 10.244.2.2:40647 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002177996s
	[INFO] 10.244.2.2:39989 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000196915s
	[INFO] 10.244.2.2:42105 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001612348s
	[INFO] 10.244.2.2:42498 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180331s
	[INFO] 10.244.2.2:34873 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000262642s
	[INFO] 10.244.0.4:55282 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002337707s
	[INFO] 10.244.0.4:52721 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082276s
	[INFO] 10.244.0.4:33773 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001975703s
	[INFO] 10.244.0.4:44087 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095899s
	[INFO] 10.244.1.2:44456 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189431s
	[INFO] 10.244.1.2:52532 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112979s
	[INFO] 10.244.1.2:39707 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095712s
	[INFO] 10.244.2.2:42900 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101241s
	[INFO] 10.244.0.4:56608 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134276s
	[INFO] 10.244.1.2:35939 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00031266s
	[INFO] 10.244.1.2:48131 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000196792s
	[INFO] 10.244.2.2:40732 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000154649s
	[INFO] 10.244.0.4:51180 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000206094s
	[INFO] 10.244.0.4:36921 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000118718s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c] <==
	[INFO] 10.244.1.2:59221 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00021778s
	[INFO] 10.244.1.2:56069 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0044481s
	[INFO] 10.244.1.2:50386 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00023413s
	[INFO] 10.244.2.2:46506 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103313s
	[INFO] 10.244.2.2:41909 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000177677s
	[INFO] 10.244.0.4:57981 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180642s
	[INFO] 10.244.0.4:42071 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100781s
	[INFO] 10.244.0.4:53066 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079995s
	[INFO] 10.244.0.4:54192 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095317s
	[INFO] 10.244.1.2:42705 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147435s
	[INFO] 10.244.2.2:42448 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014108s
	[INFO] 10.244.2.2:58687 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152745s
	[INFO] 10.244.2.2:59433 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159734s
	[INFO] 10.244.0.4:34822 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086009s
	[INFO] 10.244.0.4:46188 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067594s
	[INFO] 10.244.0.4:33829 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130532s
	[INFO] 10.244.1.2:56575 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000557946s
	[INFO] 10.244.1.2:41726 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145733s
	[INFO] 10.244.2.2:56116 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108892s
	[INFO] 10.244.2.2:58958 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000075413s
	[INFO] 10.244.2.2:42001 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077659s
	[INFO] 10.244.0.4:53905 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091303s
	[INFO] 10.244.0.4:41906 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000098967s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
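
	The NXDOMAIN/NOERROR pairs in the query log above are ordinary search-path expansion: short names such as kubernetes.default are first tried against the pod's search domains before the fully qualified kubernetes.default.svc.cluster.local resolves. A small sketch of that final lookup, assuming it runs in a pod whose /etc/resolv.conf points at the cluster DNS Service (10.96.0.10 in these logs):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Uses the system resolver, i.e. the coredns Service from inside a pod.
		var r net.Resolver
		addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("kubernetes.default.svc.cluster.local ->", addrs) // typically [10.96.0.1]
	}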
	
	
	==> describe nodes <==
	Name:               ha-805293
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-805293
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=ha-805293
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T19_59_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 19:59:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-805293
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:12:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:10:59 +0000   Mon, 30 Sep 2024 19:59:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:10:59 +0000   Mon, 30 Sep 2024 19:59:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:10:59 +0000   Mon, 30 Sep 2024 19:59:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:10:59 +0000   Mon, 30 Sep 2024 20:00:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    ha-805293
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 866f17ca2f8945bb8c8d7336ea64bab7
	  System UUID:                866f17ca-2f89-45bb-8c8d-7336ea64bab7
	  Boot ID:                    688ba3e5-bec7-403a-8a14-d517107abdf5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-r27jf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-x7zjp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7c65d6cfc9-z4bkv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-805293                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-slhtm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-805293             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-805293    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-6gnt4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-805293             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-805293                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m9s                   kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-805293 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-805293 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-805293 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-805293 event: Registered Node ha-805293 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-805293 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-805293 event: Registered Node ha-805293 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-805293 event: Registered Node ha-805293 in Controller
	  Warning  ContainerGCFailed        3m11s (x2 over 4m11s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             2m57s (x3 over 3m46s)  kubelet          Node ha-805293 status is now: NodeNotReady
	  Normal   RegisteredNode           2m16s                  node-controller  Node ha-805293 event: Registered Node ha-805293 in Controller
	  Normal   RegisteredNode           2m5s                   node-controller  Node ha-805293 event: Registered Node ha-805293 in Controller
	  Normal   RegisteredNode           45s                    node-controller  Node ha-805293 event: Registered Node ha-805293 in Controller
	
	
	Name:               ha-805293-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-805293-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=ha-805293
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T20_00_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:00:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-805293-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:13:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:11:39 +0000   Mon, 30 Sep 2024 20:10:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:11:39 +0000   Mon, 30 Sep 2024 20:10:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:11:39 +0000   Mon, 30 Sep 2024 20:10:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:11:39 +0000   Mon, 30 Sep 2024 20:10:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    ha-805293-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d0700264de549a1be3f1020308847ab
	  System UUID:                4d070026-4de5-49a1-be3f-1020308847ab
	  Boot ID:                    c2afb042-4941-4000-8a03-eb4543e77620
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lshpm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-805293-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-lfldt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-805293-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-805293-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-vptrg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-805293-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-805293-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 103s                   kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-805293-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-805293-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-805293-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-805293-m02 event: Registered Node ha-805293-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-805293-m02 event: Registered Node ha-805293-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-805293-m02 event: Registered Node ha-805293-m02 in Controller
	  Normal  NodeNotReady             8m42s                  node-controller  Node ha-805293-m02 status is now: NodeNotReady
	  Normal  Starting                 2m37s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m37s (x8 over 2m37s)  kubelet          Node ha-805293-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m37s (x8 over 2m37s)  kubelet          Node ha-805293-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m37s (x7 over 2m37s)  kubelet          Node ha-805293-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m16s                  node-controller  Node ha-805293-m02 event: Registered Node ha-805293-m02 in Controller
	  Normal  RegisteredNode           2m5s                   node-controller  Node ha-805293-m02 event: Registered Node ha-805293-m02 in Controller
	  Normal  RegisteredNode           45s                    node-controller  Node ha-805293-m02 event: Registered Node ha-805293-m02 in Controller
	
	
	Name:               ha-805293-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-805293-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=ha-805293
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T20_02_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:02:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-805293-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:12:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:12:38 +0000   Mon, 30 Sep 2024 20:12:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:12:38 +0000   Mon, 30 Sep 2024 20:12:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:12:38 +0000   Mon, 30 Sep 2024 20:12:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:12:38 +0000   Mon, 30 Sep 2024 20:12:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-805293-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d290a9661d284f5abbb0966111b1ff62
	  System UUID:                d290a966-1d28-4f5a-bbb0-966111b1ff62
	  Boot ID:                    b4ddf9c2-1033-4cfc-9926-b02a54b07142
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nfncv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-805293-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-qrhb8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-805293-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-805293-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-b9cpp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-805293-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-805293-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 42s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-805293-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-805293-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-805293-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-805293-m03 event: Registered Node ha-805293-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-805293-m03 event: Registered Node ha-805293-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-805293-m03 event: Registered Node ha-805293-m03 in Controller
	  Normal   RegisteredNode           2m16s              node-controller  Node ha-805293-m03 event: Registered Node ha-805293-m03 in Controller
	  Normal   RegisteredNode           2m5s               node-controller  Node ha-805293-m03 event: Registered Node ha-805293-m03 in Controller
	  Normal   NodeNotReady             96s                node-controller  Node ha-805293-m03 status is now: NodeNotReady
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  60s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  60s (x2 over 60s)  kubelet          Node ha-805293-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x2 over 60s)  kubelet          Node ha-805293-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x2 over 60s)  kubelet          Node ha-805293-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 60s                kubelet          Node ha-805293-m03 has been rebooted, boot id: b4ddf9c2-1033-4cfc-9926-b02a54b07142
	  Normal   NodeReady                60s                kubelet          Node ha-805293-m03 status is now: NodeReady
	  Normal   RegisteredNode           45s                node-controller  Node ha-805293-m03 event: Registered Node ha-805293-m03 in Controller
	
	
	Name:               ha-805293-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-805293-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=ha-805293
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T20_03_07_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:03:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-805293-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:12:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:12:58 +0000   Mon, 30 Sep 2024 20:12:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:12:58 +0000   Mon, 30 Sep 2024 20:12:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:12:58 +0000   Mon, 30 Sep 2024 20:12:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:12:58 +0000   Mon, 30 Sep 2024 20:12:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    ha-805293-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 66e464978dbd400d9e13327c67f50978
	  System UUID:                66e46497-8dbd-400d-9e13-327c67f50978
	  Boot ID:                    6e1244f9-7880-4f80-9034-5826420e0122
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pk4z9       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-7hn94    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 9m54s              kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-805293-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-805293-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-805293-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m57s              node-controller  Node ha-805293-m04 event: Registered Node ha-805293-m04 in Controller
	  Normal   RegisteredNode           9m57s              node-controller  Node ha-805293-m04 event: Registered Node ha-805293-m04 in Controller
	  Normal   RegisteredNode           9m56s              node-controller  Node ha-805293-m04 event: Registered Node ha-805293-m04 in Controller
	  Normal   NodeReady                9m39s              kubelet          Node ha-805293-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m16s              node-controller  Node ha-805293-m04 event: Registered Node ha-805293-m04 in Controller
	  Normal   RegisteredNode           2m5s               node-controller  Node ha-805293-m04 event: Registered Node ha-805293-m04 in Controller
	  Normal   NodeNotReady             96s                node-controller  Node ha-805293-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           45s                node-controller  Node ha-805293-m04 event: Registered Node ha-805293-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9s                 kubelet          Node ha-805293-m04 has been rebooted, boot id: 6e1244f9-7880-4f80-9034-5826420e0122
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)    kubelet          Node ha-805293-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)    kubelet          Node ha-805293-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)    kubelet          Node ha-805293-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                9s                 kubelet          Node ha-805293-m04 status is now: NodeReady
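
	The four node descriptions above are rendered from the Node API objects; the same Conditions and PodCIDR fields shown by kubectl describe can be read directly with client-go. A minimal sketch, with the kubeconfig path as an assumption (minikube writes ~/.kube/config by default):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the local kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// The Ready/MemoryPressure/DiskPressure/PIDPressure rows come from
		// each node's status.conditions; PodCIDRs comes from the node spec.
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Println(n.Name, "PodCIDRs:", n.Spec.PodCIDRs)
			for _, c := range n.Status.Conditions {
				fmt.Printf("  %-16s %s (%s)\n", c.Type, c.Status, c.Reason)
			}
		}
	}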
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.789974] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.062566] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063093] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.202518] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.124623] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.268552] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +3.977529] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +4.564932] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.062130] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.342874] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.088317] kauditd_printk_skb: 79 callbacks suppressed
	[Sep30 20:00] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.197664] kauditd_printk_skb: 38 callbacks suppressed
	[ +40.392588] kauditd_printk_skb: 26 callbacks suppressed
	[Sep30 20:06] kauditd_printk_skb: 1 callbacks suppressed
	[Sep30 20:10] systemd-fstab-generator[3832]: Ignoring "noauto" option for root device
	[  +0.147186] systemd-fstab-generator[3844]: Ignoring "noauto" option for root device
	[  +0.197988] systemd-fstab-generator[3858]: Ignoring "noauto" option for root device
	[  +0.165734] systemd-fstab-generator[3870]: Ignoring "noauto" option for root device
	[  +0.283923] systemd-fstab-generator[3898]: Ignoring "noauto" option for root device
	[  +3.707715] systemd-fstab-generator[3994]: Ignoring "noauto" option for root device
	[  +3.457916] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.008545] kauditd_printk_skb: 85 callbacks suppressed
	[Sep30 20:11] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c] <==
	{"level":"info","ts":"2024-09-30T20:08:31.643045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c received MsgPreVoteResp from ac0ce77fb984259c at term 2"}
	{"level":"info","ts":"2024-09-30T20:08:31.643119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c [logterm: 2, index: 2084] sent MsgPreVote request to 2f3ead44f397c7d2 at term 2"}
	{"level":"info","ts":"2024-09-30T20:08:31.643146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c [logterm: 2, index: 2084] sent MsgPreVote request to 5403ce2c8324712e at term 2"}
	{"level":"warn","ts":"2024-09-30T20:08:31.667466Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.3:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T20:08:31.667521Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.3:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-30T20:08:31.667595Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"ac0ce77fb984259c","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-30T20:08:31.667809Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"5403ce2c8324712e"}
	{"level":"info","ts":"2024-09-30T20:08:31.667899Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5403ce2c8324712e"}
	{"level":"info","ts":"2024-09-30T20:08:31.668003Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5403ce2c8324712e"}
	{"level":"info","ts":"2024-09-30T20:08:31.668224Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e"}
	{"level":"info","ts":"2024-09-30T20:08:31.668357Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e"}
	{"level":"info","ts":"2024-09-30T20:08:31.668453Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e"}
	{"level":"info","ts":"2024-09-30T20:08:31.668500Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"5403ce2c8324712e"}
	{"level":"info","ts":"2024-09-30T20:08:31.668524Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:08:31.668551Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:08:31.668626Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:08:31.668764Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ac0ce77fb984259c","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:08:31.668835Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ac0ce77fb984259c","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:08:31.668937Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ac0ce77fb984259c","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:08:31.669004Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:08:31.672668Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.3:2380"}
	{"level":"warn","ts":"2024-09-30T20:08:31.672687Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.260749104s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-30T20:08:31.672830Z","caller":"traceutil/trace.go:171","msg":"trace[966761164] range","detail":"{range_begin:; range_end:; }","duration":"9.260907346s","start":"2024-09-30T20:08:22.411913Z","end":"2024-09-30T20:08:31.672820Z","steps":["trace[966761164] 'agreement among raft nodes before linearized reading'  (duration: 9.260744943s)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T20:08:31.672789Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.3:2380"}
	{"level":"info","ts":"2024-09-30T20:08:31.672942Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-805293","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.3:2380"],"advertise-client-urls":["https://192.168.39.3:2379"]}
	
	
	==> etcd [3b4f5919856e7020e2eb736700dcc60faf49bb3549a20d86cecc06833256227d] <==
	{"level":"warn","ts":"2024-09-30T20:12:01.505896Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"2f3ead44f397c7d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:12:01.535445Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"2f3ead44f397c7d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:12:01.537628Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"2f3ead44f397c7d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:12:01.577247Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"2f3ead44f397c7d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:12:01.659717Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"2f3ead44f397c7d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:12:01.677398Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ac0ce77fb984259c","from":"ac0ce77fb984259c","remote-peer-id":"2f3ead44f397c7d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T20:12:05.073219Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.227:2380/version","remote-member-id":"2f3ead44f397c7d2","error":"Get \"https://192.168.39.227:2380/version\": dial tcp 192.168.39.227:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T20:12:05.073276Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"2f3ead44f397c7d2","error":"Get \"https://192.168.39.227:2380/version\": dial tcp 192.168.39.227:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T20:12:05.482384Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2f3ead44f397c7d2","rtt":"0s","error":"dial tcp 192.168.39.227:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T20:12:05.482426Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2f3ead44f397c7d2","rtt":"0s","error":"dial tcp 192.168.39.227:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T20:12:09.075102Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.227:2380/version","remote-member-id":"2f3ead44f397c7d2","error":"Get \"https://192.168.39.227:2380/version\": dial tcp 192.168.39.227:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T20:12:09.075167Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"2f3ead44f397c7d2","error":"Get \"https://192.168.39.227:2380/version\": dial tcp 192.168.39.227:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T20:12:10.483534Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2f3ead44f397c7d2","rtt":"0s","error":"dial tcp 192.168.39.227:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T20:12:10.483650Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2f3ead44f397c7d2","rtt":"0s","error":"dial tcp 192.168.39.227:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T20:12:13.077051Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.227:2380/version","remote-member-id":"2f3ead44f397c7d2","error":"Get \"https://192.168.39.227:2380/version\": dial tcp 192.168.39.227:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T20:12:13.077220Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"2f3ead44f397c7d2","error":"Get \"https://192.168.39.227:2380/version\": dial tcp 192.168.39.227:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-30T20:12:14.612267Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:12:14.617802Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ac0ce77fb984259c","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:12:14.617785Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"ac0ce77fb984259c","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:12:14.642710Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ac0ce77fb984259c","to":"2f3ead44f397c7d2","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-30T20:12:14.642838Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"ac0ce77fb984259c","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:12:14.643839Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ac0ce77fb984259c","to":"2f3ead44f397c7d2","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-30T20:12:14.643886Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"ac0ce77fb984259c","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"warn","ts":"2024-09-30T20:13:02.221808Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.966592ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-6gnt4\" ","response":"range_response_count:1 size:4887"}
	{"level":"info","ts":"2024-09-30T20:13:02.221932Z","caller":"traceutil/trace.go:171","msg":"trace[681394436] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-6gnt4; range_end:; response_count:1; response_revision:2453; }","duration":"127.14926ms","start":"2024-09-30T20:13:02.094760Z","end":"2024-09-30T20:13:02.221909Z","steps":["trace[681394436] 'range keys from in-memory index tree'  (duration: 126.136004ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:13:07 up 13 min,  0 users,  load average: 0.14, 0.42, 0.29
	Linux ha-805293 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [380bd6e347263a3f2a049efae6b9b292d5c95a687b571ed3542ef7673141a92f] <==
	I0930 20:12:36.014604       1 main.go:322] Node ha-805293-m03 has CIDR [10.244.2.0/24] 
	I0930 20:12:46.007879       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:12:46.007994       1 main.go:299] handling current node
	I0930 20:12:46.008023       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:12:46.008042       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:12:46.008188       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0930 20:12:46.008212       1 main.go:322] Node ha-805293-m03 has CIDR [10.244.2.0/24] 
	I0930 20:12:46.008271       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:12:46.008353       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	I0930 20:12:56.007773       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0930 20:12:56.007817       1 main.go:322] Node ha-805293-m03 has CIDR [10.244.2.0/24] 
	I0930 20:12:56.007932       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:12:56.007950       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	I0930 20:12:56.008008       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:12:56.008027       1 main.go:299] handling current node
	I0930 20:12:56.008042       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:12:56.008047       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:13:06.007877       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:13:06.007950       1 main.go:299] handling current node
	I0930 20:13:06.007972       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:13:06.007980       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:13:06.008154       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0930 20:13:06.008177       1 main.go:322] Node ha-805293-m03 has CIDR [10.244.2.0/24] 
	I0930 20:13:06.008251       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:13:06.008257       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa] <==
	I0930 20:08:03.352100       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:08:03.352205       1 main.go:299] handling current node
	I0930 20:08:03.352219       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:08:03.352225       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:08:03.352431       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0930 20:08:03.352440       1 main.go:322] Node ha-805293-m03 has CIDR [10.244.2.0/24] 
	I0930 20:08:03.352495       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:08:03.352501       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	I0930 20:08:13.352708       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:08:13.352903       1 main.go:299] handling current node
	I0930 20:08:13.352944       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:08:13.353013       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:08:13.353414       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0930 20:08:13.353493       1 main.go:322] Node ha-805293-m03 has CIDR [10.244.2.0/24] 
	I0930 20:08:13.353612       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:08:13.353635       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	I0930 20:08:23.353816       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:08:23.353887       1 main.go:299] handling current node
	I0930 20:08:23.353919       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:08:23.353928       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:08:23.354115       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0930 20:08:23.354139       1 main.go:322] Node ha-805293-m03 has CIDR [10.244.2.0/24] 
	I0930 20:08:23.354197       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:08:23.354215       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	E0930 20:08:29.411634       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes)
	
	
	==> kube-apiserver [9a945cf678b444c95ced3c0655fedd7e24a271a0269cf64af94ee977600d79ad] <==
	I0930 20:10:59.076227       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0930 20:10:59.076438       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0930 20:10:59.160065       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0930 20:10:59.160891       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0930 20:10:59.161158       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0930 20:10:59.162207       1 shared_informer.go:320] Caches are synced for configmaps
	I0930 20:10:59.162339       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0930 20:10:59.162388       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0930 20:10:59.171454       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0930 20:10:59.171624       1 aggregator.go:171] initial CRD sync complete...
	I0930 20:10:59.171681       1 autoregister_controller.go:144] Starting autoregister controller
	I0930 20:10:59.171713       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0930 20:10:59.171742       1 cache.go:39] Caches are synced for autoregister controller
	I0930 20:10:59.173006       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0930 20:10:59.203189       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0930 20:10:59.214077       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0930 20:10:59.214165       1 policy_source.go:224] refreshing policies
	I0930 20:10:59.253464       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0930 20:10:59.260635       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0930 20:10:59.268525       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.220 192.168.39.227]
	I0930 20:10:59.270239       1 controller.go:615] quota admission added evaluator for: endpoints
	I0930 20:10:59.281267       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0930 20:10:59.285163       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0930 20:11:00.060521       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0930 20:11:00.500848       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.220 192.168.39.3]
	
	
	==> kube-apiserver [e6f45850bfc7eb9db0b4c4a227b97d9fe0d1f99e266d77e9b66fc2797453326c] <==
	I0930 20:10:15.301711       1 options.go:228] external host was not specified, using 192.168.39.3
	I0930 20:10:15.316587       1 server.go:142] Version: v1.31.1
	I0930 20:10:15.316643       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:10:16.308403       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0930 20:10:16.312615       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0930 20:10:16.314558       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0930 20:10:16.314587       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0930 20:10:16.315076       1 instance.go:232] Using reconciler: lease
	W0930 20:10:36.296616       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0930 20:10:36.313188       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0930 20:10:36.316574       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	W0930 20:10:36.316595       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	
	
	==> kube-controller-manager [5b7eae086adfa13129e0ee64055dbf5ecef59b6cbb57e8c3f82ec0b37998f6d8] <==
	I0930 20:10:16.966839       1 serving.go:386] Generated self-signed cert in-memory
	I0930 20:10:17.241608       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0930 20:10:17.241647       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:10:17.243087       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0930 20:10:17.243204       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0930 20:10:17.243451       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0930 20:10:17.243812       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0930 20:10:37.321872       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.3:8443/healthz\": dial tcp 192.168.39.3:8443: connect: connection refused"
	
	
	==> kube-controller-manager [a985f5a2a7c076eb4cf77a8b507f759819a444134f93b1df5e5932da65c1270e] <==
	I0930 20:11:31.492804       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m03"
	I0930 20:11:31.516599       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:11:31.519791       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m03"
	I0930 20:11:31.550857       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.730751ms"
	I0930 20:11:31.550984       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.72µs"
	I0930 20:11:32.621855       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m03"
	I0930 20:11:34.064843       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="20.474057ms"
	I0930 20:11:34.065604       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="344.882µs"
	I0930 20:11:36.743679       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m03"
	I0930 20:11:39.442584       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m02"
	I0930 20:11:42.702792       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:11:46.834200       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:12:07.432640       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m03"
	I0930 20:12:07.452416       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m03"
	I0930 20:12:07.541518       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m03"
	I0930 20:12:08.352485       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="128.482µs"
	I0930 20:12:22.331550       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:12:22.422242       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:12:28.616909       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.799757ms"
	I0930 20:12:28.617633       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="72.427µs"
	I0930 20:12:38.079467       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m03"
	I0930 20:12:58.742821       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:12:58.743524       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-805293-m04"
	I0930 20:12:58.758958       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:13:01.722128       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	
	
	==> kube-proxy [5a30cbd3eb0f4ef05c7391f3280d861cd10d0fe8ba20335ae73fcbe214e80a9e] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 20:10:18.595797       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-805293\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0930 20:10:21.668229       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-805293\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0930 20:10:24.738810       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-805293\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0930 20:10:30.883624       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-805293\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0930 20:10:40.099646       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-805293\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0930 20:10:57.356484       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.3"]
	E0930 20:10:57.361677       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 20:10:57.398142       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 20:10:57.398234       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 20:10:57.398275       1 server_linux.go:169] "Using iptables Proxier"
	I0930 20:10:57.400593       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 20:10:57.400913       1 server.go:483] "Version info" version="v1.31.1"
	I0930 20:10:57.401088       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:10:57.402779       1 config.go:199] "Starting service config controller"
	I0930 20:10:57.402864       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 20:10:57.402910       1 config.go:105] "Starting endpoint slice config controller"
	I0930 20:10:57.402937       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 20:10:57.403632       1 config.go:328] "Starting node config controller"
	I0930 20:10:57.403687       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 20:10:57.504385       1 shared_informer.go:320] Caches are synced for node config
	I0930 20:10:57.504477       1 shared_informer.go:320] Caches are synced for service config
	I0930 20:10:57.504488       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088] <==
	E0930 20:07:20.099660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1755\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 20:07:20.099833       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-805293&resourceVersion=1713": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 20:07:20.099872       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-805293&resourceVersion=1713\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 20:07:26.565918       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-805293&resourceVersion=1713": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 20:07:26.566172       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-805293&resourceVersion=1713\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 20:07:26.566731       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1755": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 20:07:26.566999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1755\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 20:07:26.566564       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 20:07:26.567212       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 20:07:35.780809       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-805293&resourceVersion=1713": dial tcp 192.168.39.254:8443: connect: no route to host
	W0930 20:07:35.780915       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1755": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 20:07:35.780961       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-805293&resourceVersion=1713\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0930 20:07:35.781010       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1755\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 20:07:38.851644       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 20:07:38.851765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 20:07:51.139100       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-805293&resourceVersion=1713": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 20:07:51.139211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-805293&resourceVersion=1713\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 20:07:51.139556       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1755": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 20:07:51.139637       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1755\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 20:07:54.211818       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 20:07:54.211893       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 20:08:24.930850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 20:08:24.931045       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 20:08:31.076856       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1755": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 20:08:31.077089       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1755\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463] <==
	W0930 19:59:54.769876       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0930 19:59:54.770087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0930 19:59:56.900381       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0930 20:02:01.539050       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-h6pvg\": pod kube-proxy-h6pvg is already assigned to node \"ha-805293-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-h6pvg" node="ha-805293-m03"
	E0930 20:02:01.539424       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9860392c-eca6-4200-9b6e-f0a6f51b523b(kube-system/kube-proxy-h6pvg) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-h6pvg"
	E0930 20:02:01.539482       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-h6pvg\": pod kube-proxy-h6pvg is already assigned to node \"ha-805293-m03\"" pod="kube-system/kube-proxy-h6pvg"
	I0930 20:02:01.539558       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-h6pvg" node="ha-805293-m03"
	E0930 20:02:29.833811       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lshpm\": pod busybox-7dff88458-lshpm is already assigned to node \"ha-805293-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-lshpm" node="ha-805293-m02"
	E0930 20:02:29.833910       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lshpm\": pod busybox-7dff88458-lshpm is already assigned to node \"ha-805293-m02\"" pod="default/busybox-7dff88458-lshpm"
	E0930 20:08:16.746057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0930 20:08:20.006558       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0930 20:08:20.376984       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0930 20:08:20.532839       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0930 20:08:21.975983       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0930 20:08:22.493855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0930 20:08:24.078452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0930 20:08:25.517228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0930 20:08:25.521965       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0930 20:08:26.124779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0930 20:08:26.396541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0930 20:08:27.181371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0930 20:08:28.995877       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0930 20:08:29.133144       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0930 20:08:29.550636       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0930 20:08:31.492740       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d9458794f1a510009238ae84c24f002abcd8dd8cfe472470a8cefb49c2d1d1ff] <==
	W0930 20:10:52.895389       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.3:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.3:8443: connect: connection refused
	E0930 20:10:52.895444       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.3:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.3:8443: connect: connection refused" logger="UnhandledError"
	W0930 20:10:53.258661       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.3:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.3:8443: connect: connection refused
	E0930 20:10:53.258822       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.3:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.3:8443: connect: connection refused" logger="UnhandledError"
	W0930 20:10:53.885193       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.3:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.3:8443: connect: connection refused
	E0930 20:10:53.885390       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.3:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.3:8443: connect: connection refused" logger="UnhandledError"
	W0930 20:10:54.584840       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.3:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.3:8443: connect: connection refused
	E0930 20:10:54.584919       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.3:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.3:8443: connect: connection refused" logger="UnhandledError"
	W0930 20:10:55.401838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.3:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.3:8443: connect: connection refused
	E0930 20:10:55.401971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.3:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.3:8443: connect: connection refused" logger="UnhandledError"
	W0930 20:10:55.453229       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.3:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.3:8443: connect: connection refused
	E0930 20:10:55.453372       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.3:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.3:8443: connect: connection refused" logger="UnhandledError"
	W0930 20:10:55.516878       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.3:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.3:8443: connect: connection refused
	E0930 20:10:55.517000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.3:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.3:8443: connect: connection refused" logger="UnhandledError"
	W0930 20:10:56.271412       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.3:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.3:8443: connect: connection refused
	E0930 20:10:56.271472       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.3:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.3:8443: connect: connection refused" logger="UnhandledError"
	W0930 20:10:56.676785       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.3:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.3:8443: connect: connection refused
	E0930 20:10:56.676844       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.3:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.3:8443: connect: connection refused" logger="UnhandledError"
	W0930 20:10:59.084595       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0930 20:10:59.084767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 20:10:59.085142       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0930 20:10:59.085229       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 20:10:59.093841       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 20:10:59.093973       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0930 20:11:16.302416       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 20:12:12 ha-805293 kubelet[1307]: E0930 20:12:12.705640    1307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1912fdf8-d789-4ba9-99ff-c87ccbf330ec)\"" pod="kube-system/storage-provisioner" podUID="1912fdf8-d789-4ba9-99ff-c87ccbf330ec"
	Sep 30 20:12:16 ha-805293 kubelet[1307]: E0930 20:12:16.946636    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727136945153349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:12:16 ha-805293 kubelet[1307]: E0930 20:12:16.949093    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727136945153349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:12:25 ha-805293 kubelet[1307]: I0930 20:12:25.705038    1307 scope.go:117] "RemoveContainer" containerID="2953f6dc095a37336d7b0b5d17fb8ae04ee08ce04f58060304fa5031e60041cc"
	Sep 30 20:12:25 ha-805293 kubelet[1307]: E0930 20:12:25.705612    1307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1912fdf8-d789-4ba9-99ff-c87ccbf330ec)\"" pod="kube-system/storage-provisioner" podUID="1912fdf8-d789-4ba9-99ff-c87ccbf330ec"
	Sep 30 20:12:26 ha-805293 kubelet[1307]: E0930 20:12:26.950825    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727146950557737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:12:26 ha-805293 kubelet[1307]: E0930 20:12:26.950867    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727146950557737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:12:36 ha-805293 kubelet[1307]: I0930 20:12:36.705518    1307 scope.go:117] "RemoveContainer" containerID="2953f6dc095a37336d7b0b5d17fb8ae04ee08ce04f58060304fa5031e60041cc"
	Sep 30 20:12:36 ha-805293 kubelet[1307]: E0930 20:12:36.706218    1307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1912fdf8-d789-4ba9-99ff-c87ccbf330ec)\"" pod="kube-system/storage-provisioner" podUID="1912fdf8-d789-4ba9-99ff-c87ccbf330ec"
	Sep 30 20:12:36 ha-805293 kubelet[1307]: E0930 20:12:36.952706    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727156952385345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:12:36 ha-805293 kubelet[1307]: E0930 20:12:36.952791    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727156952385345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:12:46 ha-805293 kubelet[1307]: E0930 20:12:46.955264    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727166954933272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:12:46 ha-805293 kubelet[1307]: E0930 20:12:46.955353    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727166954933272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:12:50 ha-805293 kubelet[1307]: I0930 20:12:50.705637    1307 scope.go:117] "RemoveContainer" containerID="2953f6dc095a37336d7b0b5d17fb8ae04ee08ce04f58060304fa5031e60041cc"
	Sep 30 20:12:50 ha-805293 kubelet[1307]: E0930 20:12:50.705855    1307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1912fdf8-d789-4ba9-99ff-c87ccbf330ec)\"" pod="kube-system/storage-provisioner" podUID="1912fdf8-d789-4ba9-99ff-c87ccbf330ec"
	Sep 30 20:12:56 ha-805293 kubelet[1307]: E0930 20:12:56.735532    1307 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 20:12:56 ha-805293 kubelet[1307]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 20:12:56 ha-805293 kubelet[1307]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 20:12:56 ha-805293 kubelet[1307]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 20:12:56 ha-805293 kubelet[1307]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 20:12:56 ha-805293 kubelet[1307]: E0930 20:12:56.957729    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727176957233640,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:12:56 ha-805293 kubelet[1307]: E0930 20:12:56.957772    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727176957233640,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:13:05 ha-805293 kubelet[1307]: I0930 20:13:05.705403    1307 scope.go:117] "RemoveContainer" containerID="2953f6dc095a37336d7b0b5d17fb8ae04ee08ce04f58060304fa5031e60041cc"
	Sep 30 20:13:06 ha-805293 kubelet[1307]: E0930 20:13:06.960247    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727186959218911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:13:06 ha-805293 kubelet[1307]: E0930 20:13:06.960317    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727186959218911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 20:13:06.209756   33436 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19736-7672/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-805293 -n ha-805293
helpers_test.go:261: (dbg) Run:  kubectl --context ha-805293 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (399.73s)
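Note on the stderr above: the "bufio.Scanner: token too long" message means the post-mortem could not echo lastStart.txt because a line in that file exceeded Go's default bufio.Scanner token limit of 64 KiB (bufio.MaxScanTokenSize). A minimal Go sketch of the failure mode and the usual workaround, growing the scanner buffer before the first Scan, follows; the file name and the 10 MiB cap are illustrative choices, not values taken from the minikube source.

	// Reads a log file line by line without hitting bufio.Scanner's
	// default 64 KiB per-token limit ("bufio.Scanner: token too long").
	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // illustrative path
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Without this call, any line longer than bufio.MaxScanTokenSize
		// (64 KiB) stops the scan and sc.Err() returns bufio.ErrTooLong.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			log.Fatalf("failed to read file: %v", err)
		}
	}

Scanner.Buffer must be called before the first Scan (it panics otherwise), which is why the fix is applied up front. The warning itself appears only in the post-mortem stderr, after the test had already failed.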

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 stop -v=7 --alsologtostderr
E0930 20:13:28.936117   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-805293 stop -v=7 --alsologtostderr: exit status 82 (2m0.475931276s)

                                                
                                                
-- stdout --
	* Stopping node "ha-805293-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 20:13:25.646038   33875 out.go:345] Setting OutFile to fd 1 ...
	I0930 20:13:25.646168   33875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:13:25.646179   33875 out.go:358] Setting ErrFile to fd 2...
	I0930 20:13:25.646185   33875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:13:25.646397   33875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 20:13:25.646647   33875 out.go:352] Setting JSON to false
	I0930 20:13:25.646743   33875 mustload.go:65] Loading cluster: ha-805293
	I0930 20:13:25.647137   33875 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:13:25.647236   33875 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:13:25.647435   33875 mustload.go:65] Loading cluster: ha-805293
	I0930 20:13:25.647638   33875 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:13:25.647688   33875 stop.go:39] StopHost: ha-805293-m04
	I0930 20:13:25.648063   33875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:13:25.648113   33875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:13:25.663737   33875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34729
	I0930 20:13:25.664286   33875 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:13:25.664948   33875 main.go:141] libmachine: Using API Version  1
	I0930 20:13:25.664973   33875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:13:25.665339   33875 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:13:25.667902   33875 out.go:177] * Stopping node "ha-805293-m04"  ...
	I0930 20:13:25.669462   33875 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0930 20:13:25.669500   33875 main.go:141] libmachine: (ha-805293-m04) Calling .DriverName
	I0930 20:13:25.669760   33875 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0930 20:13:25.669789   33875 main.go:141] libmachine: (ha-805293-m04) Calling .GetSSHHostname
	I0930 20:13:25.673047   33875 main.go:141] libmachine: (ha-805293-m04) DBG | domain ha-805293-m04 has defined MAC address 52:54:00:fb:22:e7 in network mk-ha-805293
	I0930 20:13:25.673565   33875 main.go:141] libmachine: (ha-805293-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:22:e7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 21:12:52 +0000 UTC Type:0 Mac:52:54:00:fb:22:e7 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-805293-m04 Clientid:01:52:54:00:fb:22:e7}
	I0930 20:13:25.673591   33875 main.go:141] libmachine: (ha-805293-m04) DBG | domain ha-805293-m04 has defined IP address 192.168.39.92 and MAC address 52:54:00:fb:22:e7 in network mk-ha-805293
	I0930 20:13:25.673776   33875 main.go:141] libmachine: (ha-805293-m04) Calling .GetSSHPort
	I0930 20:13:25.673974   33875 main.go:141] libmachine: (ha-805293-m04) Calling .GetSSHKeyPath
	I0930 20:13:25.674115   33875 main.go:141] libmachine: (ha-805293-m04) Calling .GetSSHUsername
	I0930 20:13:25.674253   33875 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293-m04/id_rsa Username:docker}
	I0930 20:13:25.761505   33875 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0930 20:13:25.815679   33875 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0930 20:13:25.868046   33875 main.go:141] libmachine: Stopping "ha-805293-m04"...
	I0930 20:13:25.868082   33875 main.go:141] libmachine: (ha-805293-m04) Calling .GetState
	I0930 20:13:25.869497   33875 main.go:141] libmachine: (ha-805293-m04) Calling .Stop
	I0930 20:13:25.873260   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 0/120
	I0930 20:13:26.875058   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 1/120
	I0930 20:13:27.876323   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 2/120
	I0930 20:13:28.877694   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 3/120
	I0930 20:13:29.879129   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 4/120
	I0930 20:13:30.881253   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 5/120
	I0930 20:13:31.882838   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 6/120
	I0930 20:13:32.884159   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 7/120
	I0930 20:13:33.886136   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 8/120
	I0930 20:13:34.887569   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 9/120
	I0930 20:13:35.888903   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 10/120
	I0930 20:13:36.890191   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 11/120
	I0930 20:13:37.891678   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 12/120
	I0930 20:13:38.893123   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 13/120
	I0930 20:13:39.894508   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 14/120
	I0930 20:13:40.896381   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 15/120
	I0930 20:13:41.897964   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 16/120
	I0930 20:13:42.900247   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 17/120
	I0930 20:13:43.902177   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 18/120
	I0930 20:13:44.903592   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 19/120
	I0930 20:13:45.905652   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 20/120
	I0930 20:13:46.907095   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 21/120
	I0930 20:13:47.908555   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 22/120
	I0930 20:13:48.909981   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 23/120
	I0930 20:13:49.911660   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 24/120
	I0930 20:13:50.914054   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 25/120
	I0930 20:13:51.916121   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 26/120
	I0930 20:13:52.917536   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 27/120
	I0930 20:13:53.919115   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 28/120
	I0930 20:13:54.920928   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 29/120
	I0930 20:13:55.923189   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 30/120
	I0930 20:13:56.924695   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 31/120
	I0930 20:13:57.926113   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 32/120
	I0930 20:13:58.927307   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 33/120
	I0930 20:13:59.928737   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 34/120
	I0930 20:14:00.930695   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 35/120
	I0930 20:14:01.932388   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 36/120
	I0930 20:14:02.933835   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 37/120
	I0930 20:14:03.935346   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 38/120
	I0930 20:14:04.936677   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 39/120
	I0930 20:14:05.938125   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 40/120
	I0930 20:14:06.939715   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 41/120
	I0930 20:14:07.941152   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 42/120
	I0930 20:14:08.942354   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 43/120
	I0930 20:14:09.944107   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 44/120
	I0930 20:14:10.946178   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 45/120
	I0930 20:14:11.947639   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 46/120
	I0930 20:14:12.949355   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 47/120
	I0930 20:14:13.951157   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 48/120
	I0930 20:14:14.952551   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 49/120
	I0930 20:14:15.954184   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 50/120
	I0930 20:14:16.956655   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 51/120
	I0930 20:14:17.957913   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 52/120
	I0930 20:14:18.959309   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 53/120
	I0930 20:14:19.960506   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 54/120
	I0930 20:14:20.962723   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 55/120
	I0930 20:14:21.964640   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 56/120
	I0930 20:14:22.966074   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 57/120
	I0930 20:14:23.967824   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 58/120
	I0930 20:14:24.969237   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 59/120
	I0930 20:14:25.970644   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 60/120
	I0930 20:14:26.972314   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 61/120
	I0930 20:14:27.974122   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 62/120
	I0930 20:14:28.975697   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 63/120
	I0930 20:14:29.977001   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 64/120
	I0930 20:14:30.978386   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 65/120
	I0930 20:14:31.979930   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 66/120
	I0930 20:14:32.981537   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 67/120
	I0930 20:14:33.982865   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 68/120
	I0930 20:14:34.984559   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 69/120
	I0930 20:14:35.986785   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 70/120
	I0930 20:14:36.988872   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 71/120
	I0930 20:14:37.991216   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 72/120
	I0930 20:14:38.992743   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 73/120
	I0930 20:14:39.994046   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 74/120
	I0930 20:14:40.995841   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 75/120
	I0930 20:14:41.998077   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 76/120
	I0930 20:14:42.999415   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 77/120
	I0930 20:14:44.000811   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 78/120
	I0930 20:14:45.001982   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 79/120
	I0930 20:14:46.004167   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 80/120
	I0930 20:14:47.005514   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 81/120
	I0930 20:14:48.006969   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 82/120
	I0930 20:14:49.008101   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 83/120
	I0930 20:14:50.010087   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 84/120
	I0930 20:14:51.012393   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 85/120
	I0930 20:14:52.014538   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 86/120
	I0930 20:14:53.015898   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 87/120
	I0930 20:14:54.017277   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 88/120
	I0930 20:14:55.019348   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 89/120
	I0930 20:14:56.021490   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 90/120
	I0930 20:14:57.022955   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 91/120
	I0930 20:14:58.024312   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 92/120
	I0930 20:14:59.025806   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 93/120
	I0930 20:15:00.027282   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 94/120
	I0930 20:15:01.028602   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 95/120
	I0930 20:15:02.030131   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 96/120
	I0930 20:15:03.031881   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 97/120
	I0930 20:15:04.034279   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 98/120
	I0930 20:15:05.036059   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 99/120
	I0930 20:15:06.038070   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 100/120
	I0930 20:15:07.039486   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 101/120
	I0930 20:15:08.040994   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 102/120
	I0930 20:15:09.042193   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 103/120
	I0930 20:15:10.043655   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 104/120
	I0930 20:15:11.045473   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 105/120
	I0930 20:15:12.047780   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 106/120
	I0930 20:15:13.049235   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 107/120
	I0930 20:15:14.051301   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 108/120
	I0930 20:15:15.052697   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 109/120
	I0930 20:15:16.055011   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 110/120
	I0930 20:15:17.056308   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 111/120
	I0930 20:15:18.058313   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 112/120
	I0930 20:15:19.059724   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 113/120
	I0930 20:15:20.061907   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 114/120
	I0930 20:15:21.064098   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 115/120
	I0930 20:15:22.065596   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 116/120
	I0930 20:15:23.066829   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 117/120
	I0930 20:15:24.069032   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 118/120
	I0930 20:15:25.070591   33875 main.go:141] libmachine: (ha-805293-m04) Waiting for machine to stop 119/120
	I0930 20:15:26.071331   33875 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0930 20:15:26.071395   33875 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0930 20:15:26.073292   33875 out.go:201] 
	W0930 20:15:26.074759   33875 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0930 20:15:26.074781   33875 out.go:270] * 
	W0930 20:15:26.077018   33875 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 20:15:26.078351   33875 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-805293 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-805293 status -v=7 --alsologtostderr: (18.924127685s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-805293 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-805293 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-805293 status -v=7 --alsologtostderr": 
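The stderr above shows the shape of this failure: the stop command asks the kvm2 driver to shut the ha-805293-m04 guest down, then polls the machine state once per second for up to 120 attempts ("Waiting for machine to stop ... /120"), and when the domain is still "Running" after the last poll it gives up with GUEST_STOP_TIMEOUT and exit status 82, which is what ha_test.go:535 reports. The Go sketch below reproduces just that poll-until-timeout pattern; the names vmDriver, waitForStop and stubDriver are illustrative stand-ins, not minikube's or libmachine's actual API.

// Minimal sketch of the stop-and-poll pattern visible in the stderr above:
// request a shutdown, then poll the VM state once per second for up to 120
// attempts, and give up with an error if the machine is still "Running".
// The types and functions here are illustrative, not minikube's real code.
package main

import (
	"errors"
	"fmt"
	"log"
	"time"
)

// vmDriver is a hypothetical interface over a hypervisor driver such as kvm2.
type vmDriver interface {
	Stop() error            // ask the guest to shut down (e.g. ACPI shutdown)
	State() (string, error) // report the current machine state
}

// waitForStop mirrors the "Waiting for machine to stop N/120" loop: it polls
// the driver once per interval and returns an error if the VM never leaves
// the "Running" state within maxAttempts polls.
func waitForStop(d vmDriver, maxAttempts int, interval time.Duration) error {
	if err := d.Stop(); err != nil {
		return fmt.Errorf("requesting stop: %w", err)
	}
	for i := 0; i < maxAttempts; i++ {
		state, err := d.State()
		if err != nil {
			return fmt.Errorf("querying state: %w", err)
		}
		if state != "Running" {
			return nil // the guest shut down in time
		}
		log.Printf("Waiting for machine to stop %d/%d", i, maxAttempts)
		time.Sleep(interval)
	}
	// This is the condition that surfaces as GUEST_STOP_TIMEOUT / exit status 82.
	return errors.New(`unable to stop vm, current state "Running"`)
}

// stubDriver simulates a guest that ignores the shutdown request.
type stubDriver struct{}

func (stubDriver) Stop() error            { return nil }
func (stubDriver) State() (string, error) { return "Running", nil }

func main() {
	// A short run (5 attempts, 10ms apart) so the sketch finishes quickly.
	if err := waitForStop(stubDriver{}, 5, 10*time.Millisecond); err != nil {
		log.Printf("stop failed: %v", err)
	}
}

In the real run the guest never left the "Running" state, so all 120 polls (roughly two minutes of waiting) were exhausted before the test gave up and collected the post-mortem logs below.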
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-805293 -n ha-805293
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-805293 logs -n 25: (1.632080982s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-805293 ssh -n ha-805293-m02 sudo cat                                          | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m03_ha-805293-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m03:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04:/home/docker/cp-test_ha-805293-m03_ha-805293-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293-m04 sudo cat                                          | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m03_ha-805293-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-805293 cp testdata/cp-test.txt                                                | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3144947660/001/cp-test_ha-805293-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293:/home/docker/cp-test_ha-805293-m04_ha-805293.txt                       |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293 sudo cat                                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m04_ha-805293.txt                                 |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m02:/home/docker/cp-test_ha-805293-m04_ha-805293-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293-m02 sudo cat                                          | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m04_ha-805293-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m03:/home/docker/cp-test_ha-805293-m04_ha-805293-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n                                                                 | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | ha-805293-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-805293 ssh -n ha-805293-m03 sudo cat                                          | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC | 30 Sep 24 20:03 UTC |
	|         | /home/docker/cp-test_ha-805293-m04_ha-805293-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-805293 node stop m02 -v=7                                                     | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-805293 node start m02 -v=7                                                    | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-805293 -v=7                                                           | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-805293 -v=7                                                                | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-805293 --wait=true -v=7                                                    | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:08 UTC | 30 Sep 24 20:13 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-805293                                                                | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:13 UTC |                     |
	| node    | ha-805293 node delete m03 -v=7                                                   | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:13 UTC | 30 Sep 24 20:13 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-805293 stop -v=7                                                              | ha-805293 | jenkins | v1.34.0 | 30 Sep 24 20:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 20:08:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 20:08:30.418253   32024 out.go:345] Setting OutFile to fd 1 ...
	I0930 20:08:30.418464   32024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:08:30.418472   32024 out.go:358] Setting ErrFile to fd 2...
	I0930 20:08:30.418476   32024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:08:30.418682   32024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 20:08:30.419207   32024 out.go:352] Setting JSON to false
	I0930 20:08:30.420095   32024 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3053,"bootTime":1727723857,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 20:08:30.420187   32024 start.go:139] virtualization: kvm guest
	I0930 20:08:30.422949   32024 out.go:177] * [ha-805293] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 20:08:30.424884   32024 notify.go:220] Checking for updates...
	I0930 20:08:30.424943   32024 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 20:08:30.426796   32024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 20:08:30.428229   32024 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:08:30.429602   32024 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:08:30.430777   32024 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 20:08:30.432201   32024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 20:08:30.434290   32024 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:08:30.434444   32024 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 20:08:30.435145   32024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:08:30.435205   32024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:08:30.450636   32024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33767
	I0930 20:08:30.451136   32024 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:08:30.451747   32024 main.go:141] libmachine: Using API Version  1
	I0930 20:08:30.451770   32024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:08:30.452071   32024 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:08:30.452248   32024 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:08:30.492997   32024 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 20:08:30.494236   32024 start.go:297] selected driver: kvm2
	I0930 20:08:30.494249   32024 start.go:901] validating driver "kvm2" against &{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.92 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:08:30.494410   32024 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 20:08:30.494805   32024 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 20:08:30.494892   32024 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 20:08:30.510418   32024 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 20:08:30.511136   32024 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 20:08:30.511169   32024 cni.go:84] Creating CNI manager for ""
	I0930 20:08:30.511226   32024 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0930 20:08:30.511296   32024 start.go:340] cluster config:
	{Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.92 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:08:30.511444   32024 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 20:08:30.513900   32024 out.go:177] * Starting "ha-805293" primary control-plane node in "ha-805293" cluster
	I0930 20:08:30.515215   32024 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 20:08:30.515255   32024 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 20:08:30.515262   32024 cache.go:56] Caching tarball of preloaded images
	I0930 20:08:30.515346   32024 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 20:08:30.515357   32024 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 20:08:30.515497   32024 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/config.json ...
	I0930 20:08:30.515752   32024 start.go:360] acquireMachinesLock for ha-805293: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 20:08:30.515795   32024 start.go:364] duration metric: took 22.459µs to acquireMachinesLock for "ha-805293"
	I0930 20:08:30.515809   32024 start.go:96] Skipping create...Using existing machine configuration
	I0930 20:08:30.515820   32024 fix.go:54] fixHost starting: 
	I0930 20:08:30.516119   32024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:08:30.516149   32024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:08:30.531066   32024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34215
	I0930 20:08:30.531581   32024 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:08:30.532122   32024 main.go:141] libmachine: Using API Version  1
	I0930 20:08:30.532144   32024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:08:30.532477   32024 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:08:30.532668   32024 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:08:30.532840   32024 main.go:141] libmachine: (ha-805293) Calling .GetState
	I0930 20:08:30.534680   32024 fix.go:112] recreateIfNeeded on ha-805293: state=Running err=<nil>
	W0930 20:08:30.534697   32024 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 20:08:30.537614   32024 out.go:177] * Updating the running kvm2 "ha-805293" VM ...
	I0930 20:08:30.538929   32024 machine.go:93] provisionDockerMachine start ...
	I0930 20:08:30.538949   32024 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:08:30.539155   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:08:30.541855   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:30.542282   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:08:30.542317   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:30.542475   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:08:30.542630   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:08:30.542771   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:08:30.542918   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:08:30.543058   32024 main.go:141] libmachine: Using SSH client type: native
	I0930 20:08:30.543244   32024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 20:08:30.543257   32024 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 20:08:30.656762   32024 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-805293
	
	I0930 20:08:30.656796   32024 main.go:141] libmachine: (ha-805293) Calling .GetMachineName
	I0930 20:08:30.657046   32024 buildroot.go:166] provisioning hostname "ha-805293"
	I0930 20:08:30.657072   32024 main.go:141] libmachine: (ha-805293) Calling .GetMachineName
	I0930 20:08:30.657248   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:08:30.660420   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:30.660872   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:08:30.660894   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:30.661136   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:08:30.661353   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:08:30.661545   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:08:30.661717   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:08:30.661925   32024 main.go:141] libmachine: Using SSH client type: native
	I0930 20:08:30.662108   32024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 20:08:30.662122   32024 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-805293 && echo "ha-805293" | sudo tee /etc/hostname
	I0930 20:08:30.791961   32024 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-805293
	
	I0930 20:08:30.791987   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:08:30.794822   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:30.795340   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:08:30.795377   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:30.795573   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:08:30.795765   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:08:30.795931   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:08:30.796149   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:08:30.796313   32024 main.go:141] libmachine: Using SSH client type: native
	I0930 20:08:30.796515   32024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 20:08:30.796531   32024 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-805293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-805293/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-805293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 20:08:30.912570   32024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 20:08:30.912600   32024 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 20:08:30.912630   32024 buildroot.go:174] setting up certificates
	I0930 20:08:30.912639   32024 provision.go:84] configureAuth start
	I0930 20:08:30.912651   32024 main.go:141] libmachine: (ha-805293) Calling .GetMachineName
	I0930 20:08:30.912937   32024 main.go:141] libmachine: (ha-805293) Calling .GetIP
	I0930 20:08:30.915955   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:30.916436   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:08:30.916458   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:30.916664   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:08:30.919166   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:30.919734   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:08:30.919754   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:30.919967   32024 provision.go:143] copyHostCerts
	I0930 20:08:30.919995   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:08:30.920034   32024 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 20:08:30.920047   32024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:08:30.920115   32024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 20:08:30.920233   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:08:30.920255   32024 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 20:08:30.920262   32024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:08:30.920288   32024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 20:08:30.920339   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:08:30.920362   32024 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 20:08:30.920366   32024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:08:30.920388   32024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 20:08:30.920433   32024 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.ha-805293 san=[127.0.0.1 192.168.39.3 ha-805293 localhost minikube]
	I0930 20:08:31.201525   32024 provision.go:177] copyRemoteCerts
	I0930 20:08:31.201574   32024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 20:08:31.201595   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:08:31.204548   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:31.204916   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:08:31.204941   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:31.205106   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:08:31.205300   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:08:31.205461   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:08:31.205605   32024 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:08:31.294616   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 20:08:31.294694   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0930 20:08:31.318931   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 20:08:31.319001   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 20:08:31.344679   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 20:08:31.344753   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 20:08:31.371672   32024 provision.go:87] duration metric: took 459.016865ms to configureAuth
	I0930 20:08:31.371710   32024 buildroot.go:189] setting minikube options for container-runtime
	I0930 20:08:31.371993   32024 config.go:182] Loaded profile config "ha-805293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:08:31.372113   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:08:31.374783   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:31.375292   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:08:31.375320   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:08:31.375489   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:08:31.375703   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:08:31.375874   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:08:31.376022   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:08:31.376198   32024 main.go:141] libmachine: Using SSH client type: native
	I0930 20:08:31.376434   32024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 20:08:31.376457   32024 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 20:10:02.281536   32024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 20:10:02.281573   32024 machine.go:96] duration metric: took 1m31.742628586s to provisionDockerMachine
	I0930 20:10:02.281588   32024 start.go:293] postStartSetup for "ha-805293" (driver="kvm2")
	I0930 20:10:02.281603   32024 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 20:10:02.281625   32024 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:10:02.282026   32024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 20:10:02.282056   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:10:02.285598   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:02.286082   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:10:02.286105   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:02.286329   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:10:02.286528   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:10:02.286672   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:10:02.286865   32024 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:10:02.375728   32024 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 20:10:02.379946   32024 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 20:10:02.379971   32024 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 20:10:02.380045   32024 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 20:10:02.380144   32024 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 20:10:02.380159   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /etc/ssl/certs/148752.pem
	I0930 20:10:02.380362   32024 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 20:10:02.390415   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:10:02.416705   32024 start.go:296] duration metric: took 135.10016ms for postStartSetup
	I0930 20:10:02.416758   32024 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:10:02.417057   32024 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0930 20:10:02.417083   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:10:02.420067   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:02.420528   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:10:02.420553   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:02.420759   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:10:02.420977   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:10:02.421145   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:10:02.421360   32024 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	W0930 20:10:02.509993   32024 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0930 20:10:02.510023   32024 fix.go:56] duration metric: took 1m31.994203409s for fixHost
	I0930 20:10:02.510049   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:10:02.513222   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:02.513651   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:10:02.513677   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:02.513915   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:10:02.514100   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:10:02.514356   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:10:02.514500   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:10:02.514648   32024 main.go:141] libmachine: Using SSH client type: native
	I0930 20:10:02.514811   32024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 20:10:02.514821   32024 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 20:10:02.628437   32024 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727727002.596415726
	
	I0930 20:10:02.628462   32024 fix.go:216] guest clock: 1727727002.596415726
	I0930 20:10:02.628472   32024 fix.go:229] Guest: 2024-09-30 20:10:02.596415726 +0000 UTC Remote: 2024-09-30 20:10:02.510031868 +0000 UTC m=+92.127739919 (delta=86.383858ms)
	I0930 20:10:02.628537   32024 fix.go:200] guest clock delta is within tolerance: 86.383858ms
	I0930 20:10:02.628544   32024 start.go:83] releasing machines lock for "ha-805293", held for 1m32.112740535s
	I0930 20:10:02.628572   32024 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:10:02.628881   32024 main.go:141] libmachine: (ha-805293) Calling .GetIP
	I0930 20:10:02.631570   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:02.632018   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:10:02.632041   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:02.632360   32024 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:10:02.633056   32024 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:10:02.633246   32024 main.go:141] libmachine: (ha-805293) Calling .DriverName
	I0930 20:10:02.633344   32024 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 20:10:02.633381   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:10:02.633509   32024 ssh_runner.go:195] Run: cat /version.json
	I0930 20:10:02.633536   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHHostname
	I0930 20:10:02.636273   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:02.636367   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:02.636658   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:10:02.636685   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:02.636714   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:10:02.636733   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:02.636836   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:10:02.636981   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHPort
	I0930 20:10:02.636998   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:10:02.637115   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:10:02.637185   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHKeyPath
	I0930 20:10:02.637240   32024 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:10:02.637287   32024 main.go:141] libmachine: (ha-805293) Calling .GetSSHUsername
	I0930 20:10:02.637416   32024 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/ha-805293/id_rsa Username:docker}
	I0930 20:10:02.756965   32024 ssh_runner.go:195] Run: systemctl --version
	I0930 20:10:02.763357   32024 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 20:10:02.925524   32024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 20:10:02.933613   32024 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 20:10:02.933678   32024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 20:10:02.943265   32024 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0930 20:10:02.943298   32024 start.go:495] detecting cgroup driver to use...
	I0930 20:10:02.943374   32024 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 20:10:02.962359   32024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 20:10:02.978052   32024 docker.go:217] disabling cri-docker service (if available) ...
	I0930 20:10:02.978105   32024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 20:10:02.992673   32024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 20:10:03.006860   32024 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 20:10:03.157020   32024 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 20:10:03.312406   32024 docker.go:233] disabling docker service ...
	I0930 20:10:03.312477   32024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 20:10:03.330601   32024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 20:10:03.346087   32024 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 20:10:03.513429   32024 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 20:10:03.669065   32024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 20:10:03.684160   32024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 20:10:03.702667   32024 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 20:10:03.702729   32024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:10:03.713687   32024 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 20:10:03.713752   32024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:10:03.724817   32024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:10:03.735499   32024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:10:03.746372   32024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 20:10:03.757539   32024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:10:03.768261   32024 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:10:03.779792   32024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:10:03.790851   32024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 20:10:03.801592   32024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 20:10:03.811688   32024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:10:03.958683   32024 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 20:10:07.162069   32024 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.203347092s)
	I0930 20:10:07.162099   32024 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 20:10:07.162144   32024 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 20:10:07.172536   32024 start.go:563] Will wait 60s for crictl version
	I0930 20:10:07.172608   32024 ssh_runner.go:195] Run: which crictl
	I0930 20:10:07.176490   32024 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 20:10:07.214938   32024 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 20:10:07.215018   32024 ssh_runner.go:195] Run: crio --version
	I0930 20:10:07.247042   32024 ssh_runner.go:195] Run: crio --version
	I0930 20:10:07.277714   32024 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 20:10:07.279546   32024 main.go:141] libmachine: (ha-805293) Calling .GetIP
	I0930 20:10:07.282463   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:07.282896   32024 main.go:141] libmachine: (ha-805293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b8:c7", ip: ""} in network mk-ha-805293: {Iface:virbr1 ExpiryTime:2024-09-30 20:59:30 +0000 UTC Type:0 Mac:52:54:00:a8:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-805293 Clientid:01:52:54:00:a8:b8:c7}
	I0930 20:10:07.282925   32024 main.go:141] libmachine: (ha-805293) DBG | domain ha-805293 has defined IP address 192.168.39.3 and MAC address 52:54:00:a8:b8:c7 in network mk-ha-805293
	I0930 20:10:07.283156   32024 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 20:10:07.288124   32024 kubeadm.go:883] updating cluster {Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.92 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 20:10:07.288294   32024 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 20:10:07.288367   32024 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 20:10:07.331573   32024 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 20:10:07.331599   32024 crio.go:433] Images already preloaded, skipping extraction
	I0930 20:10:07.331650   32024 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 20:10:07.366799   32024 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 20:10:07.366825   32024 cache_images.go:84] Images are preloaded, skipping loading
	I0930 20:10:07.366836   32024 kubeadm.go:934] updating node { 192.168.39.3 8443 v1.31.1 crio true true} ...
	I0930 20:10:07.366940   32024 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-805293 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 20:10:07.367022   32024 ssh_runner.go:195] Run: crio config
	I0930 20:10:07.415231   32024 cni.go:84] Creating CNI manager for ""
	I0930 20:10:07.415255   32024 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0930 20:10:07.415264   32024 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 20:10:07.415293   32024 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-805293 NodeName:ha-805293 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 20:10:07.415481   32024 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-805293"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
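	The kubeadm config above is one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A short sketch of decoding such a stream for a sanity check; it assumes gopkg.in/yaml.v3 is available and reads the /var/tmp/minikube/kubeadm.yaml.new path that appears later in this log, and is illustrative only, not something the test run itself executes:
	
	    package main
	
	    import (
	    	"errors"
	    	"fmt"
	    	"io"
	    	"os"
	
	    	"gopkg.in/yaml.v3"
	    )
	
	    // decodeAll splits a multi-document kubeadm YAML stream and returns each
	    // document as a generic mapping, so fields such as kubernetesVersion or
	    // podSubnet can be inspected before kubeadm consumes the file.
	    func decodeAll(r io.Reader) ([]map[string]interface{}, error) {
	    	dec := yaml.NewDecoder(r)
	    	var docs []map[string]interface{}
	    	for {
	    		var doc map[string]interface{}
	    		if err := dec.Decode(&doc); err != nil {
	    			if errors.Is(err, io.EOF) {
	    				return docs, nil
	    			}
	    			return nil, err
	    		}
	    		docs = append(docs, doc)
	    	}
	    }
	
	    func main() {
	    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log below
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer f.Close()
	    	docs, err := decodeAll(f)
	    	if err != nil {
	    		panic(err)
	    	}
	    	for _, d := range docs {
	    		fmt.Println(d["kind"]) // InitConfiguration, ClusterConfiguration, ...
	    	}
	    }
	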
	I0930 20:10:07.415504   32024 kube-vip.go:115] generating kube-vip config ...
	I0930 20:10:07.415560   32024 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 20:10:07.427100   32024 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 20:10:07.427231   32024 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0930 20:10:07.427299   32024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 20:10:07.437361   32024 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 20:10:07.437422   32024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0930 20:10:07.447137   32024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0930 20:10:07.463643   32024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 20:10:07.480909   32024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0930 20:10:07.497151   32024 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 20:10:07.513129   32024 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 20:10:07.517543   32024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:10:07.663227   32024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:10:07.677878   32024 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293 for IP: 192.168.39.3
	I0930 20:10:07.677900   32024 certs.go:194] generating shared ca certs ...
	I0930 20:10:07.677919   32024 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:10:07.678091   32024 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 20:10:07.678147   32024 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 20:10:07.678156   32024 certs.go:256] generating profile certs ...
	I0930 20:10:07.678262   32024 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/client.key
	I0930 20:10:07.678300   32024 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.1490b8e9
	I0930 20:10:07.678329   32024 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.1490b8e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.3 192.168.39.220 192.168.39.227 192.168.39.254]
	I0930 20:10:07.791960   32024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.1490b8e9 ...
	I0930 20:10:07.791995   32024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.1490b8e9: {Name:mk874f676f601a9161261dbafeec607626035cbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:10:07.792155   32024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.1490b8e9 ...
	I0930 20:10:07.792166   32024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.1490b8e9: {Name:mk6f1737ee8f44359c97ed002ae5fcd3f62cda77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:10:07.792233   32024 certs.go:381] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt.1490b8e9 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt
	I0930 20:10:07.792392   32024 certs.go:385] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key.1490b8e9 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key
	I0930 20:10:07.792518   32024 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key
	I0930 20:10:07.792532   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 20:10:07.792551   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 20:10:07.792570   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 20:10:07.792583   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 20:10:07.792596   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 20:10:07.792608   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 20:10:07.792620   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 20:10:07.792632   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 20:10:07.792677   32024 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 20:10:07.792704   32024 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 20:10:07.792710   32024 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 20:10:07.792733   32024 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 20:10:07.792754   32024 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 20:10:07.792777   32024 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 20:10:07.792815   32024 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:10:07.792840   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem -> /usr/share/ca-certificates/14875.pem
	I0930 20:10:07.792854   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /usr/share/ca-certificates/148752.pem
	I0930 20:10:07.792866   32024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:10:07.793423   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 20:10:07.818870   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 20:10:07.843434   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 20:10:07.868173   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 20:10:07.891992   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0930 20:10:07.916550   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 20:10:07.942281   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 20:10:07.967426   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/ha-805293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 20:10:07.991808   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 20:10:08.016250   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 20:10:08.040767   32024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 20:10:08.065245   32024 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 20:10:08.081730   32024 ssh_runner.go:195] Run: openssl version
	I0930 20:10:08.087602   32024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 20:10:08.098310   32024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 20:10:08.102714   32024 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 20:10:08.102774   32024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 20:10:08.108094   32024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 20:10:08.117034   32024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 20:10:08.127515   32024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:10:08.131784   32024 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:10:08.131843   32024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:10:08.137306   32024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 20:10:08.147812   32024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 20:10:08.158599   32024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 20:10:08.163420   32024 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 20:10:08.163486   32024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 20:10:08.169078   32024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 20:10:08.179749   32024 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 20:10:08.184378   32024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 20:10:08.190191   32024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 20:10:08.195744   32024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 20:10:08.201181   32024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 20:10:08.206851   32024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 20:10:08.212087   32024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0930 20:10:08.217419   32024 kubeadm.go:392] StartCluster: {Name:ha-805293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-805293 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.92 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:10:08.217521   32024 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 20:10:08.217563   32024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 20:10:08.261130   32024 cri.go:89] found id: "a09228d49f4ad068623e6315524f56bf1711bcc27f73dc0878d7dc879947bb89"
	I0930 20:10:08.261163   32024 cri.go:89] found id: "587b1ad4b8191a4014e26828a32606215b3377cd45b366d4de0ed03ffb0b7837"
	I0930 20:10:08.261168   32024 cri.go:89] found id: "2d358322f532c68b803989835b3e2521f53c29d7958667ceeeaaca809b61ce74"
	I0930 20:10:08.261171   32024 cri.go:89] found id: "bcfa6f22eace82338bca9d52207525aa6bff9130f092366621e59b71f8225240"
	I0930 20:10:08.261174   32024 cri.go:89] found id: "8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b"
	I0930 20:10:08.261178   32024 cri.go:89] found id: "beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c"
	I0930 20:10:08.261180   32024 cri.go:89] found id: "e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa"
	I0930 20:10:08.261183   32024 cri.go:89] found id: "cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088"
	I0930 20:10:08.261185   32024 cri.go:89] found id: "5e8e1f537ce941dd5174a539d9c52bcdc043499fbf92875cdf6ed4fc819c4dbe"
	I0930 20:10:08.261191   32024 cri.go:89] found id: "0e9fbbe2017dac31afa6b99397b35147479d921bd1c28368d0863e7deba96963"
	I0930 20:10:08.261195   32024 cri.go:89] found id: "9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463"
	I0930 20:10:08.261198   32024 cri.go:89] found id: "219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c"
	I0930 20:10:08.261200   32024 cri.go:89] found id: "994c927aa147aaacb19c3dc9b54178374731ce435295e01ceb9dbb1854a78f78"
	I0930 20:10:08.261203   32024 cri.go:89] found id: ""
	I0930 20:10:08.261251   32024 ssh_runner.go:195] Run: sudo runc list -f json
	
	
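	Before the CRI-O journal is dumped, the run above enumerates kube-system containers with crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system (one container ID per output line). A rough Go equivalent of that enumeration, invoking sudo crictl directly rather than via sudo -s eval as the log does; illustrative only:
	
	    package main
	
	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )
	
	    // listKubeSystemContainers runs the same crictl query shown in the log and
	    // returns the container IDs it prints, one per line.
	    func listKubeSystemContainers() ([]string, error) {
	    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
	    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	    	if err != nil {
	    		return nil, fmt.Errorf("crictl ps: %w", err)
	    	}
	    	var ids []string
	    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
	    		if line != "" {
	    			ids = append(ids, line)
	    		}
	    	}
	    	return ids, nil
	    }
	
	    func main() {
	    	ids, err := listKubeSystemContainers()
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Printf("found %d kube-system containers\n", len(ids))
	    }
	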
	==> CRI-O <==
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.633199520Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727345633177587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a6232c6f-1803-4b6d-ad47-671b28c86923 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.633819500Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=792b8601-fb84-4ffd-8f9f-72d10fcab4de name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.634020872Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=792b8601-fb84-4ffd-8f9f-72d10fcab4de name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.634664282Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:799d0bb0c993d0ffde3eefbcc05bcb611d96d352cb0ea83e7022f8fbd550dd95,PodSandboxId:b6eca5d34d418c3897c2f1c73b8bdee9c01ec8e773f446bf95450a7d920e70da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727727185719881527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a985f5a2a7c076eb4cf77a8b507f759819a444134f93b1df5e5932da65c1270e,PodSandboxId:8da7e73e0b2fd4d2dd3548bf5624b712504a6e2ffa74d3126fecba092f15c571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727727057730351169,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a945cf678b444c95ced3c0655fedd7e24a271a0269cf64af94ee977600d79ad,PodSandboxId:0351a72258f94e7a77ca9f6c12c179269acb125d6b92774ff9c683b58b75c355,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727727056734933159,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc285523fdce19c14d398147b8793713be6f6d52049dd8b29d2844a668b82645,PodSandboxId:0f134ad7b95b1f2e96670573b8bb737db2ee057af15552e2fb9e2d5f4e25e29f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727727047994332438,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744a1c20ed6c3fe15442d117e472f677d759a07b2075fef70341de56d798d14b,PodSandboxId:bb6065f83dadf08926cabdd5d9999f932c0d8a6d5782ca9efd3b6f505284a827,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727727024835063908,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0b24a252ad7163810aa1bbddc4bc981,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a30cbd3eb0f4ef05c7391f3280d861cd10d0fe8ba20335ae73fcbe214e80a9e,PodSandboxId:ed86ec584c49134727b6ee9b95d6ebf6f92cc75139119cf0ea2b4be83d6df838,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727727014815519285,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:380bd6e347263a3f2a049efae6b9b292d5c95a687b571ed3542ef7673141a92f,PodSandboxId:4709331fb79f41392654d87d0cbba6850b4edafe1c7c72a0b9cffa363d1c2fb3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727727014834391279,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:6fed1262e64394560fbc057ea4f9f851d03675b41610f8834ec91e719fc78857,PodSandboxId:907a40f61fd35c956014f9d913d24ffce1e777898650629dce7c4a64c1a75eed,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727727014680588124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f45850bfc7eb9db0b4c4a227b97d9fe0d1f99e266d77e9b66fc2797453326c,PodSandboxId:0351a72258f94e7a77ca9f6c12c179269acb125d6b92774ff9c683b58b75c355,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727727014612742689,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b7eae086adfa13129e0ee64055dbf5ecef59b6cbb57e8c3f82ec0b37998f6d8,PodSandboxId:8da7e73e0b2fd4d2dd3548bf5624b712504a6e2ffa74d3126fecba092f15c571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727727014583572877,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2953f6dc095a37336d7b0b5d17fb8ae04ee08ce04f58060304fa5031e60041cc,PodSandboxId:b6eca5d34d418c3897c2f1c73b8bdee9c01ec8e773f446bf95450a7d920e70da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727727014507776681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b4f5919856e7020e2eb736700dcc60faf49bb3549a20d86cecc06833256227d,PodSandboxId:8754efd58ac6fd709d308dbfc7dd062dbaebb39928b4645d4af510e8e3cfbb07,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727727014457184858,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9458794f1a510009238ae84c24f002abcd8dd8cfe472470a8cefb49c2d1d1ff,PodSandboxId:d6d05abaafe65ae0bf04bf51aef7e61d0aabc4fbc70b020c0d74daa5f0100475,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727727014414000533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77443ff4394cea6f0d035877e1e1513cab12a1648c096fad857654ededda1936,PodSandboxId:fd5726427f3e1d9295403eb0289cc84ce04bd43f38db9bd9ff5c93937cb4bad9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727727011060782331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-138e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ee59c77c769b646a6f94ef88076d89d99a5138229c27ab2ecd6eedc1ea0137,PodSandboxId:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727726553788930169,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b,PodSandboxId:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727726414317132948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c,PodSandboxId:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727726414250226322,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-138e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa,PodSandboxId:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727726402286751491,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088,PodSandboxId:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727726402007394795,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463,PodSandboxId:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727726390313458555,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c,PodSandboxId:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727726390230834509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=792b8601-fb84-4ffd-8f9f-72d10fcab4de name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.690793277Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e0839aff-e213-4d17-b9ad-fb6fae295ad9 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.690872955Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e0839aff-e213-4d17-b9ad-fb6fae295ad9 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.692719777Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=33662020-65a9-449f-a667-12e6d7b29881 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.693143880Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727345693120486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33662020-65a9-449f-a667-12e6d7b29881 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.693761235Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4f78f91-be06-48ad-bdf9-929d721b4ee6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.693827864Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4f78f91-be06-48ad-bdf9-929d721b4ee6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.694352459Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:799d0bb0c993d0ffde3eefbcc05bcb611d96d352cb0ea83e7022f8fbd550dd95,PodSandboxId:b6eca5d34d418c3897c2f1c73b8bdee9c01ec8e773f446bf95450a7d920e70da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727727185719881527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a985f5a2a7c076eb4cf77a8b507f759819a444134f93b1df5e5932da65c1270e,PodSandboxId:8da7e73e0b2fd4d2dd3548bf5624b712504a6e2ffa74d3126fecba092f15c571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727727057730351169,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a945cf678b444c95ced3c0655fedd7e24a271a0269cf64af94ee977600d79ad,PodSandboxId:0351a72258f94e7a77ca9f6c12c179269acb125d6b92774ff9c683b58b75c355,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727727056734933159,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc285523fdce19c14d398147b8793713be6f6d52049dd8b29d2844a668b82645,PodSandboxId:0f134ad7b95b1f2e96670573b8bb737db2ee057af15552e2fb9e2d5f4e25e29f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727727047994332438,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744a1c20ed6c3fe15442d117e472f677d759a07b2075fef70341de56d798d14b,PodSandboxId:bb6065f83dadf08926cabdd5d9999f932c0d8a6d5782ca9efd3b6f505284a827,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727727024835063908,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0b24a252ad7163810aa1bbddc4bc981,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a30cbd3eb0f4ef05c7391f3280d861cd10d0fe8ba20335ae73fcbe214e80a9e,PodSandboxId:ed86ec584c49134727b6ee9b95d6ebf6f92cc75139119cf0ea2b4be83d6df838,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727727014815519285,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:380bd6e347263a3f2a049efae6b9b292d5c95a687b571ed3542ef7673141a92f,PodSandboxId:4709331fb79f41392654d87d0cbba6850b4edafe1c7c72a0b9cffa363d1c2fb3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727727014834391279,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:6fed1262e64394560fbc057ea4f9f851d03675b41610f8834ec91e719fc78857,PodSandboxId:907a40f61fd35c956014f9d913d24ffce1e777898650629dce7c4a64c1a75eed,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727727014680588124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f45850bfc7eb9db0b4c4a227b97d9fe0d1f99e266d77e9b66fc2797453326c,PodSandboxId:0351a72258f94e7a77ca9f6c12c179269acb125d6b92774ff9c683b58b75c355,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727727014612742689,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b7eae086adfa13129e0ee64055dbf5ecef59b6cbb57e8c3f82ec0b37998f6d8,PodSandboxId:8da7e73e0b2fd4d2dd3548bf5624b712504a6e2ffa74d3126fecba092f15c571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727727014583572877,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2953f6dc095a37336d7b0b5d17fb8ae04ee08ce04f58060304fa5031e60041cc,PodSandboxId:b6eca5d34d418c3897c2f1c73b8bdee9c01ec8e773f446bf95450a7d920e70da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727727014507776681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b4f5919856e7020e2eb736700dcc60faf49bb3549a20d86cecc06833256227d,PodSandboxId:8754efd58ac6fd709d308dbfc7dd062dbaebb39928b4645d4af510e8e3cfbb07,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727727014457184858,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9458794f1a510009238ae84c24f002abcd8dd8cfe472470a8cefb49c2d1d1ff,PodSandboxId:d6d05abaafe65ae0bf04bf51aef7e61d0aabc4fbc70b020c0d74daa5f0100475,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727727014414000533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77443ff4394cea6f0d035877e1e1513cab12a1648c096fad857654ededda1936,PodSandboxId:fd5726427f3e1d9295403eb0289cc84ce04bd43f38db9bd9ff5c93937cb4bad9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727727011060782331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-138e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ee59c77c769b646a6f94ef88076d89d99a5138229c27ab2ecd6eedc1ea0137,PodSandboxId:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727726553788930169,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b,PodSandboxId:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727726414317132948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c,PodSandboxId:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727726414250226322,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-138e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa,PodSandboxId:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727726402286751491,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088,PodSandboxId:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727726402007394795,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463,PodSandboxId:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727726390313458555,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c,PodSandboxId:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727726390230834509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4f78f91-be06-48ad-bdf9-929d721b4ee6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.741024316Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=282b1a62-7a04-41ad-9b9e-affb22fec856 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.741105294Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=282b1a62-7a04-41ad-9b9e-affb22fec856 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.742635284Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b319115-4bc2-4cd3-a6df-4db2bc5b2d4c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.743057865Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727345743036712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b319115-4bc2-4cd3-a6df-4db2bc5b2d4c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.743635687Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=951a4767-29de-44bc-9721-a2bc195c245d name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.743691256Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=951a4767-29de-44bc-9721-a2bc195c245d name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.748382714Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:799d0bb0c993d0ffde3eefbcc05bcb611d96d352cb0ea83e7022f8fbd550dd95,PodSandboxId:b6eca5d34d418c3897c2f1c73b8bdee9c01ec8e773f446bf95450a7d920e70da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727727185719881527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a985f5a2a7c076eb4cf77a8b507f759819a444134f93b1df5e5932da65c1270e,PodSandboxId:8da7e73e0b2fd4d2dd3548bf5624b712504a6e2ffa74d3126fecba092f15c571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727727057730351169,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a945cf678b444c95ced3c0655fedd7e24a271a0269cf64af94ee977600d79ad,PodSandboxId:0351a72258f94e7a77ca9f6c12c179269acb125d6b92774ff9c683b58b75c355,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727727056734933159,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc285523fdce19c14d398147b8793713be6f6d52049dd8b29d2844a668b82645,PodSandboxId:0f134ad7b95b1f2e96670573b8bb737db2ee057af15552e2fb9e2d5f4e25e29f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727727047994332438,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744a1c20ed6c3fe15442d117e472f677d759a07b2075fef70341de56d798d14b,PodSandboxId:bb6065f83dadf08926cabdd5d9999f932c0d8a6d5782ca9efd3b6f505284a827,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727727024835063908,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0b24a252ad7163810aa1bbddc4bc981,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a30cbd3eb0f4ef05c7391f3280d861cd10d0fe8ba20335ae73fcbe214e80a9e,PodSandboxId:ed86ec584c49134727b6ee9b95d6ebf6f92cc75139119cf0ea2b4be83d6df838,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727727014815519285,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:380bd6e347263a3f2a049efae6b9b292d5c95a687b571ed3542ef7673141a92f,PodSandboxId:4709331fb79f41392654d87d0cbba6850b4edafe1c7c72a0b9cffa363d1c2fb3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727727014834391279,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:6fed1262e64394560fbc057ea4f9f851d03675b41610f8834ec91e719fc78857,PodSandboxId:907a40f61fd35c956014f9d913d24ffce1e777898650629dce7c4a64c1a75eed,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727727014680588124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f45850bfc7eb9db0b4c4a227b97d9fe0d1f99e266d77e9b66fc2797453326c,PodSandboxId:0351a72258f94e7a77ca9f6c12c179269acb125d6b92774ff9c683b58b75c355,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727727014612742689,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b7eae086adfa13129e0ee64055dbf5ecef59b6cbb57e8c3f82ec0b37998f6d8,PodSandboxId:8da7e73e0b2fd4d2dd3548bf5624b712504a6e2ffa74d3126fecba092f15c571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727727014583572877,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2953f6dc095a37336d7b0b5d17fb8ae04ee08ce04f58060304fa5031e60041cc,PodSandboxId:b6eca5d34d418c3897c2f1c73b8bdee9c01ec8e773f446bf95450a7d920e70da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727727014507776681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b4f5919856e7020e2eb736700dcc60faf49bb3549a20d86cecc06833256227d,PodSandboxId:8754efd58ac6fd709d308dbfc7dd062dbaebb39928b4645d4af510e8e3cfbb07,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727727014457184858,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9458794f1a510009238ae84c24f002abcd8dd8cfe472470a8cefb49c2d1d1ff,PodSandboxId:d6d05abaafe65ae0bf04bf51aef7e61d0aabc4fbc70b020c0d74daa5f0100475,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727727014414000533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77443ff4394cea6f0d035877e1e1513cab12a1648c096fad857654ededda1936,PodSandboxId:fd5726427f3e1d9295403eb0289cc84ce04bd43f38db9bd9ff5c93937cb4bad9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727727011060782331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-138e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ee59c77c769b646a6f94ef88076d89d99a5138229c27ab2ecd6eedc1ea0137,PodSandboxId:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727726553788930169,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b,PodSandboxId:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727726414317132948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c,PodSandboxId:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727726414250226322,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-138e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa,PodSandboxId:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727726402286751491,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088,PodSandboxId:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727726402007394795,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463,PodSandboxId:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727726390313458555,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c,PodSandboxId:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727726390230834509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=951a4767-29de-44bc-9721-a2bc195c245d name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.795022929Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a474cfff-4957-4970-8e99-ae0205474494 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.795129808Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a474cfff-4957-4970-8e99-ae0205474494 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.796776909Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4811b7a7-aee9-4151-b5a7-05856602f00e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.797235927Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727345797205838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4811b7a7-aee9-4151-b5a7-05856602f00e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.797729231Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=290234f7-6744-4335-b45c-97de44aff578 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.797786948Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=290234f7-6744-4335-b45c-97de44aff578 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:15:45 ha-805293 crio[3907]: time="2024-09-30 20:15:45.798191510Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:799d0bb0c993d0ffde3eefbcc05bcb611d96d352cb0ea83e7022f8fbd550dd95,PodSandboxId:b6eca5d34d418c3897c2f1c73b8bdee9c01ec8e773f446bf95450a7d920e70da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727727185719881527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a985f5a2a7c076eb4cf77a8b507f759819a444134f93b1df5e5932da65c1270e,PodSandboxId:8da7e73e0b2fd4d2dd3548bf5624b712504a6e2ffa74d3126fecba092f15c571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727727057730351169,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a945cf678b444c95ced3c0655fedd7e24a271a0269cf64af94ee977600d79ad,PodSandboxId:0351a72258f94e7a77ca9f6c12c179269acb125d6b92774ff9c683b58b75c355,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727727056734933159,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc285523fdce19c14d398147b8793713be6f6d52049dd8b29d2844a668b82645,PodSandboxId:0f134ad7b95b1f2e96670573b8bb737db2ee057af15552e2fb9e2d5f4e25e29f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727727047994332438,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744a1c20ed6c3fe15442d117e472f677d759a07b2075fef70341de56d798d14b,PodSandboxId:bb6065f83dadf08926cabdd5d9999f932c0d8a6d5782ca9efd3b6f505284a827,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727727024835063908,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0b24a252ad7163810aa1bbddc4bc981,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a30cbd3eb0f4ef05c7391f3280d861cd10d0fe8ba20335ae73fcbe214e80a9e,PodSandboxId:ed86ec584c49134727b6ee9b95d6ebf6f92cc75139119cf0ea2b4be83d6df838,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727727014815519285,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:380bd6e347263a3f2a049efae6b9b292d5c95a687b571ed3542ef7673141a92f,PodSandboxId:4709331fb79f41392654d87d0cbba6850b4edafe1c7c72a0b9cffa363d1c2fb3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727727014834391279,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:6fed1262e64394560fbc057ea4f9f851d03675b41610f8834ec91e719fc78857,PodSandboxId:907a40f61fd35c956014f9d913d24ffce1e777898650629dce7c4a64c1a75eed,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727727014680588124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f45850bfc7eb9db0b4c4a227b97d9fe0d1f99e266d77e9b66fc2797453326c,PodSandboxId:0351a72258f94e7a77ca9f6c12c179269acb125d6b92774ff9c683b58b75c355,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727727014612742689,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e187d2ff3fb002e09fae92363c4994b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b7eae086adfa13129e0ee64055dbf5ecef59b6cbb57e8c3f82ec0b37998f6d8,PodSandboxId:8da7e73e0b2fd4d2dd3548bf5624b712504a6e2ffa74d3126fecba092f15c571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727727014583572877,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91de2f71b33d8668e0d24248c5ba505a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2953f6dc095a37336d7b0b5d17fb8ae04ee08ce04f58060304fa5031e60041cc,PodSandboxId:b6eca5d34d418c3897c2f1c73b8bdee9c01ec8e773f446bf95450a7d920e70da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727727014507776681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1912fdf8-d789-4ba9-99ff-c87ccbf330ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b4f5919856e7020e2eb736700dcc60faf49bb3549a20d86cecc06833256227d,PodSandboxId:8754efd58ac6fd709d308dbfc7dd062dbaebb39928b4645d4af510e8e3cfbb07,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727727014457184858,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9458794f1a510009238ae84c24f002abcd8dd8cfe472470a8cefb49c2d1d1ff,PodSandboxId:d6d05abaafe65ae0bf04bf51aef7e61d0aabc4fbc70b020c0d74daa5f0100475,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727727014414000533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77443ff4394cea6f0d035877e1e1513cab12a1648c096fad857654ededda1936,PodSandboxId:fd5726427f3e1d9295403eb0289cc84ce04bd43f38db9bd9ff5c93937cb4bad9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727727011060782331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-138e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ee59c77c769b646a6f94ef88076d89d99a5138229c27ab2ecd6eedc1ea0137,PodSandboxId:a8d4349f6e0b012cac7cc543f5d04992a9cb1de807eb237411f46582d8c5c540,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727726553788930169,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-r27jf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8969a9ac-4b19-4c72-a07d-bb931c9f8ba7,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b,PodSandboxId:f95d30afc04916b1e5d12f22842ce5b2c3808757f1b92af092bf4ffc63a27c9c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727726414317132948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7zjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b20ed2-1d94-49b9-ab9e-17e27d1012d0,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c,PodSandboxId:626fdaeb1b14215b2ddf9dbe22621f0f7c9f7e0e8db20c7c483269ada15e7512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727726414250226322,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z4bkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ba0288-138e-4690-a68d-6d6378e28deb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa,PodSandboxId:36a3293339cae46a1bb60dbd31ebc0c9cff626b8ef4187cb20f8f6e82df0ea38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727726402286751491,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-slhtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f4d6f0-3dbf-4ca6-8d39-e0708f8b4e88,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088,PodSandboxId:27a0913ae182af410a957f6939c392e8f5c10a9c0f379497c031a233b043fcc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727726402007394795,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gnt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90b0c3f-e9c3-4cb9-8773-8253bd72ab51,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463,PodSandboxId:73733467afdd9b699d6d342fb9bbd352b0c4456149f3c46fe331c45d34ebb0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727726390313458555,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33fa137f85dfeea3a67cdcccdd92a29,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c,PodSandboxId:bff718c807eb7f31c2fa2ca21cb89a64bfae7b651a8673a1e2659004f3cc16a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727726390230834509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-805293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc042ef6adb6bb0f327bb59cec9a57d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=290234f7-6744-4335-b45c-97de44aff578 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	799d0bb0c993d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Running             storage-provisioner       6                   b6eca5d34d418       storage-provisioner
	a985f5a2a7c07       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   2                   8da7e73e0b2fd       kube-controller-manager-ha-805293
	9a945cf678b44       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            3                   0351a72258f94       kube-apiserver-ha-805293
	dc285523fdce1       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   0f134ad7b95b1       busybox-7dff88458-r27jf
	744a1c20ed6c3       18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460                                      5 minutes ago       Running             kube-vip                  0                   bb6065f83dadf       kube-vip-ha-805293
	380bd6e347263       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   4709331fb79f4       kindnet-slhtm
	5a30cbd3eb0f4       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      5 minutes ago       Running             kube-proxy                1                   ed86ec584c491       kube-proxy-6gnt4
	6fed1262e6439       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   907a40f61fd35       coredns-7c65d6cfc9-x7zjp
	e6f45850bfc7e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      5 minutes ago       Exited              kube-apiserver            2                   0351a72258f94       kube-apiserver-ha-805293
	5b7eae086adfa       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      5 minutes ago       Exited              kube-controller-manager   1                   8da7e73e0b2fd       kube-controller-manager-ha-805293
	2953f6dc095a3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       5                   b6eca5d34d418       storage-provisioner
	3b4f5919856e7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   8754efd58ac6f       etcd-ha-805293
	d9458794f1a51       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      5 minutes ago       Running             kube-scheduler            1                   d6d05abaafe65       kube-scheduler-ha-805293
	77443ff4394ce       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   fd5726427f3e1       coredns-7c65d6cfc9-z4bkv
	10ee59c77c769       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   a8d4349f6e0b0       busybox-7dff88458-r27jf
	8c540e4668f99       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   f95d30afc0491       coredns-7c65d6cfc9-x7zjp
	beba42a2bf035       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   626fdaeb1b142       coredns-7c65d6cfc9-z4bkv
	e28b6781ed449       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      15 minutes ago      Exited              kindnet-cni               0                   36a3293339cae       kindnet-slhtm
	cd73b6dc43348       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      15 minutes ago      Exited              kube-proxy                0                   27a0913ae182a       kube-proxy-6gnt4
	9b8d5baa6998a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      15 minutes ago      Exited              kube-scheduler            0                   73733467afdd9       kube-scheduler-ha-805293
	219dff1c43cd4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      15 minutes ago      Exited              etcd                      0                   bff718c807eb7       etcd-ha-805293
	
	
	==> coredns [6fed1262e64394560fbc057ea4f9f851d03675b41610f8834ec91e719fc78857] <==
	[INFO] plugin/kubernetes: Trace[1977302319]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 20:10:19.661) (total time: 10001ms):
	Trace[1977302319]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:10:29.662)
	Trace[1977302319]: [10.001378695s] [10.001378695s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[2032279084]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 20:10:19.885) (total time: 10001ms):
	Trace[2032279084]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:10:29.887)
	Trace[2032279084]: [10.001633399s] [10.001633399s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58874->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58874->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [77443ff4394cea6f0d035877e1e1513cab12a1648c096fad857654ededda1936] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[941344975]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 20:10:18.972) (total time: 10001ms):
	Trace[941344975]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:10:28.973)
	Trace[941344975]: [10.001074456s] [10.001074456s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:55510->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:55510->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [8c540e4668f99b48d3770431ac1f2af8cc27bb02cb7484f76a2e64c054e7d51b] <==
	[INFO] 10.244.1.2:50368 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000261008s
	[INFO] 10.244.1.2:34858 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000270623s
	[INFO] 10.244.1.2:59975 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000192447s
	[INFO] 10.244.2.2:37486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233576s
	[INFO] 10.244.2.2:40647 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002177996s
	[INFO] 10.244.2.2:39989 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000196915s
	[INFO] 10.244.2.2:42105 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001612348s
	[INFO] 10.244.2.2:42498 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180331s
	[INFO] 10.244.2.2:34873 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000262642s
	[INFO] 10.244.0.4:55282 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002337707s
	[INFO] 10.244.0.4:52721 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082276s
	[INFO] 10.244.0.4:33773 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001975703s
	[INFO] 10.244.0.4:44087 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095899s
	[INFO] 10.244.1.2:44456 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189431s
	[INFO] 10.244.1.2:52532 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112979s
	[INFO] 10.244.1.2:39707 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095712s
	[INFO] 10.244.2.2:42900 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101241s
	[INFO] 10.244.0.4:56608 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134276s
	[INFO] 10.244.1.2:35939 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00031266s
	[INFO] 10.244.1.2:48131 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000196792s
	[INFO] 10.244.2.2:40732 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000154649s
	[INFO] 10.244.0.4:51180 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000206094s
	[INFO] 10.244.0.4:36921 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000118718s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [beba42a2bf03538adf6e7a4dbb71260fee4fa21466c2be38d7aa05898ee55f0c] <==
	[INFO] 10.244.1.2:59221 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00021778s
	[INFO] 10.244.1.2:56069 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0044481s
	[INFO] 10.244.1.2:50386 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00023413s
	[INFO] 10.244.2.2:46506 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103313s
	[INFO] 10.244.2.2:41909 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000177677s
	[INFO] 10.244.0.4:57981 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180642s
	[INFO] 10.244.0.4:42071 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100781s
	[INFO] 10.244.0.4:53066 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079995s
	[INFO] 10.244.0.4:54192 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095317s
	[INFO] 10.244.1.2:42705 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147435s
	[INFO] 10.244.2.2:42448 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014108s
	[INFO] 10.244.2.2:58687 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152745s
	[INFO] 10.244.2.2:59433 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159734s
	[INFO] 10.244.0.4:34822 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086009s
	[INFO] 10.244.0.4:46188 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067594s
	[INFO] 10.244.0.4:33829 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130532s
	[INFO] 10.244.1.2:56575 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000557946s
	[INFO] 10.244.1.2:41726 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145733s
	[INFO] 10.244.2.2:56116 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108892s
	[INFO] 10.244.2.2:58958 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000075413s
	[INFO] 10.244.2.2:42001 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077659s
	[INFO] 10.244.0.4:53905 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091303s
	[INFO] 10.244.0.4:41906 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000098967s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-805293
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-805293
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=ha-805293
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T19_59_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 19:59:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-805293
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:15:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:10:59 +0000   Mon, 30 Sep 2024 19:59:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:10:59 +0000   Mon, 30 Sep 2024 19:59:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:10:59 +0000   Mon, 30 Sep 2024 19:59:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:10:59 +0000   Mon, 30 Sep 2024 20:00:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    ha-805293
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 866f17ca2f8945bb8c8d7336ea64bab7
	  System UUID:                866f17ca-2f89-45bb-8c8d-7336ea64bab7
	  Boot ID:                    688ba3e5-bec7-403a-8a14-d517107abdf5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-r27jf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-x7zjp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7c65d6cfc9-z4bkv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-805293                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-slhtm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-805293             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-805293    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-6gnt4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-805293             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-805293                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m48s                  kube-proxy       
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  15m                    kubelet          Node ha-805293 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m                    kubelet          Node ha-805293 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    15m                    kubelet          Node ha-805293 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           15m                    node-controller  Node ha-805293 event: Registered Node ha-805293 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-805293 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-805293 event: Registered Node ha-805293 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-805293 event: Registered Node ha-805293 in Controller
	  Warning  ContainerGCFailed        5m50s (x2 over 6m50s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m36s (x3 over 6m25s)  kubelet          Node ha-805293 status is now: NodeNotReady
	  Normal   RegisteredNode           4m55s                  node-controller  Node ha-805293 event: Registered Node ha-805293 in Controller
	  Normal   RegisteredNode           4m44s                  node-controller  Node ha-805293 event: Registered Node ha-805293 in Controller
	  Normal   RegisteredNode           3m24s                  node-controller  Node ha-805293 event: Registered Node ha-805293 in Controller
	
	
	Name:               ha-805293-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-805293-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=ha-805293
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T20_00_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:00:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-805293-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:15:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:11:39 +0000   Mon, 30 Sep 2024 20:10:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:11:39 +0000   Mon, 30 Sep 2024 20:10:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:11:39 +0000   Mon, 30 Sep 2024 20:10:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:11:39 +0000   Mon, 30 Sep 2024 20:10:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    ha-805293-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d0700264de549a1be3f1020308847ab
	  System UUID:                4d070026-4de5-49a1-be3f-1020308847ab
	  Boot ID:                    c2afb042-4941-4000-8a03-eb4543e77620
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lshpm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-805293-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-lfldt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-805293-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-805293-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-vptrg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-805293-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-805293-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m22s                  kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-805293-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-805293-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-805293-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                    node-controller  Node ha-805293-m02 event: Registered Node ha-805293-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-805293-m02 event: Registered Node ha-805293-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-805293-m02 event: Registered Node ha-805293-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-805293-m02 status is now: NodeNotReady
	  Normal  Starting                 5m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m16s (x8 over 5m16s)  kubelet          Node ha-805293-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m16s (x8 over 5m16s)  kubelet          Node ha-805293-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s (x7 over 5m16s)  kubelet          Node ha-805293-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m55s                  node-controller  Node ha-805293-m02 event: Registered Node ha-805293-m02 in Controller
	  Normal  RegisteredNode           4m44s                  node-controller  Node ha-805293-m02 event: Registered Node ha-805293-m02 in Controller
	  Normal  RegisteredNode           3m24s                  node-controller  Node ha-805293-m02 event: Registered Node ha-805293-m02 in Controller
	
	
	Name:               ha-805293-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-805293-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=ha-805293
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T20_03_07_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:03:07 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-805293-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:13:19 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 30 Sep 2024 20:12:58 +0000   Mon, 30 Sep 2024 20:14:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 30 Sep 2024 20:12:58 +0000   Mon, 30 Sep 2024 20:14:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 30 Sep 2024 20:12:58 +0000   Mon, 30 Sep 2024 20:14:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 30 Sep 2024 20:12:58 +0000   Mon, 30 Sep 2024 20:14:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    ha-805293-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 66e464978dbd400d9e13327c67f50978
	  System UUID:                66e46497-8dbd-400d-9e13-327c67f50978
	  Boot ID:                    6e1244f9-7880-4f80-9034-5826420e0122
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ddsls    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-pk4z9              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-7hn94           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-805293-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-805293-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-805293-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-805293-m04 event: Registered Node ha-805293-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-805293-m04 event: Registered Node ha-805293-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-805293-m04 event: Registered Node ha-805293-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-805293-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m55s                  node-controller  Node ha-805293-m04 event: Registered Node ha-805293-m04 in Controller
	  Normal   RegisteredNode           4m44s                  node-controller  Node ha-805293-m04 event: Registered Node ha-805293-m04 in Controller
	  Normal   RegisteredNode           3m24s                  node-controller  Node ha-805293-m04 event: Registered Node ha-805293-m04 in Controller
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-805293-m04 has been rebooted, boot id: 6e1244f9-7880-4f80-9034-5826420e0122
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-805293-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-805293-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-805293-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                2m48s                  kubelet          Node ha-805293-m04 status is now: NodeReady
	  Normal   NodeNotReady             105s (x2 over 4m15s)   node-controller  Node ha-805293-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.789974] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.062566] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063093] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.202518] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.124623] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.268552] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +3.977529] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +4.564932] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.062130] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.342874] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.088317] kauditd_printk_skb: 79 callbacks suppressed
	[Sep30 20:00] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.197664] kauditd_printk_skb: 38 callbacks suppressed
	[ +40.392588] kauditd_printk_skb: 26 callbacks suppressed
	[Sep30 20:06] kauditd_printk_skb: 1 callbacks suppressed
	[Sep30 20:10] systemd-fstab-generator[3832]: Ignoring "noauto" option for root device
	[  +0.147186] systemd-fstab-generator[3844]: Ignoring "noauto" option for root device
	[  +0.197988] systemd-fstab-generator[3858]: Ignoring "noauto" option for root device
	[  +0.165734] systemd-fstab-generator[3870]: Ignoring "noauto" option for root device
	[  +0.283923] systemd-fstab-generator[3898]: Ignoring "noauto" option for root device
	[  +3.707715] systemd-fstab-generator[3994]: Ignoring "noauto" option for root device
	[  +3.457916] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.008545] kauditd_printk_skb: 85 callbacks suppressed
	[Sep30 20:11] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [219dff1c43cd4fc52d54d558d6d4c44ac3dc35d4c8b6e3abe0d6b0517d28f22c] <==
	{"level":"info","ts":"2024-09-30T20:08:31.643045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c received MsgPreVoteResp from ac0ce77fb984259c at term 2"}
	{"level":"info","ts":"2024-09-30T20:08:31.643119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c [logterm: 2, index: 2084] sent MsgPreVote request to 2f3ead44f397c7d2 at term 2"}
	{"level":"info","ts":"2024-09-30T20:08:31.643146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c [logterm: 2, index: 2084] sent MsgPreVote request to 5403ce2c8324712e at term 2"}
	{"level":"warn","ts":"2024-09-30T20:08:31.667466Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.3:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T20:08:31.667521Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.3:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-30T20:08:31.667595Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"ac0ce77fb984259c","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-30T20:08:31.667809Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"5403ce2c8324712e"}
	{"level":"info","ts":"2024-09-30T20:08:31.667899Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5403ce2c8324712e"}
	{"level":"info","ts":"2024-09-30T20:08:31.668003Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5403ce2c8324712e"}
	{"level":"info","ts":"2024-09-30T20:08:31.668224Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e"}
	{"level":"info","ts":"2024-09-30T20:08:31.668357Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e"}
	{"level":"info","ts":"2024-09-30T20:08:31.668453Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ac0ce77fb984259c","remote-peer-id":"5403ce2c8324712e"}
	{"level":"info","ts":"2024-09-30T20:08:31.668500Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"5403ce2c8324712e"}
	{"level":"info","ts":"2024-09-30T20:08:31.668524Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:08:31.668551Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:08:31.668626Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:08:31.668764Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ac0ce77fb984259c","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:08:31.668835Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ac0ce77fb984259c","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:08:31.668937Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ac0ce77fb984259c","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:08:31.669004Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:08:31.672668Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.3:2380"}
	{"level":"warn","ts":"2024-09-30T20:08:31.672687Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.260749104s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-30T20:08:31.672830Z","caller":"traceutil/trace.go:171","msg":"trace[966761164] range","detail":"{range_begin:; range_end:; }","duration":"9.260907346s","start":"2024-09-30T20:08:22.411913Z","end":"2024-09-30T20:08:31.672820Z","steps":["trace[966761164] 'agreement among raft nodes before linearized reading'  (duration: 9.260744943s)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T20:08:31.672789Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.3:2380"}
	{"level":"info","ts":"2024-09-30T20:08:31.672942Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-805293","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.3:2380"],"advertise-client-urls":["https://192.168.39.3:2379"]}
	
	
	==> etcd [3b4f5919856e7020e2eb736700dcc60faf49bb3549a20d86cecc06833256227d] <==
	{"level":"info","ts":"2024-09-30T20:12:14.617785Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"ac0ce77fb984259c","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:12:14.642710Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ac0ce77fb984259c","to":"2f3ead44f397c7d2","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-30T20:12:14.642838Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"ac0ce77fb984259c","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:12:14.643839Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ac0ce77fb984259c","to":"2f3ead44f397c7d2","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-30T20:12:14.643886Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"ac0ce77fb984259c","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"warn","ts":"2024-09-30T20:13:02.221808Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.966592ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-6gnt4\" ","response":"range_response_count:1 size:4887"}
	{"level":"info","ts":"2024-09-30T20:13:02.221932Z","caller":"traceutil/trace.go:171","msg":"trace[681394436] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-6gnt4; range_end:; response_count:1; response_revision:2453; }","duration":"127.14926ms","start":"2024-09-30T20:13:02.094760Z","end":"2024-09-30T20:13:02.221909Z","steps":["trace[681394436] 'range keys from in-memory index tree'  (duration: 126.136004ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T20:13:12.227712Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.227:48958","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-09-30T20:13:12.252061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c switched to configuration voters=(6053909014690165038 12397538410003441052)"}
	{"level":"info","ts":"2024-09-30T20:13:12.254522Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"1d030e9334923ef1","local-member-id":"ac0ce77fb984259c","removed-remote-peer-id":"2f3ead44f397c7d2","removed-remote-peer-urls":["https://192.168.39.227:2380"]}
	{"level":"info","ts":"2024-09-30T20:13:12.254625Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"warn","ts":"2024-09-30T20:13:12.254937Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:13:12.254984Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"warn","ts":"2024-09-30T20:13:12.255578Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:13:12.255622Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:13:12.256050Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ac0ce77fb984259c","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"warn","ts":"2024-09-30T20:13:12.256420Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ac0ce77fb984259c","remote-peer-id":"2f3ead44f397c7d2","error":"context canceled"}
	{"level":"warn","ts":"2024-09-30T20:13:12.256490Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"2f3ead44f397c7d2","error":"failed to read 2f3ead44f397c7d2 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-30T20:13:12.256526Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ac0ce77fb984259c","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"warn","ts":"2024-09-30T20:13:12.256715Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"ac0ce77fb984259c","remote-peer-id":"2f3ead44f397c7d2","error":"context canceled"}
	{"level":"info","ts":"2024-09-30T20:13:12.256771Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ac0ce77fb984259c","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:13:12.256794Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"info","ts":"2024-09-30T20:13:12.256815Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"ac0ce77fb984259c","removed-remote-peer-id":"2f3ead44f397c7d2"}
	{"level":"warn","ts":"2024-09-30T20:13:12.270438Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"ac0ce77fb984259c","remote-peer-id-stream-handler":"ac0ce77fb984259c","remote-peer-id-from":"2f3ead44f397c7d2"}
	{"level":"warn","ts":"2024-09-30T20:13:12.273596Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"ac0ce77fb984259c","remote-peer-id-stream-handler":"ac0ce77fb984259c","remote-peer-id-from":"2f3ead44f397c7d2"}
	
	
	==> kernel <==
	 20:15:46 up 16 min,  0 users,  load average: 0.80, 0.56, 0.35
	Linux ha-805293 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [380bd6e347263a3f2a049efae6b9b292d5c95a687b571ed3542ef7673141a92f] <==
	I0930 20:15:06.013029       1 main.go:299] handling current node
	I0930 20:15:16.007771       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:15:16.007944       1 main.go:299] handling current node
	I0930 20:15:16.007987       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:15:16.008006       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:15:16.008155       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:15:16.008177       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	I0930 20:15:26.013817       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:15:26.014007       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:15:26.014185       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:15:26.014211       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	I0930 20:15:26.014277       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:15:26.014358       1 main.go:299] handling current node
	I0930 20:15:36.015783       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:15:36.015888       1 main.go:299] handling current node
	I0930 20:15:36.015949       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:15:36.015956       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:15:36.016099       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:15:36.016120       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	I0930 20:15:46.016878       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:15:46.016920       1 main.go:299] handling current node
	I0930 20:15:46.016938       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:15:46.016942       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:15:46.017107       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:15:46.017112       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [e28b6781ed449accffbc99ebf7b7a45d0e016ee9212f8826e5bed2775f45e1aa] <==
	I0930 20:08:03.352100       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:08:03.352205       1 main.go:299] handling current node
	I0930 20:08:03.352219       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:08:03.352225       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:08:03.352431       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0930 20:08:03.352440       1 main.go:322] Node ha-805293-m03 has CIDR [10.244.2.0/24] 
	I0930 20:08:03.352495       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:08:03.352501       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	I0930 20:08:13.352708       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:08:13.352903       1 main.go:299] handling current node
	I0930 20:08:13.352944       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:08:13.353013       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:08:13.353414       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0930 20:08:13.353493       1 main.go:322] Node ha-805293-m03 has CIDR [10.244.2.0/24] 
	I0930 20:08:13.353612       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:08:13.353635       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	I0930 20:08:23.353816       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 20:08:23.353887       1 main.go:299] handling current node
	I0930 20:08:23.353919       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0930 20:08:23.353928       1 main.go:322] Node ha-805293-m02 has CIDR [10.244.1.0/24] 
	I0930 20:08:23.354115       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0930 20:08:23.354139       1 main.go:322] Node ha-805293-m03 has CIDR [10.244.2.0/24] 
	I0930 20:08:23.354197       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0930 20:08:23.354215       1 main.go:322] Node ha-805293-m04 has CIDR [10.244.3.0/24] 
	E0930 20:08:29.411634       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes)
	
	
	==> kube-apiserver [9a945cf678b444c95ced3c0655fedd7e24a271a0269cf64af94ee977600d79ad] <==
	I0930 20:10:59.076227       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0930 20:10:59.076438       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0930 20:10:59.160065       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0930 20:10:59.160891       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0930 20:10:59.161158       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0930 20:10:59.162207       1 shared_informer.go:320] Caches are synced for configmaps
	I0930 20:10:59.162339       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0930 20:10:59.162388       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0930 20:10:59.171454       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0930 20:10:59.171624       1 aggregator.go:171] initial CRD sync complete...
	I0930 20:10:59.171681       1 autoregister_controller.go:144] Starting autoregister controller
	I0930 20:10:59.171713       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0930 20:10:59.171742       1 cache.go:39] Caches are synced for autoregister controller
	I0930 20:10:59.173006       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0930 20:10:59.203189       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0930 20:10:59.214077       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0930 20:10:59.214165       1 policy_source.go:224] refreshing policies
	I0930 20:10:59.253464       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0930 20:10:59.260635       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0930 20:10:59.268525       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.220 192.168.39.227]
	I0930 20:10:59.270239       1 controller.go:615] quota admission added evaluator for: endpoints
	I0930 20:10:59.281267       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0930 20:10:59.285163       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0930 20:11:00.060521       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0930 20:11:00.500848       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.220 192.168.39.3]
	
	
	==> kube-apiserver [e6f45850bfc7eb9db0b4c4a227b97d9fe0d1f99e266d77e9b66fc2797453326c] <==
	I0930 20:10:15.301711       1 options.go:228] external host was not specified, using 192.168.39.3
	I0930 20:10:15.316587       1 server.go:142] Version: v1.31.1
	I0930 20:10:15.316643       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:10:16.308403       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0930 20:10:16.312615       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0930 20:10:16.314558       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0930 20:10:16.314587       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0930 20:10:16.315076       1 instance.go:232] Using reconciler: lease
	W0930 20:10:36.296616       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0930 20:10:36.313188       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0930 20:10:36.316574       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	W0930 20:10:36.316595       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	
	
	==> kube-controller-manager [5b7eae086adfa13129e0ee64055dbf5ecef59b6cbb57e8c3f82ec0b37998f6d8] <==
	I0930 20:10:16.966839       1 serving.go:386] Generated self-signed cert in-memory
	I0930 20:10:17.241608       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0930 20:10:17.241647       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:10:17.243087       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0930 20:10:17.243204       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0930 20:10:17.243451       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0930 20:10:17.243812       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0930 20:10:37.321872       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.3:8443/healthz\": dial tcp 192.168.39.3:8443: connect: connection refused"
	
	
	==> kube-controller-manager [a985f5a2a7c076eb4cf77a8b507f759819a444134f93b1df5e5932da65c1270e] <==
	I0930 20:13:09.172262       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="54.132379ms"
	I0930 20:13:09.172552       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="180.164µs"
	I0930 20:13:11.063367       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="120.044µs"
	I0930 20:13:11.760258       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.981µs"
	I0930 20:13:11.766716       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="73.418µs"
	I0930 20:13:13.228359       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.819046ms"
	I0930 20:13:13.228647       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="52.957µs"
	I0930 20:13:23.326621       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m03"
	I0930 20:13:23.326834       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-805293-m04"
	E0930 20:13:42.460427       1 gc_controller.go:151] "Failed to get node" err="node \"ha-805293-m03\" not found" logger="pod-garbage-collector-controller" node="ha-805293-m03"
	E0930 20:13:42.460478       1 gc_controller.go:151] "Failed to get node" err="node \"ha-805293-m03\" not found" logger="pod-garbage-collector-controller" node="ha-805293-m03"
	E0930 20:13:42.460511       1 gc_controller.go:151] "Failed to get node" err="node \"ha-805293-m03\" not found" logger="pod-garbage-collector-controller" node="ha-805293-m03"
	E0930 20:13:42.460518       1 gc_controller.go:151] "Failed to get node" err="node \"ha-805293-m03\" not found" logger="pod-garbage-collector-controller" node="ha-805293-m03"
	E0930 20:13:42.460534       1 gc_controller.go:151] "Failed to get node" err="node \"ha-805293-m03\" not found" logger="pod-garbage-collector-controller" node="ha-805293-m03"
	I0930 20:14:01.749559       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:14:01.772891       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:14:01.831482       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.746885ms"
	I0930 20:14:01.831729       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="100.238µs"
	E0930 20:14:02.461577       1 gc_controller.go:151] "Failed to get node" err="node \"ha-805293-m03\" not found" logger="pod-garbage-collector-controller" node="ha-805293-m03"
	E0930 20:14:02.461634       1 gc_controller.go:151] "Failed to get node" err="node \"ha-805293-m03\" not found" logger="pod-garbage-collector-controller" node="ha-805293-m03"
	E0930 20:14:02.461644       1 gc_controller.go:151] "Failed to get node" err="node \"ha-805293-m03\" not found" logger="pod-garbage-collector-controller" node="ha-805293-m03"
	E0930 20:14:02.461651       1 gc_controller.go:151] "Failed to get node" err="node \"ha-805293-m03\" not found" logger="pod-garbage-collector-controller" node="ha-805293-m03"
	E0930 20:14:02.461659       1 gc_controller.go:151] "Failed to get node" err="node \"ha-805293-m03\" not found" logger="pod-garbage-collector-controller" node="ha-805293-m03"
	I0930 20:14:02.601830       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	I0930 20:14:06.832178       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-805293-m04"
	
	
	==> kube-proxy [5a30cbd3eb0f4ef05c7391f3280d861cd10d0fe8ba20335ae73fcbe214e80a9e] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 20:10:18.595797       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-805293\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0930 20:10:21.668229       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-805293\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0930 20:10:24.738810       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-805293\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0930 20:10:30.883624       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-805293\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0930 20:10:40.099646       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-805293\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0930 20:10:57.356484       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.3"]
	E0930 20:10:57.361677       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 20:10:57.398142       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 20:10:57.398234       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 20:10:57.398275       1 server_linux.go:169] "Using iptables Proxier"
	I0930 20:10:57.400593       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 20:10:57.400913       1 server.go:483] "Version info" version="v1.31.1"
	I0930 20:10:57.401088       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:10:57.402779       1 config.go:199] "Starting service config controller"
	I0930 20:10:57.402864       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 20:10:57.402910       1 config.go:105] "Starting endpoint slice config controller"
	I0930 20:10:57.402937       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 20:10:57.403632       1 config.go:328] "Starting node config controller"
	I0930 20:10:57.403687       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 20:10:57.504385       1 shared_informer.go:320] Caches are synced for node config
	I0930 20:10:57.504477       1 shared_informer.go:320] Caches are synced for service config
	I0930 20:10:57.504488       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [cd73b6dc4334857d143f06eb9ac8c7e12ee73639e61cdc17666fe96fa1f26088] <==
	E0930 20:07:20.099660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1755\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 20:07:20.099833       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-805293&resourceVersion=1713": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 20:07:20.099872       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-805293&resourceVersion=1713\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 20:07:26.565918       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-805293&resourceVersion=1713": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 20:07:26.566172       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-805293&resourceVersion=1713\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 20:07:26.566731       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1755": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 20:07:26.566999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1755\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 20:07:26.566564       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 20:07:26.567212       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 20:07:35.780809       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-805293&resourceVersion=1713": dial tcp 192.168.39.254:8443: connect: no route to host
	W0930 20:07:35.780915       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1755": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 20:07:35.780961       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-805293&resourceVersion=1713\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0930 20:07:35.781010       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1755\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 20:07:38.851644       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 20:07:38.851765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 20:07:51.139100       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-805293&resourceVersion=1713": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 20:07:51.139211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-805293&resourceVersion=1713\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 20:07:51.139556       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1755": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 20:07:51.139637       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1755\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 20:07:54.211818       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 20:07:54.211893       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 20:08:24.930850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 20:08:24.931045       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 20:08:31.076856       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1755": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 20:08:31.077089       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1755\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [9b8d5baa6998a0be0b18a6a4e4875ca35e5680ddc8f43b93ae94d8160a072463] <==
	W0930 19:59:54.769876       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0930 19:59:54.770087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0930 19:59:56.900381       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0930 20:02:01.539050       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-h6pvg\": pod kube-proxy-h6pvg is already assigned to node \"ha-805293-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-h6pvg" node="ha-805293-m03"
	E0930 20:02:01.539424       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9860392c-eca6-4200-9b6e-f0a6f51b523b(kube-system/kube-proxy-h6pvg) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-h6pvg"
	E0930 20:02:01.539482       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-h6pvg\": pod kube-proxy-h6pvg is already assigned to node \"ha-805293-m03\"" pod="kube-system/kube-proxy-h6pvg"
	I0930 20:02:01.539558       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-h6pvg" node="ha-805293-m03"
	E0930 20:02:29.833811       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lshpm\": pod busybox-7dff88458-lshpm is already assigned to node \"ha-805293-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-lshpm" node="ha-805293-m02"
	E0930 20:02:29.833910       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lshpm\": pod busybox-7dff88458-lshpm is already assigned to node \"ha-805293-m02\"" pod="default/busybox-7dff88458-lshpm"
	E0930 20:08:16.746057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0930 20:08:20.006558       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0930 20:08:20.376984       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0930 20:08:20.532839       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0930 20:08:21.975983       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0930 20:08:22.493855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0930 20:08:24.078452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0930 20:08:25.517228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0930 20:08:25.521965       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0930 20:08:26.124779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0930 20:08:26.396541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0930 20:08:27.181371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0930 20:08:28.995877       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0930 20:08:29.133144       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0930 20:08:29.550636       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0930 20:08:31.492740       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d9458794f1a510009238ae84c24f002abcd8dd8cfe472470a8cefb49c2d1d1ff] <==
	W0930 20:10:53.885193       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.3:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.3:8443: connect: connection refused
	E0930 20:10:53.885390       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.3:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.3:8443: connect: connection refused" logger="UnhandledError"
	W0930 20:10:54.584840       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.3:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.3:8443: connect: connection refused
	E0930 20:10:54.584919       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.3:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.3:8443: connect: connection refused" logger="UnhandledError"
	W0930 20:10:55.401838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.3:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.3:8443: connect: connection refused
	E0930 20:10:55.401971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.3:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.3:8443: connect: connection refused" logger="UnhandledError"
	W0930 20:10:55.453229       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.3:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.3:8443: connect: connection refused
	E0930 20:10:55.453372       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.3:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.3:8443: connect: connection refused" logger="UnhandledError"
	W0930 20:10:55.516878       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.3:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.3:8443: connect: connection refused
	E0930 20:10:55.517000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.3:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.3:8443: connect: connection refused" logger="UnhandledError"
	W0930 20:10:56.271412       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.3:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.3:8443: connect: connection refused
	E0930 20:10:56.271472       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.3:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.3:8443: connect: connection refused" logger="UnhandledError"
	W0930 20:10:56.676785       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.3:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.3:8443: connect: connection refused
	E0930 20:10:56.676844       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.3:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.3:8443: connect: connection refused" logger="UnhandledError"
	W0930 20:10:59.084595       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0930 20:10:59.084767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 20:10:59.085142       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0930 20:10:59.085229       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 20:10:59.093841       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 20:10:59.093973       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0930 20:11:16.302416       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0930 20:13:08.983154       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ddsls\": pod busybox-7dff88458-ddsls is already assigned to node \"ha-805293-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-ddsls" node="ha-805293-m04"
	E0930 20:13:08.983342       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e9a6f6ee-f1ec-449b-acce-95177af6ab56(default/busybox-7dff88458-ddsls) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-ddsls"
	E0930 20:13:08.983385       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ddsls\": pod busybox-7dff88458-ddsls is already assigned to node \"ha-805293-m04\"" pod="default/busybox-7dff88458-ddsls"
	I0930 20:13:08.983414       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-ddsls" node="ha-805293-m04"
	
	
	==> kubelet <==
	Sep 30 20:14:06 ha-805293 kubelet[1307]: E0930 20:14:06.977728    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727246977275170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:14:06 ha-805293 kubelet[1307]: E0930 20:14:06.977773    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727246977275170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:14:16 ha-805293 kubelet[1307]: E0930 20:14:16.980268    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727256979837659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:14:16 ha-805293 kubelet[1307]: E0930 20:14:16.980581    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727256979837659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:14:26 ha-805293 kubelet[1307]: E0930 20:14:26.982939    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727266982471882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:14:26 ha-805293 kubelet[1307]: E0930 20:14:26.983055    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727266982471882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:14:36 ha-805293 kubelet[1307]: E0930 20:14:36.985143    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727276984817978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:14:36 ha-805293 kubelet[1307]: E0930 20:14:36.985583    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727276984817978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:14:46 ha-805293 kubelet[1307]: E0930 20:14:46.988432    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727286987753655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:14:46 ha-805293 kubelet[1307]: E0930 20:14:46.988494    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727286987753655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:14:56 ha-805293 kubelet[1307]: E0930 20:14:56.734191    1307 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 20:14:56 ha-805293 kubelet[1307]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 20:14:56 ha-805293 kubelet[1307]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 20:14:56 ha-805293 kubelet[1307]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 20:14:56 ha-805293 kubelet[1307]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 20:14:56 ha-805293 kubelet[1307]: E0930 20:14:56.990539    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727296990009161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:14:56 ha-805293 kubelet[1307]: E0930 20:14:56.990599    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727296990009161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:15:06 ha-805293 kubelet[1307]: E0930 20:15:06.992810    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727306992437916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:15:06 ha-805293 kubelet[1307]: E0930 20:15:06.993092    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727306992437916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:15:16 ha-805293 kubelet[1307]: E0930 20:15:16.994763    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727316994335391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:15:16 ha-805293 kubelet[1307]: E0930 20:15:16.995055    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727316994335391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:15:26 ha-805293 kubelet[1307]: E0930 20:15:26.997738    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727326996990176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:15:26 ha-805293 kubelet[1307]: E0930 20:15:26.997766    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727326996990176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:15:37 ha-805293 kubelet[1307]: E0930 20:15:37.000134    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727336999705333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:15:37 ha-805293 kubelet[1307]: E0930 20:15:37.000170    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727727336999705333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 20:15:45.329057   34480 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19736-7672/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-805293 -n ha-805293
helpers_test.go:261: (dbg) Run:  kubectl --context ha-805293 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.69s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (324.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-103579
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-103579
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-103579: exit status 82 (2m1.777593194s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-103579-m03"  ...
	* Stopping node "multinode-103579-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-103579" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-103579 --wait=true -v=8 --alsologtostderr
E0930 20:33:28.938839   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:35:55.312119   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-103579 --wait=true -v=8 --alsologtostderr: (3m20.723817791s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-103579
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-103579 -n multinode-103579
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-103579 logs -n 25: (1.601483092s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-103579 ssh -n                                                                 | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-103579 cp multinode-103579-m02:/home/docker/cp-test.txt                       | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3699104417/001/cp-test_multinode-103579-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-103579 ssh -n                                                                 | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-103579 cp multinode-103579-m02:/home/docker/cp-test.txt                       | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579:/home/docker/cp-test_multinode-103579-m02_multinode-103579.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-103579 ssh -n                                                                 | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-103579 ssh -n multinode-103579 sudo cat                                       | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | /home/docker/cp-test_multinode-103579-m02_multinode-103579.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-103579 cp multinode-103579-m02:/home/docker/cp-test.txt                       | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579-m03:/home/docker/cp-test_multinode-103579-m02_multinode-103579-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-103579 ssh -n                                                                 | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-103579 ssh -n multinode-103579-m03 sudo cat                                   | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | /home/docker/cp-test_multinode-103579-m02_multinode-103579-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-103579 cp testdata/cp-test.txt                                                | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-103579 ssh -n                                                                 | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-103579 cp multinode-103579-m03:/home/docker/cp-test.txt                       | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3699104417/001/cp-test_multinode-103579-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-103579 ssh -n                                                                 | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-103579 cp multinode-103579-m03:/home/docker/cp-test.txt                       | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579:/home/docker/cp-test_multinode-103579-m03_multinode-103579.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-103579 ssh -n                                                                 | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-103579 ssh -n multinode-103579 sudo cat                                       | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | /home/docker/cp-test_multinode-103579-m03_multinode-103579.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-103579 cp multinode-103579-m03:/home/docker/cp-test.txt                       | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579-m02:/home/docker/cp-test_multinode-103579-m03_multinode-103579-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-103579 ssh -n                                                                 | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-103579 ssh -n multinode-103579-m02 sudo cat                                   | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | /home/docker/cp-test_multinode-103579-m03_multinode-103579-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-103579 node stop m03                                                          | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	| node    | multinode-103579 node start                                                             | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:31 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-103579                                                                | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:31 UTC |                     |
	| stop    | -p multinode-103579                                                                     | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:31 UTC |                     |
	| start   | -p multinode-103579                                                                     | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:33 UTC | 30 Sep 24 20:36 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-103579                                                                | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:36 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 20:33:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 20:33:05.104361   44409 out.go:345] Setting OutFile to fd 1 ...
	I0930 20:33:05.104647   44409 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:33:05.104657   44409 out.go:358] Setting ErrFile to fd 2...
	I0930 20:33:05.104672   44409 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:33:05.104864   44409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 20:33:05.105483   44409 out.go:352] Setting JSON to false
	I0930 20:33:05.106393   44409 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4528,"bootTime":1727723857,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 20:33:05.106497   44409 start.go:139] virtualization: kvm guest
	I0930 20:33:05.108497   44409 out.go:177] * [multinode-103579] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 20:33:05.109887   44409 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 20:33:05.109902   44409 notify.go:220] Checking for updates...
	I0930 20:33:05.112146   44409 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 20:33:05.113418   44409 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:33:05.114662   44409 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:33:05.115918   44409 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 20:33:05.117214   44409 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 20:33:05.118865   44409 config.go:182] Loaded profile config "multinode-103579": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:33:05.118983   44409 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 20:33:05.119481   44409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:33:05.119558   44409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:33:05.136881   44409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42787
	I0930 20:33:05.137331   44409 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:33:05.137961   44409 main.go:141] libmachine: Using API Version  1
	I0930 20:33:05.137987   44409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:33:05.138379   44409 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:33:05.138726   44409 main.go:141] libmachine: (multinode-103579) Calling .DriverName
	I0930 20:33:05.176162   44409 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 20:33:05.177421   44409 start.go:297] selected driver: kvm2
	I0930 20:33:05.177441   44409 start.go:901] validating driver "kvm2" against &{Name:multinode-103579 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-103579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:33:05.177598   44409 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 20:33:05.177978   44409 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 20:33:05.178076   44409 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 20:33:05.193853   44409 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 20:33:05.194577   44409 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 20:33:05.194617   44409 cni.go:84] Creating CNI manager for ""
	I0930 20:33:05.194678   44409 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0930 20:33:05.194755   44409 start.go:340] cluster config:
	{Name:multinode-103579 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-103579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:33:05.194901   44409 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 20:33:05.197678   44409 out.go:177] * Starting "multinode-103579" primary control-plane node in "multinode-103579" cluster
	I0930 20:33:05.199048   44409 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 20:33:05.199118   44409 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 20:33:05.199135   44409 cache.go:56] Caching tarball of preloaded images
	I0930 20:33:05.199236   44409 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 20:33:05.199248   44409 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 20:33:05.199390   44409 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579/config.json ...
	I0930 20:33:05.199695   44409 start.go:360] acquireMachinesLock for multinode-103579: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 20:33:05.199751   44409 start.go:364] duration metric: took 33.009µs to acquireMachinesLock for "multinode-103579"
	I0930 20:33:05.199771   44409 start.go:96] Skipping create...Using existing machine configuration
	I0930 20:33:05.199784   44409 fix.go:54] fixHost starting: 
	I0930 20:33:05.200049   44409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:33:05.200087   44409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:33:05.215139   44409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44047
	I0930 20:33:05.215730   44409 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:33:05.216220   44409 main.go:141] libmachine: Using API Version  1
	I0930 20:33:05.216240   44409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:33:05.216565   44409 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:33:05.216733   44409 main.go:141] libmachine: (multinode-103579) Calling .DriverName
	I0930 20:33:05.216884   44409 main.go:141] libmachine: (multinode-103579) Calling .GetState
	I0930 20:33:05.218546   44409 fix.go:112] recreateIfNeeded on multinode-103579: state=Running err=<nil>
	W0930 20:33:05.218583   44409 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 20:33:05.220631   44409 out.go:177] * Updating the running kvm2 "multinode-103579" VM ...
	I0930 20:33:05.221885   44409 machine.go:93] provisionDockerMachine start ...
	I0930 20:33:05.221908   44409 main.go:141] libmachine: (multinode-103579) Calling .DriverName
	I0930 20:33:05.222155   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHHostname
	I0930 20:33:05.224995   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.225535   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:33:05.225572   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.225703   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHPort
	I0930 20:33:05.225871   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:33:05.226006   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:33:05.226128   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHUsername
	I0930 20:33:05.226254   44409 main.go:141] libmachine: Using SSH client type: native
	I0930 20:33:05.226477   44409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0930 20:33:05.226487   44409 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 20:33:05.344387   44409 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-103579
	
	I0930 20:33:05.344421   44409 main.go:141] libmachine: (multinode-103579) Calling .GetMachineName
	I0930 20:33:05.344688   44409 buildroot.go:166] provisioning hostname "multinode-103579"
	I0930 20:33:05.344716   44409 main.go:141] libmachine: (multinode-103579) Calling .GetMachineName
	I0930 20:33:05.344903   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHHostname
	I0930 20:33:05.347576   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.347952   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:33:05.347978   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.348116   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHPort
	I0930 20:33:05.348288   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:33:05.348414   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:33:05.348527   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHUsername
	I0930 20:33:05.348700   44409 main.go:141] libmachine: Using SSH client type: native
	I0930 20:33:05.348905   44409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0930 20:33:05.348922   44409 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-103579 && echo "multinode-103579" | sudo tee /etc/hostname
	I0930 20:33:05.480794   44409 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-103579
	
	I0930 20:33:05.480825   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHHostname
	I0930 20:33:05.483629   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.484143   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:33:05.484186   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.484398   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHPort
	I0930 20:33:05.484598   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:33:05.484847   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:33:05.484985   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHUsername
	I0930 20:33:05.485168   44409 main.go:141] libmachine: Using SSH client type: native
	I0930 20:33:05.485338   44409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0930 20:33:05.485354   44409 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-103579' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-103579/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-103579' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 20:33:05.600359   44409 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 20:33:05.600394   44409 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 20:33:05.600446   44409 buildroot.go:174] setting up certificates
	I0930 20:33:05.600456   44409 provision.go:84] configureAuth start
	I0930 20:33:05.600466   44409 main.go:141] libmachine: (multinode-103579) Calling .GetMachineName
	I0930 20:33:05.600743   44409 main.go:141] libmachine: (multinode-103579) Calling .GetIP
	I0930 20:33:05.603593   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.603961   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:33:05.603988   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.604096   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHHostname
	I0930 20:33:05.606462   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.606831   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:33:05.606856   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.606968   44409 provision.go:143] copyHostCerts
	I0930 20:33:05.606993   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:33:05.607033   44409 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 20:33:05.607044   44409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:33:05.607108   44409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 20:33:05.607213   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:33:05.607232   44409 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 20:33:05.607236   44409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:33:05.607259   44409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 20:33:05.607320   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:33:05.607337   44409 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 20:33:05.607341   44409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:33:05.607361   44409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 20:33:05.607418   44409 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.multinode-103579 san=[127.0.0.1 192.168.39.58 localhost minikube multinode-103579]
	I0930 20:33:05.765715   44409 provision.go:177] copyRemoteCerts
	I0930 20:33:05.765773   44409 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 20:33:05.765804   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHHostname
	I0930 20:33:05.768448   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.768879   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:33:05.768913   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.769080   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHPort
	I0930 20:33:05.769278   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:33:05.769451   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHUsername
	I0930 20:33:05.769594   44409 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/multinode-103579/id_rsa Username:docker}
	I0930 20:33:05.861441   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 20:33:05.861527   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 20:33:05.886209   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 20:33:05.886291   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0930 20:33:05.910367   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 20:33:05.910431   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 20:33:05.936743   44409 provision.go:87] duration metric: took 336.274216ms to configureAuth
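The provisioning step above (provision.go:117) reports generating a server certificate whose SANs cover the VM's IP, 127.0.0.1, localhost, minikube, and the profile name, signed by the existing CA. As a rough illustration of that kind of issuance, here is a minimal Go sketch using crypto/x509; the file paths, organization string, and validity period are assumptions rather than minikube's actual values, and error handling is elided.

// Minimal sketch (not minikube's provision code): issue a CA-signed server
// certificate with the same kind of SAN list the log line above reports.
// Paths and the organization string are illustrative placeholders.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA certificate and key (placeholder paths; errors elided).
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

	// Server key and certificate template with IP and DNS SANs.
	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-103579"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // assumed validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.58")},
		DNSNames:     []string{"localhost", "minikube", "multinode-103579"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)

	// Write server.pem and server-key.pem in the layout copied to /etc/docker above.
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600)
}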
	I0930 20:33:05.936780   44409 buildroot.go:189] setting minikube options for container-runtime
	I0930 20:33:05.937031   44409 config.go:182] Loaded profile config "multinode-103579": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:33:05.937122   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHHostname
	I0930 20:33:05.940214   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.940626   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:33:05.940658   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.940887   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHPort
	I0930 20:33:05.941086   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:33:05.941230   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:33:05.941482   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHUsername
	I0930 20:33:05.941672   44409 main.go:141] libmachine: Using SSH client type: native
	I0930 20:33:05.941836   44409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0930 20:33:05.941851   44409 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 20:34:36.574929   44409 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 20:34:36.574967   44409 machine.go:96] duration metric: took 1m31.353066244s to provisionDockerMachine
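The SSH command logged just above writes /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS (an --insecure-registry entry for the service CIDR) and restarts CRI-O. Below is a small sketch of how such a remote command string could be composed; the helper name crioSysconfigCmd is illustrative and not part of minikube.

// Sketch only: build the remote shell command shown in the log above.
package main

import "fmt"

func crioSysconfigCmd(serviceCIDR string) string {
	content := fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
	return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, content)
}

func main() {
	fmt.Println(crioSysconfigCmd("10.96.0.0/12"))
}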
	I0930 20:34:36.574986   44409 start.go:293] postStartSetup for "multinode-103579" (driver="kvm2")
	I0930 20:34:36.574997   44409 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 20:34:36.575012   44409 main.go:141] libmachine: (multinode-103579) Calling .DriverName
	I0930 20:34:36.575411   44409 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 20:34:36.575443   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHHostname
	I0930 20:34:36.578639   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:34:36.579053   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:34:36.579077   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:34:36.579252   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHPort
	I0930 20:34:36.579437   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:34:36.579655   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHUsername
	I0930 20:34:36.579801   44409 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/multinode-103579/id_rsa Username:docker}
	I0930 20:34:36.667514   44409 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 20:34:36.671899   44409 command_runner.go:130] > NAME=Buildroot
	I0930 20:34:36.671924   44409 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0930 20:34:36.671929   44409 command_runner.go:130] > ID=buildroot
	I0930 20:34:36.671935   44409 command_runner.go:130] > VERSION_ID=2023.02.9
	I0930 20:34:36.671940   44409 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0930 20:34:36.671993   44409 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 20:34:36.672008   44409 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 20:34:36.672072   44409 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 20:34:36.672148   44409 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 20:34:36.672156   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /etc/ssl/certs/148752.pem
	I0930 20:34:36.672270   44409 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 20:34:36.681703   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:34:36.704771   44409 start.go:296] duration metric: took 129.768933ms for postStartSetup
	I0930 20:34:36.704824   44409 fix.go:56] duration metric: took 1m31.505040857s for fixHost
	I0930 20:34:36.704848   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHHostname
	I0930 20:34:36.708051   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:34:36.708484   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:34:36.708523   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:34:36.708748   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHPort
	I0930 20:34:36.708940   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:34:36.709171   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:34:36.709385   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHUsername
	I0930 20:34:36.709632   44409 main.go:141] libmachine: Using SSH client type: native
	I0930 20:34:36.709801   44409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0930 20:34:36.709812   44409 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 20:34:36.824369   44409 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727728476.800338586
	
	I0930 20:34:36.824400   44409 fix.go:216] guest clock: 1727728476.800338586
	I0930 20:34:36.824410   44409 fix.go:229] Guest: 2024-09-30 20:34:36.800338586 +0000 UTC Remote: 2024-09-30 20:34:36.704829654 +0000 UTC m=+91.637775823 (delta=95.508932ms)
	I0930 20:34:36.824479   44409 fix.go:200] guest clock delta is within tolerance: 95.508932ms
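The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the ~95ms delta as within tolerance. The sketch below reproduces that comparison in isolation; the 2-second threshold is an assumed value for illustration, not necessarily the one minikube uses.

// Sketch of the clock-skew check logged above, using the values from the log.
package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestClockDelta parses the guest's `date +%s.%N` output and returns its
// offset from the given host time.
func guestClockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed threshold for illustration
	delta, err := guestClockDelta("1727728476.800338586", time.Unix(1727728476, 704829654))
	if err != nil {
		panic(err)
	}
	if delta < -tolerance || delta > tolerance {
		fmt.Println("guest clock out of tolerance:", delta)
	} else {
		fmt.Println("guest clock delta within tolerance:", delta)
	}
}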
	I0930 20:34:36.824487   44409 start.go:83] releasing machines lock for "multinode-103579", held for 1m31.624722762s
	I0930 20:34:36.824517   44409 main.go:141] libmachine: (multinode-103579) Calling .DriverName
	I0930 20:34:36.824824   44409 main.go:141] libmachine: (multinode-103579) Calling .GetIP
	I0930 20:34:36.827320   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:34:36.827767   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:34:36.827797   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:34:36.827983   44409 main.go:141] libmachine: (multinode-103579) Calling .DriverName
	I0930 20:34:36.828568   44409 main.go:141] libmachine: (multinode-103579) Calling .DriverName
	I0930 20:34:36.828747   44409 main.go:141] libmachine: (multinode-103579) Calling .DriverName
	I0930 20:34:36.828813   44409 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 20:34:36.828878   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHHostname
	I0930 20:34:36.828970   44409 ssh_runner.go:195] Run: cat /version.json
	I0930 20:34:36.828987   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHHostname
	I0930 20:34:36.831925   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:34:36.831951   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:34:36.832378   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:34:36.832429   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:34:36.832483   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:34:36.832516   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:34:36.832558   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHPort
	I0930 20:34:36.832712   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHPort
	I0930 20:34:36.832780   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:34:36.832873   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:34:36.832912   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHUsername
	I0930 20:34:36.833006   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHUsername
	I0930 20:34:36.833077   44409 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/multinode-103579/id_rsa Username:docker}
	I0930 20:34:36.833242   44409 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/multinode-103579/id_rsa Username:docker}
	I0930 20:34:36.913257   44409 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I0930 20:34:36.913482   44409 ssh_runner.go:195] Run: systemctl --version
	I0930 20:34:36.955679   44409 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0930 20:34:36.955770   44409 command_runner.go:130] > systemd 252 (252)
	I0930 20:34:36.955806   44409 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0930 20:34:36.955904   44409 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 20:34:37.123145   44409 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0930 20:34:37.129454   44409 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0930 20:34:37.129509   44409 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 20:34:37.129579   44409 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 20:34:37.140093   44409 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0930 20:34:37.140123   44409 start.go:495] detecting cgroup driver to use...
	I0930 20:34:37.140210   44409 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 20:34:37.158154   44409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 20:34:37.173190   44409 docker.go:217] disabling cri-docker service (if available) ...
	I0930 20:34:37.173259   44409 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 20:34:37.188312   44409 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 20:34:37.203120   44409 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 20:34:37.349033   44409 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 20:34:37.498594   44409 docker.go:233] disabling docker service ...
	I0930 20:34:37.498675   44409 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 20:34:37.516391   44409 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 20:34:37.530731   44409 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 20:34:37.676512   44409 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 20:34:37.817443   44409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 20:34:37.831261   44409 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 20:34:37.850195   44409 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
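The step above points crictl at the CRI-O socket by writing /etc/crictl.yaml. A minimal sketch of producing that file follows; it writes to a local crictl.yaml so the sketch can run without root.

// Sketch: write the crictl configuration shown in the log output above.
package main

import "os"

func main() {
	const crictlYAML = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
	if err := os.WriteFile("crictl.yaml", []byte(crictlYAML), 0o644); err != nil {
		panic(err)
	}
}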
	I0930 20:34:37.850741   44409 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 20:34:37.850810   44409 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:34:37.861283   44409 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 20:34:37.861365   44409 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:34:37.871704   44409 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:34:37.882926   44409 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:34:37.893404   44409 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 20:34:37.904336   44409 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:34:37.914593   44409 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:34:37.926919   44409 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:34:37.937582   44409 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 20:34:37.947470   44409 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0930 20:34:37.947591   44409 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 20:34:37.957286   44409 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:34:38.091891   44409 ssh_runner.go:195] Run: sudo systemctl restart crio
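The sed commands above pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to "cgroupfs", and re-add conmon_cgroup = "pod" in the CRI-O drop-in before restarting the service. The sketch below applies equivalent edits to a sample config in memory; the sample contents stand in for /etc/crio/crio.conf.d/02-crio.conf and are not the VM's real file.

// Sketch of the in-place CRI-O drop-in edits performed via sed above.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Pin the pause image.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Switch the cgroup manager to cgroupfs.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).
		ReplaceAllString(conf, "")
	// Re-add conmon_cgroup = "pod" right after cgroup_manager, as the sed '/a' does.
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}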
	I0930 20:34:38.291918   44409 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 20:34:38.291990   44409 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 20:34:38.296787   44409 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0930 20:34:38.296810   44409 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0930 20:34:38.296816   44409 command_runner.go:130] > Device: 0,22	Inode: 1338        Links: 1
	I0930 20:34:38.296823   44409 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0930 20:34:38.296831   44409 command_runner.go:130] > Access: 2024-09-30 20:34:38.159831106 +0000
	I0930 20:34:38.296840   44409 command_runner.go:130] > Modify: 2024-09-30 20:34:38.159831106 +0000
	I0930 20:34:38.296848   44409 command_runner.go:130] > Change: 2024-09-30 20:34:38.159831106 +0000
	I0930 20:34:38.296852   44409 command_runner.go:130] >  Birth: -
	I0930 20:34:38.296881   44409 start.go:563] Will wait 60s for crictl version
	I0930 20:34:38.296931   44409 ssh_runner.go:195] Run: which crictl
	I0930 20:34:38.301146   44409 command_runner.go:130] > /usr/bin/crictl
	I0930 20:34:38.301226   44409 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 20:34:38.342412   44409 command_runner.go:130] > Version:  0.1.0
	I0930 20:34:38.342436   44409 command_runner.go:130] > RuntimeName:  cri-o
	I0930 20:34:38.342442   44409 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0930 20:34:38.342450   44409 command_runner.go:130] > RuntimeApiVersion:  v1
	I0930 20:34:38.342585   44409 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 20:34:38.342664   44409 ssh_runner.go:195] Run: crio --version
	I0930 20:34:38.371124   44409 command_runner.go:130] > crio version 1.29.1
	I0930 20:34:38.371153   44409 command_runner.go:130] > Version:        1.29.1
	I0930 20:34:38.371162   44409 command_runner.go:130] > GitCommit:      unknown
	I0930 20:34:38.371167   44409 command_runner.go:130] > GitCommitDate:  unknown
	I0930 20:34:38.371173   44409 command_runner.go:130] > GitTreeState:   clean
	I0930 20:34:38.371180   44409 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I0930 20:34:38.371186   44409 command_runner.go:130] > GoVersion:      go1.21.6
	I0930 20:34:38.371191   44409 command_runner.go:130] > Compiler:       gc
	I0930 20:34:38.371198   44409 command_runner.go:130] > Platform:       linux/amd64
	I0930 20:34:38.371206   44409 command_runner.go:130] > Linkmode:       dynamic
	I0930 20:34:38.371215   44409 command_runner.go:130] > BuildTags:      
	I0930 20:34:38.371224   44409 command_runner.go:130] >   containers_image_ostree_stub
	I0930 20:34:38.371234   44409 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0930 20:34:38.371242   44409 command_runner.go:130] >   btrfs_noversion
	I0930 20:34:38.371251   44409 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0930 20:34:38.371260   44409 command_runner.go:130] >   libdm_no_deferred_remove
	I0930 20:34:38.371269   44409 command_runner.go:130] >   seccomp
	I0930 20:34:38.371283   44409 command_runner.go:130] > LDFlags:          unknown
	I0930 20:34:38.371338   44409 command_runner.go:130] > SeccompEnabled:   true
	I0930 20:34:38.371364   44409 command_runner.go:130] > AppArmorEnabled:  false
	I0930 20:34:38.371443   44409 ssh_runner.go:195] Run: crio --version
	I0930 20:34:38.400730   44409 command_runner.go:130] > crio version 1.29.1
	I0930 20:34:38.400751   44409 command_runner.go:130] > Version:        1.29.1
	I0930 20:34:38.400763   44409 command_runner.go:130] > GitCommit:      unknown
	I0930 20:34:38.400767   44409 command_runner.go:130] > GitCommitDate:  unknown
	I0930 20:34:38.400770   44409 command_runner.go:130] > GitTreeState:   clean
	I0930 20:34:38.400776   44409 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I0930 20:34:38.400780   44409 command_runner.go:130] > GoVersion:      go1.21.6
	I0930 20:34:38.400784   44409 command_runner.go:130] > Compiler:       gc
	I0930 20:34:38.400788   44409 command_runner.go:130] > Platform:       linux/amd64
	I0930 20:34:38.400795   44409 command_runner.go:130] > Linkmode:       dynamic
	I0930 20:34:38.400799   44409 command_runner.go:130] > BuildTags:      
	I0930 20:34:38.400804   44409 command_runner.go:130] >   containers_image_ostree_stub
	I0930 20:34:38.400808   44409 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0930 20:34:38.400813   44409 command_runner.go:130] >   btrfs_noversion
	I0930 20:34:38.400820   44409 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0930 20:34:38.400830   44409 command_runner.go:130] >   libdm_no_deferred_remove
	I0930 20:34:38.400836   44409 command_runner.go:130] >   seccomp
	I0930 20:34:38.400847   44409 command_runner.go:130] > LDFlags:          unknown
	I0930 20:34:38.400854   44409 command_runner.go:130] > SeccompEnabled:   true
	I0930 20:34:38.400861   44409 command_runner.go:130] > AppArmorEnabled:  false
	I0930 20:34:38.403170   44409 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 20:34:38.404656   44409 main.go:141] libmachine: (multinode-103579) Calling .GetIP
	I0930 20:34:38.407302   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:34:38.407661   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:34:38.407692   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:34:38.407932   44409 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 20:34:38.412262   44409 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0930 20:34:38.412395   44409 kubeadm.go:883] updating cluster {Name:multinode-103579 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-103579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 20:34:38.412529   44409 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 20:34:38.412577   44409 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 20:34:38.454352   44409 command_runner.go:130] > {
	I0930 20:34:38.454376   44409 command_runner.go:130] >   "images": [
	I0930 20:34:38.454382   44409 command_runner.go:130] >     {
	I0930 20:34:38.454392   44409 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0930 20:34:38.454399   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.454407   44409 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0930 20:34:38.454414   44409 command_runner.go:130] >       ],
	I0930 20:34:38.454419   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.454446   44409 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0930 20:34:38.454457   44409 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0930 20:34:38.454489   44409 command_runner.go:130] >       ],
	I0930 20:34:38.454501   44409 command_runner.go:130] >       "size": "87190579",
	I0930 20:34:38.454507   44409 command_runner.go:130] >       "uid": null,
	I0930 20:34:38.454513   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.454525   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.454534   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.454553   44409 command_runner.go:130] >     },
	I0930 20:34:38.454562   44409 command_runner.go:130] >     {
	I0930 20:34:38.454571   44409 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0930 20:34:38.454578   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.454586   44409 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0930 20:34:38.454595   44409 command_runner.go:130] >       ],
	I0930 20:34:38.454604   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.454616   44409 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0930 20:34:38.454632   44409 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0930 20:34:38.454642   44409 command_runner.go:130] >       ],
	I0930 20:34:38.454651   44409 command_runner.go:130] >       "size": "1363676",
	I0930 20:34:38.454660   44409 command_runner.go:130] >       "uid": null,
	I0930 20:34:38.454675   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.454684   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.454696   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.454704   44409 command_runner.go:130] >     },
	I0930 20:34:38.454709   44409 command_runner.go:130] >     {
	I0930 20:34:38.454720   44409 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0930 20:34:38.454729   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.454738   44409 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0930 20:34:38.454746   44409 command_runner.go:130] >       ],
	I0930 20:34:38.454755   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.454771   44409 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0930 20:34:38.454788   44409 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0930 20:34:38.454797   44409 command_runner.go:130] >       ],
	I0930 20:34:38.454806   44409 command_runner.go:130] >       "size": "31470524",
	I0930 20:34:38.454814   44409 command_runner.go:130] >       "uid": null,
	I0930 20:34:38.454822   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.454830   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.454838   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.454847   44409 command_runner.go:130] >     },
	I0930 20:34:38.454854   44409 command_runner.go:130] >     {
	I0930 20:34:38.454867   44409 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0930 20:34:38.454875   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.454886   44409 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0930 20:34:38.454893   44409 command_runner.go:130] >       ],
	I0930 20:34:38.454902   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.454919   44409 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0930 20:34:38.454938   44409 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0930 20:34:38.454946   44409 command_runner.go:130] >       ],
	I0930 20:34:38.454954   44409 command_runner.go:130] >       "size": "63273227",
	I0930 20:34:38.454963   44409 command_runner.go:130] >       "uid": null,
	I0930 20:34:38.454971   44409 command_runner.go:130] >       "username": "nonroot",
	I0930 20:34:38.454981   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.454990   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.454996   44409 command_runner.go:130] >     },
	I0930 20:34:38.455003   44409 command_runner.go:130] >     {
	I0930 20:34:38.455016   44409 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0930 20:34:38.455025   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.455034   44409 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0930 20:34:38.455042   44409 command_runner.go:130] >       ],
	I0930 20:34:38.455050   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.455064   44409 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0930 20:34:38.455079   44409 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0930 20:34:38.455087   44409 command_runner.go:130] >       ],
	I0930 20:34:38.455095   44409 command_runner.go:130] >       "size": "149009664",
	I0930 20:34:38.455104   44409 command_runner.go:130] >       "uid": {
	I0930 20:34:38.455111   44409 command_runner.go:130] >         "value": "0"
	I0930 20:34:38.455119   44409 command_runner.go:130] >       },
	I0930 20:34:38.455126   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.455136   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.455145   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.455152   44409 command_runner.go:130] >     },
	I0930 20:34:38.455160   44409 command_runner.go:130] >     {
	I0930 20:34:38.455170   44409 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0930 20:34:38.455179   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.455189   44409 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0930 20:34:38.455197   44409 command_runner.go:130] >       ],
	I0930 20:34:38.455204   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.455219   44409 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0930 20:34:38.455234   44409 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0930 20:34:38.455242   44409 command_runner.go:130] >       ],
	I0930 20:34:38.455251   44409 command_runner.go:130] >       "size": "95237600",
	I0930 20:34:38.455261   44409 command_runner.go:130] >       "uid": {
	I0930 20:34:38.455271   44409 command_runner.go:130] >         "value": "0"
	I0930 20:34:38.455277   44409 command_runner.go:130] >       },
	I0930 20:34:38.455287   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.455296   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.455303   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.455311   44409 command_runner.go:130] >     },
	I0930 20:34:38.455317   44409 command_runner.go:130] >     {
	I0930 20:34:38.455330   44409 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0930 20:34:38.455340   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.455351   44409 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0930 20:34:38.455360   44409 command_runner.go:130] >       ],
	I0930 20:34:38.455367   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.455383   44409 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0930 20:34:38.455399   44409 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0930 20:34:38.455407   44409 command_runner.go:130] >       ],
	I0930 20:34:38.455415   44409 command_runner.go:130] >       "size": "89437508",
	I0930 20:34:38.455424   44409 command_runner.go:130] >       "uid": {
	I0930 20:34:38.455432   44409 command_runner.go:130] >         "value": "0"
	I0930 20:34:38.455440   44409 command_runner.go:130] >       },
	I0930 20:34:38.455447   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.455457   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.455464   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.455472   44409 command_runner.go:130] >     },
	I0930 20:34:38.455478   44409 command_runner.go:130] >     {
	I0930 20:34:38.455492   44409 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0930 20:34:38.455501   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.455511   44409 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0930 20:34:38.455520   44409 command_runner.go:130] >       ],
	I0930 20:34:38.455544   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.455567   44409 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0930 20:34:38.455582   44409 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0930 20:34:38.455590   44409 command_runner.go:130] >       ],
	I0930 20:34:38.455598   44409 command_runner.go:130] >       "size": "92733849",
	I0930 20:34:38.455609   44409 command_runner.go:130] >       "uid": null,
	I0930 20:34:38.455620   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.455628   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.455635   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.455639   44409 command_runner.go:130] >     },
	I0930 20:34:38.455645   44409 command_runner.go:130] >     {
	I0930 20:34:38.455654   44409 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0930 20:34:38.455662   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.455670   44409 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0930 20:34:38.455676   44409 command_runner.go:130] >       ],
	I0930 20:34:38.455684   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.455699   44409 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0930 20:34:38.455714   44409 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0930 20:34:38.455722   44409 command_runner.go:130] >       ],
	I0930 20:34:38.455731   44409 command_runner.go:130] >       "size": "68420934",
	I0930 20:34:38.455748   44409 command_runner.go:130] >       "uid": {
	I0930 20:34:38.455759   44409 command_runner.go:130] >         "value": "0"
	I0930 20:34:38.455768   44409 command_runner.go:130] >       },
	I0930 20:34:38.455777   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.455785   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.455793   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.455802   44409 command_runner.go:130] >     },
	I0930 20:34:38.455809   44409 command_runner.go:130] >     {
	I0930 20:34:38.455822   44409 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0930 20:34:38.455832   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.455843   44409 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0930 20:34:38.455852   44409 command_runner.go:130] >       ],
	I0930 20:34:38.455859   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.455874   44409 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0930 20:34:38.455889   44409 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0930 20:34:38.455898   44409 command_runner.go:130] >       ],
	I0930 20:34:38.455906   44409 command_runner.go:130] >       "size": "742080",
	I0930 20:34:38.455914   44409 command_runner.go:130] >       "uid": {
	I0930 20:34:38.455923   44409 command_runner.go:130] >         "value": "65535"
	I0930 20:34:38.455932   44409 command_runner.go:130] >       },
	I0930 20:34:38.455941   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.455948   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.455957   44409 command_runner.go:130] >       "pinned": true
	I0930 20:34:38.455964   44409 command_runner.go:130] >     }
	I0930 20:34:38.455971   44409 command_runner.go:130] >   ]
	I0930 20:34:38.455978   44409 command_runner.go:130] > }
	I0930 20:34:38.456158   44409 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 20:34:38.456171   44409 crio.go:433] Images already preloaded, skipping extraction
	I0930 20:34:38.456238   44409 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 20:34:38.489704   44409 command_runner.go:130] > {
	I0930 20:34:38.489734   44409 command_runner.go:130] >   "images": [
	I0930 20:34:38.489740   44409 command_runner.go:130] >     {
	I0930 20:34:38.489752   44409 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0930 20:34:38.489763   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.489773   44409 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0930 20:34:38.489779   44409 command_runner.go:130] >       ],
	I0930 20:34:38.489785   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.489798   44409 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0930 20:34:38.489808   44409 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0930 20:34:38.489814   44409 command_runner.go:130] >       ],
	I0930 20:34:38.489822   44409 command_runner.go:130] >       "size": "87190579",
	I0930 20:34:38.489829   44409 command_runner.go:130] >       "uid": null,
	I0930 20:34:38.489838   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.489848   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.489858   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.489863   44409 command_runner.go:130] >     },
	I0930 20:34:38.489866   44409 command_runner.go:130] >     {
	I0930 20:34:38.489879   44409 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0930 20:34:38.489883   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.489890   44409 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0930 20:34:38.489896   44409 command_runner.go:130] >       ],
	I0930 20:34:38.489900   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.489907   44409 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0930 20:34:38.489923   44409 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0930 20:34:38.489927   44409 command_runner.go:130] >       ],
	I0930 20:34:38.489931   44409 command_runner.go:130] >       "size": "1363676",
	I0930 20:34:38.489935   44409 command_runner.go:130] >       "uid": null,
	I0930 20:34:38.489941   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.489947   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.489951   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.489956   44409 command_runner.go:130] >     },
	I0930 20:34:38.489960   44409 command_runner.go:130] >     {
	I0930 20:34:38.489967   44409 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0930 20:34:38.489974   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.489979   44409 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0930 20:34:38.489986   44409 command_runner.go:130] >       ],
	I0930 20:34:38.489990   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.490000   44409 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0930 20:34:38.490010   44409 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0930 20:34:38.490016   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490020   44409 command_runner.go:130] >       "size": "31470524",
	I0930 20:34:38.490026   44409 command_runner.go:130] >       "uid": null,
	I0930 20:34:38.490029   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.490034   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.490040   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.490044   44409 command_runner.go:130] >     },
	I0930 20:34:38.490048   44409 command_runner.go:130] >     {
	I0930 20:34:38.490054   44409 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0930 20:34:38.490059   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.490065   44409 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0930 20:34:38.490070   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490074   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.490083   44409 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0930 20:34:38.490096   44409 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0930 20:34:38.490101   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490105   44409 command_runner.go:130] >       "size": "63273227",
	I0930 20:34:38.490112   44409 command_runner.go:130] >       "uid": null,
	I0930 20:34:38.490116   44409 command_runner.go:130] >       "username": "nonroot",
	I0930 20:34:38.490122   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.490126   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.490131   44409 command_runner.go:130] >     },
	I0930 20:34:38.490135   44409 command_runner.go:130] >     {
	I0930 20:34:38.490143   44409 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0930 20:34:38.490150   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.490155   44409 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0930 20:34:38.490160   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490164   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.490173   44409 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0930 20:34:38.490181   44409 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0930 20:34:38.490187   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490191   44409 command_runner.go:130] >       "size": "149009664",
	I0930 20:34:38.490197   44409 command_runner.go:130] >       "uid": {
	I0930 20:34:38.490201   44409 command_runner.go:130] >         "value": "0"
	I0930 20:34:38.490206   44409 command_runner.go:130] >       },
	I0930 20:34:38.490210   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.490216   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.490220   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.490223   44409 command_runner.go:130] >     },
	I0930 20:34:38.490227   44409 command_runner.go:130] >     {
	I0930 20:34:38.490235   44409 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0930 20:34:38.490241   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.490246   44409 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0930 20:34:38.490249   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490252   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.490291   44409 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0930 20:34:38.490298   44409 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0930 20:34:38.490301   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490307   44409 command_runner.go:130] >       "size": "95237600",
	I0930 20:34:38.490311   44409 command_runner.go:130] >       "uid": {
	I0930 20:34:38.490316   44409 command_runner.go:130] >         "value": "0"
	I0930 20:34:38.490319   44409 command_runner.go:130] >       },
	I0930 20:34:38.490323   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.490330   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.490334   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.490340   44409 command_runner.go:130] >     },
	I0930 20:34:38.490343   44409 command_runner.go:130] >     {
	I0930 20:34:38.490351   44409 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0930 20:34:38.490358   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.490363   44409 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0930 20:34:38.490369   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490373   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.490383   44409 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0930 20:34:38.490393   44409 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0930 20:34:38.490398   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490402   44409 command_runner.go:130] >       "size": "89437508",
	I0930 20:34:38.490406   44409 command_runner.go:130] >       "uid": {
	I0930 20:34:38.490411   44409 command_runner.go:130] >         "value": "0"
	I0930 20:34:38.490415   44409 command_runner.go:130] >       },
	I0930 20:34:38.490420   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.490425   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.490430   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.490434   44409 command_runner.go:130] >     },
	I0930 20:34:38.490439   44409 command_runner.go:130] >     {
	I0930 20:34:38.490445   44409 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0930 20:34:38.490451   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.490455   44409 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0930 20:34:38.490461   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490465   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.490480   44409 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0930 20:34:38.490490   44409 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0930 20:34:38.490494   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490498   44409 command_runner.go:130] >       "size": "92733849",
	I0930 20:34:38.490503   44409 command_runner.go:130] >       "uid": null,
	I0930 20:34:38.490508   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.490515   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.490519   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.490524   44409 command_runner.go:130] >     },
	I0930 20:34:38.490527   44409 command_runner.go:130] >     {
	I0930 20:34:38.490535   44409 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0930 20:34:38.490540   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.490544   44409 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0930 20:34:38.490551   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490555   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.490564   44409 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0930 20:34:38.490578   44409 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0930 20:34:38.490586   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490595   44409 command_runner.go:130] >       "size": "68420934",
	I0930 20:34:38.490601   44409 command_runner.go:130] >       "uid": {
	I0930 20:34:38.490607   44409 command_runner.go:130] >         "value": "0"
	I0930 20:34:38.490617   44409 command_runner.go:130] >       },
	I0930 20:34:38.490623   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.490629   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.490635   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.490640   44409 command_runner.go:130] >     },
	I0930 20:34:38.490644   44409 command_runner.go:130] >     {
	I0930 20:34:38.490653   44409 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0930 20:34:38.490657   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.490662   44409 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0930 20:34:38.490669   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490675   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.490682   44409 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0930 20:34:38.490692   44409 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0930 20:34:38.490698   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490702   44409 command_runner.go:130] >       "size": "742080",
	I0930 20:34:38.490708   44409 command_runner.go:130] >       "uid": {
	I0930 20:34:38.490712   44409 command_runner.go:130] >         "value": "65535"
	I0930 20:34:38.490717   44409 command_runner.go:130] >       },
	I0930 20:34:38.490722   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.490727   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.490732   44409 command_runner.go:130] >       "pinned": true
	I0930 20:34:38.490737   44409 command_runner.go:130] >     }
	I0930 20:34:38.490740   44409 command_runner.go:130] >   ]
	I0930 20:34:38.490744   44409 command_runner.go:130] > }
	I0930 20:34:38.490860   44409 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 20:34:38.490870   44409 cache_images.go:84] Images are preloaded, skipping loading
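	[Editor's note] The JSON above is the image inventory reported by the CRI-O image service on the guest, which minikube uses to decide that the preload can be skipped. As an illustrative aside only (not minikube's actual code), the per-image fields visible in the log (id, repoTags, repoDigests, size, uid, username, pinned) can be decoded with a small Go program; the top-level "images" key is an assumption based on the CRI ListImages JSON layout, and the input file name is hypothetical (e.g. the output of `crictl images -o json`).

	// sketch: decode the image inventory shown in the log above
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// imageUID mirrors the {"value": "0"} / null shape seen in the log.
	type imageUID struct {
		Value string `json:"value"`
	}

	// criImage lists only the fields that appear in the captured output.
	type criImage struct {
		ID          string    `json:"id"`
		RepoTags    []string  `json:"repoTags"`
		RepoDigests []string  `json:"repoDigests"`
		Size        string    `json:"size"`
		UID         *imageUID `json:"uid"`
		Username    string    `json:"username"`
		Pinned      bool      `json:"pinned"`
	}

	// imageList assumes the array sits under a top-level "images" key.
	type imageList struct {
		Images []criImage `json:"images"`
	}

	func main() {
		raw, err := os.ReadFile("images.json") // hypothetical path, e.g. `crictl images -o json` saved to a file
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, img := range list.Images {
			name := img.ID
			if len(img.RepoTags) > 0 {
				name = img.RepoTags[0]
			}
			fmt.Printf("%-45s pinned=%v size=%s\n", name, img.Pinned, img.Size)
		}
	}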
	I0930 20:34:38.490879   44409 kubeadm.go:934] updating node { 192.168.39.58 8443 v1.31.1 crio true true} ...
	I0930 20:34:38.491001   44409 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-103579 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-103579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
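	[Editor's note] The kubelet drop-in above is rendered from the node settings in the cluster config (Kubernetes v1.31.1, hostname override multinode-103579, node IP 192.168.39.58). The following is a minimal sketch of producing such an ExecStart line with Go's text/template, using only the flags visible in the log; the template shape and type names are illustrative assumptions, not minikube's actual implementation.

	// sketch: render a kubelet systemd drop-in like the one logged above
	package main

	import (
		"os"
		"text/template"
	)

	// kubeletParams holds the values that vary per node in the log above.
	type kubeletParams struct {
		KubernetesVersion string
		NodeName          string
		NodeIP            string
	}

	const unitTmpl = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unitTmpl))
		// Values taken from the captured log entry.
		p := kubeletParams{
			KubernetesVersion: "v1.31.1",
			NodeName:          "multinode-103579",
			NodeIP:            "192.168.39.58",
		}
		if err := t.Execute(os.Stdout, p); err != nil {
			os.Exit(1)
		}
	}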
	I0930 20:34:38.491064   44409 ssh_runner.go:195] Run: crio config
	I0930 20:34:38.532237   44409 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0930 20:34:38.532267   44409 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0930 20:34:38.532277   44409 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0930 20:34:38.532282   44409 command_runner.go:130] > #
	I0930 20:34:38.532309   44409 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0930 20:34:38.532319   44409 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0930 20:34:38.532328   44409 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0930 20:34:38.532336   44409 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0930 20:34:38.532341   44409 command_runner.go:130] > # reload'.
	I0930 20:34:38.532349   44409 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0930 20:34:38.532359   44409 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0930 20:34:38.532369   44409 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0930 20:34:38.532382   44409 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0930 20:34:38.532388   44409 command_runner.go:130] > [crio]
	I0930 20:34:38.532399   44409 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0930 20:34:38.532412   44409 command_runner.go:130] > # containers images, in this directory.
	I0930 20:34:38.532423   44409 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0930 20:34:38.532439   44409 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0930 20:34:38.532451   44409 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0930 20:34:38.532465   44409 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0930 20:34:38.532698   44409 command_runner.go:130] > # imagestore = ""
	I0930 20:34:38.532722   44409 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0930 20:34:38.532732   44409 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0930 20:34:38.532871   44409 command_runner.go:130] > storage_driver = "overlay"
	I0930 20:34:38.532891   44409 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0930 20:34:38.532901   44409 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0930 20:34:38.532908   44409 command_runner.go:130] > storage_option = [
	I0930 20:34:38.533047   44409 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0930 20:34:38.533062   44409 command_runner.go:130] > ]
	I0930 20:34:38.533073   44409 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0930 20:34:38.533082   44409 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0930 20:34:38.533292   44409 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0930 20:34:38.533307   44409 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0930 20:34:38.533319   44409 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0930 20:34:38.533328   44409 command_runner.go:130] > # always happen on a node reboot
	I0930 20:34:38.533612   44409 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0930 20:34:38.533635   44409 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0930 20:34:38.533645   44409 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0930 20:34:38.533655   44409 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0930 20:34:38.533817   44409 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0930 20:34:38.533837   44409 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0930 20:34:38.533851   44409 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0930 20:34:38.534016   44409 command_runner.go:130] > # internal_wipe = true
	I0930 20:34:38.534034   44409 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0930 20:34:38.534044   44409 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0930 20:34:38.534486   44409 command_runner.go:130] > # internal_repair = false
	I0930 20:34:38.534503   44409 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0930 20:34:38.534514   44409 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0930 20:34:38.534526   44409 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0930 20:34:38.534704   44409 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0930 20:34:38.534719   44409 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0930 20:34:38.534725   44409 command_runner.go:130] > [crio.api]
	I0930 20:34:38.534733   44409 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0930 20:34:38.534915   44409 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0930 20:34:38.534936   44409 command_runner.go:130] > # IP address on which the stream server will listen.
	I0930 20:34:38.535152   44409 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0930 20:34:38.535169   44409 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0930 20:34:38.535178   44409 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0930 20:34:38.535373   44409 command_runner.go:130] > # stream_port = "0"
	I0930 20:34:38.535385   44409 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0930 20:34:38.535658   44409 command_runner.go:130] > # stream_enable_tls = false
	I0930 20:34:38.535674   44409 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0930 20:34:38.535864   44409 command_runner.go:130] > # stream_idle_timeout = ""
	I0930 20:34:38.535880   44409 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0930 20:34:38.535890   44409 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0930 20:34:38.535899   44409 command_runner.go:130] > # minutes.
	I0930 20:34:38.536060   44409 command_runner.go:130] > # stream_tls_cert = ""
	I0930 20:34:38.536074   44409 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0930 20:34:38.536080   44409 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0930 20:34:38.536380   44409 command_runner.go:130] > # stream_tls_key = ""
	I0930 20:34:38.536398   44409 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0930 20:34:38.536409   44409 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0930 20:34:38.536427   44409 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0930 20:34:38.536615   44409 command_runner.go:130] > # stream_tls_ca = ""
	I0930 20:34:38.536627   44409 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0930 20:34:38.537111   44409 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0930 20:34:38.537131   44409 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0930 20:34:38.537140   44409 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0930 20:34:38.537151   44409 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0930 20:34:38.537162   44409 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0930 20:34:38.537168   44409 command_runner.go:130] > [crio.runtime]
	I0930 20:34:38.537179   44409 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0930 20:34:38.537189   44409 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0930 20:34:38.537197   44409 command_runner.go:130] > # "nofile=1024:2048"
	I0930 20:34:38.537206   44409 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0930 20:34:38.537215   44409 command_runner.go:130] > # default_ulimits = [
	I0930 20:34:38.537219   44409 command_runner.go:130] > # ]
	I0930 20:34:38.537229   44409 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0930 20:34:38.537241   44409 command_runner.go:130] > # no_pivot = false
	I0930 20:34:38.537249   44409 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0930 20:34:38.537261   44409 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0930 20:34:38.537271   44409 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0930 20:34:38.537280   44409 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0930 20:34:38.537290   44409 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0930 20:34:38.537308   44409 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0930 20:34:38.537319   44409 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0930 20:34:38.537326   44409 command_runner.go:130] > # Cgroup setting for conmon
	I0930 20:34:38.537339   44409 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0930 20:34:38.537349   44409 command_runner.go:130] > conmon_cgroup = "pod"
	I0930 20:34:38.537358   44409 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0930 20:34:38.537368   44409 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0930 20:34:38.537382   44409 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0930 20:34:38.537391   44409 command_runner.go:130] > conmon_env = [
	I0930 20:34:38.537405   44409 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0930 20:34:38.537414   44409 command_runner.go:130] > ]
	I0930 20:34:38.537425   44409 command_runner.go:130] > # Additional environment variables to set for all the
	I0930 20:34:38.537437   44409 command_runner.go:130] > # containers. These are overridden if set in the
	I0930 20:34:38.537449   44409 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0930 20:34:38.537456   44409 command_runner.go:130] > # default_env = [
	I0930 20:34:38.537465   44409 command_runner.go:130] > # ]
	I0930 20:34:38.537477   44409 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0930 20:34:38.537491   44409 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0930 20:34:38.537503   44409 command_runner.go:130] > # selinux = false
	I0930 20:34:38.537514   44409 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0930 20:34:38.537527   44409 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0930 20:34:38.537536   44409 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0930 20:34:38.537545   44409 command_runner.go:130] > # seccomp_profile = ""
	I0930 20:34:38.537554   44409 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0930 20:34:38.537567   44409 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0930 20:34:38.537579   44409 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0930 20:34:38.537589   44409 command_runner.go:130] > # which might increase security.
	I0930 20:34:38.537599   44409 command_runner.go:130] > # This option is currently deprecated,
	I0930 20:34:38.537608   44409 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0930 20:34:38.537618   44409 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0930 20:34:38.537628   44409 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0930 20:34:38.537641   44409 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0930 20:34:38.537654   44409 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0930 20:34:38.537668   44409 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0930 20:34:38.537679   44409 command_runner.go:130] > # This option supports live configuration reload.
	I0930 20:34:38.537686   44409 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0930 20:34:38.537698   44409 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0930 20:34:38.537706   44409 command_runner.go:130] > # the cgroup blockio controller.
	I0930 20:34:38.537713   44409 command_runner.go:130] > # blockio_config_file = ""
	I0930 20:34:38.537727   44409 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0930 20:34:38.537736   44409 command_runner.go:130] > # blockio parameters.
	I0930 20:34:38.537743   44409 command_runner.go:130] > # blockio_reload = false
	I0930 20:34:38.537757   44409 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0930 20:34:38.537767   44409 command_runner.go:130] > # irqbalance daemon.
	I0930 20:34:38.537776   44409 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0930 20:34:38.537789   44409 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0930 20:34:38.537805   44409 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0930 20:34:38.537816   44409 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0930 20:34:38.537831   44409 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0930 20:34:38.537844   44409 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0930 20:34:38.537854   44409 command_runner.go:130] > # This option supports live configuration reload.
	I0930 20:34:38.537866   44409 command_runner.go:130] > # rdt_config_file = ""
	I0930 20:34:38.537876   44409 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0930 20:34:38.537885   44409 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0930 20:34:38.537906   44409 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0930 20:34:38.537916   44409 command_runner.go:130] > # separate_pull_cgroup = ""
	I0930 20:34:38.537926   44409 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0930 20:34:38.537940   44409 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0930 20:34:38.537949   44409 command_runner.go:130] > # will be added.
	I0930 20:34:38.537955   44409 command_runner.go:130] > # default_capabilities = [
	I0930 20:34:38.537962   44409 command_runner.go:130] > # 	"CHOWN",
	I0930 20:34:38.537971   44409 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0930 20:34:38.537977   44409 command_runner.go:130] > # 	"FSETID",
	I0930 20:34:38.537986   44409 command_runner.go:130] > # 	"FOWNER",
	I0930 20:34:38.537994   44409 command_runner.go:130] > # 	"SETGID",
	I0930 20:34:38.538005   44409 command_runner.go:130] > # 	"SETUID",
	I0930 20:34:38.538016   44409 command_runner.go:130] > # 	"SETPCAP",
	I0930 20:34:38.538025   44409 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0930 20:34:38.538030   44409 command_runner.go:130] > # 	"KILL",
	I0930 20:34:38.538039   44409 command_runner.go:130] > # ]
	I0930 20:34:38.538051   44409 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0930 20:34:38.538065   44409 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0930 20:34:38.538075   44409 command_runner.go:130] > # add_inheritable_capabilities = false
	I0930 20:34:38.538089   44409 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0930 20:34:38.538103   44409 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0930 20:34:38.538112   44409 command_runner.go:130] > default_sysctls = [
	I0930 20:34:38.538120   44409 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0930 20:34:38.538128   44409 command_runner.go:130] > ]
	I0930 20:34:38.538136   44409 command_runner.go:130] > # List of devices on the host that a
	I0930 20:34:38.538149   44409 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0930 20:34:38.538158   44409 command_runner.go:130] > # allowed_devices = [
	I0930 20:34:38.538165   44409 command_runner.go:130] > # 	"/dev/fuse",
	I0930 20:34:38.538172   44409 command_runner.go:130] > # ]
	I0930 20:34:38.538181   44409 command_runner.go:130] > # List of additional devices. specified as
	I0930 20:34:38.538197   44409 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0930 20:34:38.538208   44409 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0930 20:34:38.538220   44409 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0930 20:34:38.538229   44409 command_runner.go:130] > # additional_devices = [
	I0930 20:34:38.538234   44409 command_runner.go:130] > # ]
	I0930 20:34:38.538247   44409 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0930 20:34:38.538256   44409 command_runner.go:130] > # cdi_spec_dirs = [
	I0930 20:34:38.538260   44409 command_runner.go:130] > # 	"/etc/cdi",
	I0930 20:34:38.538265   44409 command_runner.go:130] > # 	"/var/run/cdi",
	I0930 20:34:38.538271   44409 command_runner.go:130] > # ]
	I0930 20:34:38.538282   44409 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0930 20:34:38.538297   44409 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0930 20:34:38.538311   44409 command_runner.go:130] > # Defaults to false.
	I0930 20:34:38.538321   44409 command_runner.go:130] > # device_ownership_from_security_context = false
	I0930 20:34:38.538331   44409 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0930 20:34:38.538341   44409 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0930 20:34:38.538350   44409 command_runner.go:130] > # hooks_dir = [
	I0930 20:34:38.538358   44409 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0930 20:34:38.538366   44409 command_runner.go:130] > # ]
	I0930 20:34:38.538375   44409 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0930 20:34:38.538389   44409 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0930 20:34:38.538400   44409 command_runner.go:130] > # its default mounts from the following two files:
	I0930 20:34:38.538408   44409 command_runner.go:130] > #
	I0930 20:34:38.538418   44409 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0930 20:34:38.538431   44409 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0930 20:34:38.538444   44409 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0930 20:34:38.538453   44409 command_runner.go:130] > #
	I0930 20:34:38.538463   44409 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0930 20:34:38.538476   44409 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0930 20:34:38.538489   44409 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0930 20:34:38.538501   44409 command_runner.go:130] > #      only add mounts it finds in this file.
	I0930 20:34:38.538509   44409 command_runner.go:130] > #
	I0930 20:34:38.538516   44409 command_runner.go:130] > # default_mounts_file = ""
	I0930 20:34:38.538527   44409 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0930 20:34:38.538542   44409 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0930 20:34:38.538552   44409 command_runner.go:130] > pids_limit = 1024
	I0930 20:34:38.538562   44409 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0930 20:34:38.538573   44409 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0930 20:34:38.538586   44409 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0930 20:34:38.538600   44409 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0930 20:34:38.538613   44409 command_runner.go:130] > # log_size_max = -1
	I0930 20:34:38.538627   44409 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0930 20:34:38.538636   44409 command_runner.go:130] > # log_to_journald = false
	I0930 20:34:38.538645   44409 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0930 20:34:38.538656   44409 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0930 20:34:38.538667   44409 command_runner.go:130] > # Path to directory for container attach sockets.
	I0930 20:34:38.538677   44409 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0930 20:34:38.538686   44409 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0930 20:34:38.538697   44409 command_runner.go:130] > # bind_mount_prefix = ""
	I0930 20:34:38.538708   44409 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0930 20:34:38.538720   44409 command_runner.go:130] > # read_only = false
	I0930 20:34:38.538730   44409 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0930 20:34:38.538742   44409 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0930 20:34:38.538753   44409 command_runner.go:130] > # live configuration reload.
	I0930 20:34:38.538760   44409 command_runner.go:130] > # log_level = "info"
	I0930 20:34:38.538771   44409 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0930 20:34:38.538782   44409 command_runner.go:130] > # This option supports live configuration reload.
	I0930 20:34:38.538791   44409 command_runner.go:130] > # log_filter = ""
	I0930 20:34:38.538802   44409 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0930 20:34:38.538814   44409 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0930 20:34:38.538822   44409 command_runner.go:130] > # separated by comma.
	I0930 20:34:38.538836   44409 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0930 20:34:38.538855   44409 command_runner.go:130] > # uid_mappings = ""
	I0930 20:34:38.538869   44409 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0930 20:34:38.538881   44409 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0930 20:34:38.538888   44409 command_runner.go:130] > # separated by comma.
	I0930 20:34:38.538905   44409 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0930 20:34:38.538911   44409 command_runner.go:130] > # gid_mappings = ""
	I0930 20:34:38.538921   44409 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0930 20:34:38.538930   44409 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0930 20:34:38.538943   44409 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0930 20:34:38.538956   44409 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0930 20:34:38.538964   44409 command_runner.go:130] > # minimum_mappable_uid = -1
	I0930 20:34:38.538974   44409 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0930 20:34:38.538984   44409 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0930 20:34:38.538997   44409 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0930 20:34:38.539009   44409 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0930 20:34:38.539018   44409 command_runner.go:130] > # minimum_mappable_gid = -1
	I0930 20:34:38.539027   44409 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0930 20:34:38.539038   44409 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0930 20:34:38.539049   44409 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0930 20:34:38.539055   44409 command_runner.go:130] > # ctr_stop_timeout = 30
	I0930 20:34:38.539067   44409 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0930 20:34:38.539079   44409 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0930 20:34:38.539090   44409 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0930 20:34:38.539101   44409 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0930 20:34:38.539110   44409 command_runner.go:130] > drop_infra_ctr = false
	I0930 20:34:38.539120   44409 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0930 20:34:38.539135   44409 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0930 20:34:38.539149   44409 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0930 20:34:38.539159   44409 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0930 20:34:38.539169   44409 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0930 20:34:38.539181   44409 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0930 20:34:38.539191   44409 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0930 20:34:38.539204   44409 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0930 20:34:38.539213   44409 command_runner.go:130] > # shared_cpuset = ""
	I0930 20:34:38.539223   44409 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0930 20:34:38.539233   44409 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0930 20:34:38.539243   44409 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0930 20:34:38.539257   44409 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0930 20:34:38.539263   44409 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0930 20:34:38.539272   44409 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0930 20:34:38.539286   44409 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0930 20:34:38.539297   44409 command_runner.go:130] > # enable_criu_support = false
	I0930 20:34:38.539314   44409 command_runner.go:130] > # Enable/disable the generation of the container,
	I0930 20:34:38.539327   44409 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0930 20:34:38.539337   44409 command_runner.go:130] > # enable_pod_events = false
	I0930 20:34:38.539346   44409 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0930 20:34:38.539358   44409 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0930 20:34:38.539367   44409 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0930 20:34:38.539374   44409 command_runner.go:130] > # default_runtime = "runc"
	I0930 20:34:38.539386   44409 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0930 20:34:38.539401   44409 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0930 20:34:38.539419   44409 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0930 20:34:38.539431   44409 command_runner.go:130] > # creation as a file is not desired either.
	I0930 20:34:38.539447   44409 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0930 20:34:38.539459   44409 command_runner.go:130] > # the hostname is being managed dynamically.
	I0930 20:34:38.539468   44409 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0930 20:34:38.539474   44409 command_runner.go:130] > # ]
	I0930 20:34:38.539486   44409 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0930 20:34:38.539500   44409 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0930 20:34:38.539511   44409 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0930 20:34:38.539522   44409 command_runner.go:130] > # Each entry in the table should follow the format:
	I0930 20:34:38.539548   44409 command_runner.go:130] > #
	I0930 20:34:38.539559   44409 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0930 20:34:38.539567   44409 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0930 20:34:38.539590   44409 command_runner.go:130] > # runtime_type = "oci"
	I0930 20:34:38.539601   44409 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0930 20:34:38.539611   44409 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0930 20:34:38.539618   44409 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0930 20:34:38.539628   44409 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0930 20:34:38.539635   44409 command_runner.go:130] > # monitor_env = []
	I0930 20:34:38.539646   44409 command_runner.go:130] > # privileged_without_host_devices = false
	I0930 20:34:38.539657   44409 command_runner.go:130] > # allowed_annotations = []
	I0930 20:34:38.539669   44409 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0930 20:34:38.539678   44409 command_runner.go:130] > # Where:
	I0930 20:34:38.539690   44409 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0930 20:34:38.539702   44409 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0930 20:34:38.539716   44409 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0930 20:34:38.539728   44409 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0930 20:34:38.539737   44409 command_runner.go:130] > #   in $PATH.
	I0930 20:34:38.539747   44409 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0930 20:34:38.539758   44409 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0930 20:34:38.539768   44409 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0930 20:34:38.539777   44409 command_runner.go:130] > #   state.
	I0930 20:34:38.539787   44409 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0930 20:34:38.539800   44409 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0930 20:34:38.539810   44409 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0930 20:34:38.539819   44409 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0930 20:34:38.539830   44409 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0930 20:34:38.539840   44409 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0930 20:34:38.539848   44409 command_runner.go:130] > #   The currently recognized values are:
	I0930 20:34:38.539860   44409 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0930 20:34:38.539873   44409 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0930 20:34:38.539885   44409 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0930 20:34:38.539895   44409 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0930 20:34:38.539904   44409 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0930 20:34:38.539919   44409 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0930 20:34:38.539932   44409 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0930 20:34:38.539945   44409 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0930 20:34:38.539956   44409 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0930 20:34:38.539970   44409 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0930 20:34:38.539978   44409 command_runner.go:130] > #   deprecated option "conmon".
	I0930 20:34:38.539989   44409 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0930 20:34:38.540001   44409 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0930 20:34:38.540013   44409 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0930 20:34:38.540024   44409 command_runner.go:130] > #   should be moved to the container's cgroup
	I0930 20:34:38.540036   44409 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0930 20:34:38.540048   44409 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0930 20:34:38.540058   44409 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0930 20:34:38.540066   44409 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0930 20:34:38.540070   44409 command_runner.go:130] > #
	I0930 20:34:38.540084   44409 command_runner.go:130] > # Using the seccomp notifier feature:
	I0930 20:34:38.540094   44409 command_runner.go:130] > #
	I0930 20:34:38.540102   44409 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0930 20:34:38.540115   44409 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0930 20:34:38.540123   44409 command_runner.go:130] > #
	I0930 20:34:38.540131   44409 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0930 20:34:38.540144   44409 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0930 20:34:38.540149   44409 command_runner.go:130] > #
	I0930 20:34:38.540165   44409 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0930 20:34:38.540173   44409 command_runner.go:130] > # feature.
	I0930 20:34:38.540179   44409 command_runner.go:130] > #
	I0930 20:34:38.540190   44409 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0930 20:34:38.540203   44409 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0930 20:34:38.540214   44409 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0930 20:34:38.540223   44409 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0930 20:34:38.540229   44409 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0930 20:34:38.540233   44409 command_runner.go:130] > #
	I0930 20:34:38.540239   44409 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0930 20:34:38.540245   44409 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0930 20:34:38.540250   44409 command_runner.go:130] > #
	I0930 20:34:38.540255   44409 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0930 20:34:38.540262   44409 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0930 20:34:38.540266   44409 command_runner.go:130] > #
	I0930 20:34:38.540272   44409 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0930 20:34:38.540281   44409 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0930 20:34:38.540287   44409 command_runner.go:130] > # limitation.
	I0930 20:34:38.540299   44409 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0930 20:34:38.540311   44409 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0930 20:34:38.540315   44409 command_runner.go:130] > runtime_type = "oci"
	I0930 20:34:38.540319   44409 command_runner.go:130] > runtime_root = "/run/runc"
	I0930 20:34:38.540323   44409 command_runner.go:130] > runtime_config_path = ""
	I0930 20:34:38.540328   44409 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0930 20:34:38.540332   44409 command_runner.go:130] > monitor_cgroup = "pod"
	I0930 20:34:38.540336   44409 command_runner.go:130] > monitor_exec_cgroup = ""
	I0930 20:34:38.540340   44409 command_runner.go:130] > monitor_env = [
	I0930 20:34:38.540345   44409 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0930 20:34:38.540351   44409 command_runner.go:130] > ]
	I0930 20:34:38.540355   44409 command_runner.go:130] > privileged_without_host_devices = false
	I0930 20:34:38.540362   44409 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0930 20:34:38.540368   44409 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0930 20:34:38.540375   44409 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0930 20:34:38.540384   44409 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0930 20:34:38.540391   44409 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0930 20:34:38.540398   44409 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0930 20:34:38.540407   44409 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0930 20:34:38.540416   44409 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0930 20:34:38.540421   44409 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0930 20:34:38.540428   44409 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0930 20:34:38.540433   44409 command_runner.go:130] > # Example:
	I0930 20:34:38.540438   44409 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0930 20:34:38.540444   44409 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0930 20:34:38.540449   44409 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0930 20:34:38.540455   44409 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0930 20:34:38.540459   44409 command_runner.go:130] > # cpuset = 0
	I0930 20:34:38.540465   44409 command_runner.go:130] > # cpushares = "0-1"
	I0930 20:34:38.540468   44409 command_runner.go:130] > # Where:
	I0930 20:34:38.540473   44409 command_runner.go:130] > # The workload name is workload-type.
	I0930 20:34:38.540481   44409 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0930 20:34:38.540486   44409 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0930 20:34:38.540494   44409 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0930 20:34:38.540501   44409 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0930 20:34:38.540508   44409 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0930 20:34:38.540513   44409 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0930 20:34:38.540521   44409 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0930 20:34:38.540525   44409 command_runner.go:130] > # Default value is set to true
	I0930 20:34:38.540530   44409 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0930 20:34:38.540535   44409 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0930 20:34:38.540539   44409 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0930 20:34:38.540545   44409 command_runner.go:130] > # Default value is set to 'false'
	I0930 20:34:38.540549   44409 command_runner.go:130] > # disable_hostport_mapping = false
	I0930 20:34:38.540559   44409 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0930 20:34:38.540561   44409 command_runner.go:130] > #
	I0930 20:34:38.540567   44409 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0930 20:34:38.540578   44409 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0930 20:34:38.540587   44409 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0930 20:34:38.540599   44409 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0930 20:34:38.540608   44409 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0930 20:34:38.540613   44409 command_runner.go:130] > [crio.image]
	I0930 20:34:38.540622   44409 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0930 20:34:38.540628   44409 command_runner.go:130] > # default_transport = "docker://"
	I0930 20:34:38.540637   44409 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0930 20:34:38.540646   44409 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0930 20:34:38.540652   44409 command_runner.go:130] > # global_auth_file = ""
	I0930 20:34:38.540660   44409 command_runner.go:130] > # The image used to instantiate infra containers.
	I0930 20:34:38.540665   44409 command_runner.go:130] > # This option supports live configuration reload.
	I0930 20:34:38.540670   44409 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0930 20:34:38.540676   44409 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0930 20:34:38.540681   44409 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0930 20:34:38.540686   44409 command_runner.go:130] > # This option supports live configuration reload.
	I0930 20:34:38.540693   44409 command_runner.go:130] > # pause_image_auth_file = ""
	I0930 20:34:38.540699   44409 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0930 20:34:38.540707   44409 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0930 20:34:38.540714   44409 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0930 20:34:38.540722   44409 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0930 20:34:38.540726   44409 command_runner.go:130] > # pause_command = "/pause"
	I0930 20:34:38.540731   44409 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0930 20:34:38.540739   44409 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0930 20:34:38.540744   44409 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0930 20:34:38.540750   44409 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0930 20:34:38.540755   44409 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0930 20:34:38.540762   44409 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0930 20:34:38.540766   44409 command_runner.go:130] > # pinned_images = [
	I0930 20:34:38.540769   44409 command_runner.go:130] > # ]
	I0930 20:34:38.540775   44409 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0930 20:34:38.540783   44409 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0930 20:34:38.540789   44409 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0930 20:34:38.540795   44409 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0930 20:34:38.540801   44409 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0930 20:34:38.540807   44409 command_runner.go:130] > # signature_policy = ""
	I0930 20:34:38.540812   44409 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0930 20:34:38.540818   44409 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0930 20:34:38.540826   44409 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0930 20:34:38.540832   44409 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0930 20:34:38.540839   44409 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0930 20:34:38.540844   44409 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0930 20:34:38.540850   44409 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0930 20:34:38.540856   44409 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0930 20:34:38.540862   44409 command_runner.go:130] > # changing them here.
	I0930 20:34:38.540866   44409 command_runner.go:130] > # insecure_registries = [
	I0930 20:34:38.540869   44409 command_runner.go:130] > # ]
	I0930 20:34:38.540874   44409 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0930 20:34:38.540881   44409 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0930 20:34:38.540885   44409 command_runner.go:130] > # image_volumes = "mkdir"
	I0930 20:34:38.540891   44409 command_runner.go:130] > # Temporary directory to use for storing big files
	I0930 20:34:38.540896   44409 command_runner.go:130] > # big_files_temporary_dir = ""
	I0930 20:34:38.540902   44409 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I0930 20:34:38.540907   44409 command_runner.go:130] > # CNI plugins.
	I0930 20:34:38.540911   44409 command_runner.go:130] > [crio.network]
	I0930 20:34:38.540919   44409 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0930 20:34:38.540924   44409 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0930 20:34:38.540930   44409 command_runner.go:130] > # cni_default_network = ""
	I0930 20:34:38.540935   44409 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0930 20:34:38.540941   44409 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0930 20:34:38.540947   44409 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0930 20:34:38.540952   44409 command_runner.go:130] > # plugin_dirs = [
	I0930 20:34:38.540956   44409 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0930 20:34:38.540959   44409 command_runner.go:130] > # ]
	I0930 20:34:38.540965   44409 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0930 20:34:38.540969   44409 command_runner.go:130] > [crio.metrics]
	I0930 20:34:38.540973   44409 command_runner.go:130] > # Globally enable or disable metrics support.
	I0930 20:34:38.540979   44409 command_runner.go:130] > enable_metrics = true
	I0930 20:34:38.540984   44409 command_runner.go:130] > # Specify enabled metrics collectors.
	I0930 20:34:38.540988   44409 command_runner.go:130] > # Per default all metrics are enabled.
	I0930 20:34:38.540995   44409 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0930 20:34:38.541000   44409 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0930 20:34:38.541008   44409 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0930 20:34:38.541012   44409 command_runner.go:130] > # metrics_collectors = [
	I0930 20:34:38.541018   44409 command_runner.go:130] > # 	"operations",
	I0930 20:34:38.541023   44409 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0930 20:34:38.541027   44409 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0930 20:34:38.541031   44409 command_runner.go:130] > # 	"operations_errors",
	I0930 20:34:38.541035   44409 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0930 20:34:38.541039   44409 command_runner.go:130] > # 	"image_pulls_by_name",
	I0930 20:34:38.541043   44409 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0930 20:34:38.541049   44409 command_runner.go:130] > # 	"image_pulls_failures",
	I0930 20:34:38.541053   44409 command_runner.go:130] > # 	"image_pulls_successes",
	I0930 20:34:38.541059   44409 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0930 20:34:38.541063   44409 command_runner.go:130] > # 	"image_layer_reuse",
	I0930 20:34:38.541070   44409 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0930 20:34:38.541074   44409 command_runner.go:130] > # 	"containers_oom_total",
	I0930 20:34:38.541080   44409 command_runner.go:130] > # 	"containers_oom",
	I0930 20:34:38.541084   44409 command_runner.go:130] > # 	"processes_defunct",
	I0930 20:34:38.541087   44409 command_runner.go:130] > # 	"operations_total",
	I0930 20:34:38.541091   44409 command_runner.go:130] > # 	"operations_latency_seconds",
	I0930 20:34:38.541098   44409 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0930 20:34:38.541102   44409 command_runner.go:130] > # 	"operations_errors_total",
	I0930 20:34:38.541108   44409 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0930 20:34:38.541112   44409 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0930 20:34:38.541116   44409 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0930 20:34:38.541122   44409 command_runner.go:130] > # 	"image_pulls_success_total",
	I0930 20:34:38.541126   44409 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0930 20:34:38.541130   44409 command_runner.go:130] > # 	"containers_oom_count_total",
	I0930 20:34:38.541136   44409 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0930 20:34:38.541142   44409 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0930 20:34:38.541145   44409 command_runner.go:130] > # ]
	I0930 20:34:38.541149   44409 command_runner.go:130] > # The port on which the metrics server will listen.
	I0930 20:34:38.541155   44409 command_runner.go:130] > # metrics_port = 9090
	I0930 20:34:38.541160   44409 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0930 20:34:38.541166   44409 command_runner.go:130] > # metrics_socket = ""
	I0930 20:34:38.541171   44409 command_runner.go:130] > # The certificate for the secure metrics server.
	I0930 20:34:38.541176   44409 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0930 20:34:38.541185   44409 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0930 20:34:38.541189   44409 command_runner.go:130] > # certificate on any modification event.
	I0930 20:34:38.541197   44409 command_runner.go:130] > # metrics_cert = ""
	I0930 20:34:38.541205   44409 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0930 20:34:38.541215   44409 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0930 20:34:38.541223   44409 command_runner.go:130] > # metrics_key = ""
	I0930 20:34:38.541233   44409 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0930 20:34:38.541241   44409 command_runner.go:130] > [crio.tracing]
	I0930 20:34:38.541248   44409 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0930 20:34:38.541254   44409 command_runner.go:130] > # enable_tracing = false
	I0930 20:34:38.541262   44409 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0930 20:34:38.541268   44409 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0930 20:34:38.541281   44409 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0930 20:34:38.541289   44409 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0930 20:34:38.541298   44409 command_runner.go:130] > # CRI-O NRI configuration.
	I0930 20:34:38.541309   44409 command_runner.go:130] > [crio.nri]
	I0930 20:34:38.541319   44409 command_runner.go:130] > # Globally enable or disable NRI.
	I0930 20:34:38.541324   44409 command_runner.go:130] > # enable_nri = false
	I0930 20:34:38.541332   44409 command_runner.go:130] > # NRI socket to listen on.
	I0930 20:34:38.541342   44409 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0930 20:34:38.541349   44409 command_runner.go:130] > # NRI plugin directory to use.
	I0930 20:34:38.541359   44409 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0930 20:34:38.541366   44409 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0930 20:34:38.541374   44409 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0930 20:34:38.541380   44409 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0930 20:34:38.541387   44409 command_runner.go:130] > # nri_disable_connections = false
	I0930 20:34:38.541394   44409 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0930 20:34:38.541399   44409 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0930 20:34:38.541405   44409 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0930 20:34:38.541409   44409 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0930 20:34:38.541417   44409 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0930 20:34:38.541421   44409 command_runner.go:130] > [crio.stats]
	I0930 20:34:38.541429   44409 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0930 20:34:38.541434   44409 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0930 20:34:38.541438   44409 command_runner.go:130] > # stats_collection_period = 0
	I0930 20:34:38.541458   44409 command_runner.go:130] ! time="2024-09-30 20:34:38.499885576Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0930 20:34:38.541480   44409 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
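	The dump above shows metrics collection enabled for CRI-O (enable_metrics = true) while metrics_port is left at its commented default. A minimal Go sketch for spot-checking that Prometheus endpoint from inside the node, assuming the default port 9090 and plain HTTP since no metrics_cert is configured; this is illustrative and not part of the test harness:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// CRI-O's metrics server; 9090 is the documented default when
		// metrics_port is not set explicitly (assumption for this sketch).
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			fmt.Println("metrics endpoint not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// The full Prometheus dump is long; show only the first few hundred bytes.
		if len(body) > 400 {
			body = body[:400]
		}
		fmt.Printf("status %s\n%s\n", resp.Status, body)
	}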
	I0930 20:34:38.541550   44409 cni.go:84] Creating CNI manager for ""
	I0930 20:34:38.541563   44409 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0930 20:34:38.541573   44409 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 20:34:38.541607   44409 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-103579 NodeName:multinode-103579 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 20:34:38.541724   44409 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-103579"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
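	The config above is one multi-document YAML file covering InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration; it is copied to the node a few lines below as /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch that walks such a file and prints each document's apiVersion and kind, assuming gopkg.in/yaml.v3 is available and a local copy named kubeadm.yaml (the path is illustrative):

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // illustrative local copy of the generated config
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break // no more YAML documents
				}
				panic(err)
			}
			fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
		}
	}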
	I0930 20:34:38.541779   44409 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 20:34:38.551912   44409 command_runner.go:130] > kubeadm
	I0930 20:34:38.551939   44409 command_runner.go:130] > kubectl
	I0930 20:34:38.551945   44409 command_runner.go:130] > kubelet
	I0930 20:34:38.551959   44409 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 20:34:38.552017   44409 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 20:34:38.561170   44409 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0930 20:34:38.577026   44409 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 20:34:38.592696   44409 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0930 20:34:38.609641   44409 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I0930 20:34:38.613982   44409 command_runner.go:130] > 192.168.39.58	control-plane.minikube.internal
	I0930 20:34:38.614073   44409 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:34:38.771880   44409 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:34:38.786032   44409 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579 for IP: 192.168.39.58
	I0930 20:34:38.786057   44409 certs.go:194] generating shared ca certs ...
	I0930 20:34:38.786084   44409 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:34:38.786253   44409 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 20:34:38.786311   44409 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 20:34:38.786335   44409 certs.go:256] generating profile certs ...
	I0930 20:34:38.786443   44409 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579/client.key
	I0930 20:34:38.786526   44409 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579/apiserver.key.bac6694b
	I0930 20:34:38.786579   44409 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579/proxy-client.key
	I0930 20:34:38.786592   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 20:34:38.786611   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 20:34:38.786630   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 20:34:38.786649   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 20:34:38.786668   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 20:34:38.786688   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 20:34:38.786706   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 20:34:38.786726   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 20:34:38.786794   44409 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 20:34:38.786834   44409 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 20:34:38.786848   44409 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 20:34:38.786884   44409 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 20:34:38.786917   44409 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 20:34:38.786947   44409 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 20:34:38.787000   44409 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:34:38.787037   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:34:38.787056   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem -> /usr/share/ca-certificates/14875.pem
	I0930 20:34:38.787075   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /usr/share/ca-certificates/148752.pem
	I0930 20:34:38.787662   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 20:34:38.811486   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 20:34:38.835550   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 20:34:38.860012   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 20:34:38.883593   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0930 20:34:38.906912   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 20:34:38.931844   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 20:34:38.957905   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 20:34:38.983030   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 20:34:39.007696   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 20:34:39.031307   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 20:34:39.056001   44409 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 20:34:39.072665   44409 ssh_runner.go:195] Run: openssl version
	I0930 20:34:39.078085   44409 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0930 20:34:39.078174   44409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 20:34:39.088950   44409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:34:39.093222   44409 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:34:39.093271   44409 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:34:39.093322   44409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:34:39.098358   44409 command_runner.go:130] > b5213941
	I0930 20:34:39.098507   44409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 20:34:39.108201   44409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 20:34:39.118829   44409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 20:34:39.123386   44409 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 20:34:39.123427   44409 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 20:34:39.123469   44409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 20:34:39.129330   44409 command_runner.go:130] > 51391683
	I0930 20:34:39.129414   44409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 20:34:39.140325   44409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 20:34:39.151307   44409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 20:34:39.155593   44409 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 20:34:39.155623   44409 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 20:34:39.155680   44409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 20:34:39.161510   44409 command_runner.go:130] > 3ec20f2e
	I0930 20:34:39.161583   44409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
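	The three blocks above follow the same pattern for each CA bundle: hash the certificate with openssl x509 -hash, then symlink it into /etc/ssl/certs as <hash>.0 so OpenSSL-based clients can find it. A minimal Go sketch of that hash-and-symlink step, assuming openssl is on PATH and the process can write /etc/ssl/certs; minikube itself issues the shell commands over SSH rather than using code like this, and it points the hash link at the /etc/ssl/certs copy rather than straight at the source file:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA links pemPath into /etc/ssl/certs under its OpenSSL subject hash.
	func installCA(pemPath string) error {
		// openssl x509 -hash -noout -in <cert> prints the subject-name hash
		// OpenSSL uses to look certificates up in /etc/ssl/certs.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// Equivalent of: ln -fs <cert> /etc/ssl/certs/<hash>.0
		_ = os.Remove(link)
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}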
	I0930 20:34:39.171797   44409 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 20:34:39.176058   44409 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 20:34:39.176084   44409 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0930 20:34:39.176090   44409 command_runner.go:130] > Device: 253,1	Inode: 9431080     Links: 1
	I0930 20:34:39.176096   44409 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0930 20:34:39.176104   44409 command_runner.go:130] > Access: 2024-09-30 20:27:28.130937113 +0000
	I0930 20:34:39.176110   44409 command_runner.go:130] > Modify: 2024-09-30 20:27:28.130937113 +0000
	I0930 20:34:39.176114   44409 command_runner.go:130] > Change: 2024-09-30 20:27:28.130937113 +0000
	I0930 20:34:39.176119   44409 command_runner.go:130] >  Birth: 2024-09-30 20:27:28.130937113 +0000
	I0930 20:34:39.176169   44409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 20:34:39.181615   44409 command_runner.go:130] > Certificate will not expire
	I0930 20:34:39.181690   44409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 20:34:39.187194   44409 command_runner.go:130] > Certificate will not expire
	I0930 20:34:39.187255   44409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 20:34:39.192946   44409 command_runner.go:130] > Certificate will not expire
	I0930 20:34:39.193018   44409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 20:34:39.198291   44409 command_runner.go:130] > Certificate will not expire
	I0930 20:34:39.198463   44409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 20:34:39.203823   44409 command_runner.go:130] > Certificate will not expire
	I0930 20:34:39.203891   44409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0930 20:34:39.209906   44409 command_runner.go:130] > Certificate will not expire
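	Each -checkend 86400 call above asks openssl whether the certificate expires within the next 24 hours (86400 seconds). A minimal equivalent in Go using crypto/x509; the certificate path is illustrative:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}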
	I0930 20:34:39.209985   44409 kubeadm.go:392] StartCluster: {Name:multinode-103579 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-103579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:34:39.210077   44409 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 20:34:39.210126   44409 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 20:34:39.244119   44409 command_runner.go:130] > 4bc848d9234a844eaab9fc26b48d7f60ed55e609530c8756b11f1819637f2bec
	I0930 20:34:39.244143   44409 command_runner.go:130] > 39eb244acec3c9751b53fffd3102949734163c8b9530270bb170ba702e1cd2fe
	I0930 20:34:39.244149   44409 command_runner.go:130] > 0974451661f0737436a583f454afc0982a4121c86e7d2d0334edbcd95bfecc78
	I0930 20:34:39.244196   44409 command_runner.go:130] > cacfb622468b005a23952888b905e40fd74281c9335143ceeb7ea71797aa3bed
	I0930 20:34:39.244210   44409 command_runner.go:130] > bc4433f6912398db4cb88e66d4cb7193f26ce5c3706dcb711cb87b571a031711
	I0930 20:34:39.244216   44409 command_runner.go:130] > 80432178b988bc0350374fa988e6b8ce6388ba0c6ee71b8272138b689ab81863
	I0930 20:34:39.244227   44409 command_runner.go:130] > 25b434fd4ab00363a4e33c578eacb078c2d21fe3261e459bf946aab36e52e306
	I0930 20:34:39.244239   44409 command_runner.go:130] > 9596d6363e892d96ae7a53ca5a2dc7604d41239cb1f8bcc396dc8768356be785
	I0930 20:34:39.245623   44409 cri.go:89] found id: "4bc848d9234a844eaab9fc26b48d7f60ed55e609530c8756b11f1819637f2bec"
	I0930 20:34:39.245640   44409 cri.go:89] found id: "39eb244acec3c9751b53fffd3102949734163c8b9530270bb170ba702e1cd2fe"
	I0930 20:34:39.245644   44409 cri.go:89] found id: "0974451661f0737436a583f454afc0982a4121c86e7d2d0334edbcd95bfecc78"
	I0930 20:34:39.245647   44409 cri.go:89] found id: "cacfb622468b005a23952888b905e40fd74281c9335143ceeb7ea71797aa3bed"
	I0930 20:34:39.245650   44409 cri.go:89] found id: "bc4433f6912398db4cb88e66d4cb7193f26ce5c3706dcb711cb87b571a031711"
	I0930 20:34:39.245654   44409 cri.go:89] found id: "80432178b988bc0350374fa988e6b8ce6388ba0c6ee71b8272138b689ab81863"
	I0930 20:34:39.245656   44409 cri.go:89] found id: "25b434fd4ab00363a4e33c578eacb078c2d21fe3261e459bf946aab36e52e306"
	I0930 20:34:39.245658   44409 cri.go:89] found id: "9596d6363e892d96ae7a53ca5a2dc7604d41239cb1f8bcc396dc8768356be785"
	I0930 20:34:39.245661   44409 cri.go:89] found id: ""
	I0930 20:34:39.245698   44409 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.497505496Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728586497477568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=145da484-abf1-44a3-8749-7dd7d98809e6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.498229204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=571992b1-d587-4000-a7c1-1c405ce45fb1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.498305580Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=571992b1-d587-4000-a7c1-1c405ce45fb1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.499062993Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1efa514a1839d6c19de59ead297fc8b01dbadad2701663bd1b23f5cb33f2e4a4,PodSandboxId:8aa99d529ad115c8600e78e936d7a72a8f0044f6204d747ffb725fbd407fc1cd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727728519230342275,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxgwt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb156b23-97bc-4a08-b803-83d0793ed594,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb6c1c361bf4cf527748bfa59bca94dacb6779c506eef5330be08ee680de5d8,PodSandboxId:1d60728a0dd9a384a5e9b0539847da880ce3bd226fbfb430d5a1bad13a6ca1ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727728485660684306,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4m4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd9251-f158-4fdd-bc20-d1aac8981add,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d109a5da6112f48b12c4fdce7ca5328f2254fa60babfc88676f3a279e018ecd,PodSandboxId:9745c762550dee5bdc872905de937d5225bb9f73af060f4645cc1b7b016bc91a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727728485589953216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w95cn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdcac5d3-bdc6-45e9-b76a-8535bedc2c03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b216cdc18ef72b2e8c0cde275f96b74f5a451fea3294520dcc3a5ee59c0b93,PodSandboxId:a8475f15ba470aec7327301e7f6b72c090f1fc07ffacbdff3a5a2c583fa0ea22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727728485522511041,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dlpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d77742-c2e1-4613-bb50-3e73821120e6,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f005915b39da2769fc1c0c889208feb792bc405352af7cc3ae08e902e9fc4b0f,PodSandboxId:c32494dab655a08e2019c7ae5bbd41be6cb978f826ef6766ed7c1b2c067d2810,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727728485478810965,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e99637d1-a2fe-4459-b589-8f5743eae68b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4131c58d0bf44d1303dc4391ae014e69c758eba279b4be21c3f4a473bed9d5,PodSandboxId:3fcfde39b4a7efadc1251b3c40db99526e55c3007985a79cd9bc64f406f2085f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727728481691841019,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5782d24096fc43d20beab353275b85d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e58cad6d23e0be49e31a60ca54dad76f241fe59124086b531b42b93dd18e8a,PodSandboxId:f93877fe64e8a0dbdacdeb08bf787c2d860f3a670234700133485c883dd7af5b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727728481637377689,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a14883015d1188405ff52843d0214c8,},Annotations:map[string]string{io.kub
ernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6186b2a7a37ce6735a472a1591ff4137e2c1299aae5d9317852e7dfa79aaacd9,PodSandboxId:3c1fe4014918e36cef377a708aa2633c728055202691cb4cfa8e87648aae124f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727728481638178727,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81e0d32d0df713dd227cff0d41ac7dc6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8c696ac5ff6ecfcd8642f495e8c1946c568c1bebf2360280e1d4acc5ceaaba2,PodSandboxId:18859e9937fe831b85b17e2394a060f964a14ad0419fed4e876d4912fa2d5ad1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727728481600637174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9201a946a47fbfe2d322a33a89ecce0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9805d52edb30c2b44b5f802f59587a99803e805f55ba70004b3ecabc38c7e9ce,PodSandboxId:89ca8dd277b3eec6b63261217716c6254700f2c8b5102a207f0bcb793367f623,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727728163557050694,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxgwt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb156b23-97bc-4a08-b803-83d0793ed594,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc848d9234a844eaab9fc26b48d7f60ed55e609530c8756b11f1819637f2bec,PodSandboxId:09a4b750bc3f4c0d15e716def52649a1bb78d034a4db3e3d688120e83b858eb4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727728105122787470,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w95cn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdcac5d3-bdc6-45e9-b76a-8535bedc2c03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39eb244acec3c9751b53fffd3102949734163c8b9530270bb170ba702e1cd2fe,PodSandboxId:806660b0a1105f7fba7e1a10685769a1d90a398c41c8ad27cf891984ea5483b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727728105043395217,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e99637d1-a2fe-4459-b589-8f5743eae68b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0974451661f0737436a583f454afc0982a4121c86e7d2d0334edbcd95bfecc78,PodSandboxId:c9f44ae0002ed37f7487b40a31647dc28c91f7ad2bedc6e541977592f8268116,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727728064631404452,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dlpd,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: a6d77742-c2e1-4613-bb50-3e73821120e6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cacfb622468b005a23952888b905e40fd74281c9335143ceeb7ea71797aa3bed,PodSandboxId:7fb3cdf08702f148cabaaa0e309eb8184575c942f237de3c77d8cc53c4aeb668,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727728063431806710,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4m4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd9251-f158-4fdd-bc20
-d1aac8981add,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc4433f6912398db4cb88e66d4cb7193f26ce5c3706dcb711cb87b571a031711,PodSandboxId:43fd4ce185d2e1a7a4c956fb10f9f06536d1f77f8c1f5d943ac72029d955ea54,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727728052374672168,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a14883
015d1188405ff52843d0214c8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80432178b988bc0350374fa988e6b8ce6388ba0c6ee71b8272138b689ab81863,PodSandboxId:1913067266c997e68460587d0a1b1ea75ba0718e2c43734fe37c0fdf75a04e38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727728052344829276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9201a946a47fbfe2d322a3
3a89ecce0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b434fd4ab00363a4e33c578eacb078c2d21fe3261e459bf946aab36e52e306,PodSandboxId:8a2c2a7613b9b99f8d9a3a4b39dd1232192dd6e9a19e9a82afa1e1290e42ce85,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727728052292522260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81e0d32d0df713dd227cff0d41ac7dc6,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9596d6363e892d96ae7a53ca5a2dc7604d41239cb1f8bcc396dc8768356be785,PodSandboxId:d5cbd01102f7f062277ee18f1089f1a3ab960c046e1a57cbbf7451945964b141,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727728052245407035,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5782d24096fc43d20beab353275b85d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=571992b1-d587-4000-a7c1-1c405ce45fb1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.549340367Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5dd37aec-3f28-4466-8d78-6a8b38f077dc name=/runtime.v1.RuntimeService/Version
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.549453772Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5dd37aec-3f28-4466-8d78-6a8b38f077dc name=/runtime.v1.RuntimeService/Version
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.550866956Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fce08326-d954-4cb5-b026-59f423cb6834 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.551485810Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728586551452785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fce08326-d954-4cb5-b026-59f423cb6834 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.552248128Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07084511-2d21-4eb1-81fe-267f053e9a2b name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.552325047Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07084511-2d21-4eb1-81fe-267f053e9a2b name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.552794640Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1efa514a1839d6c19de59ead297fc8b01dbadad2701663bd1b23f5cb33f2e4a4,PodSandboxId:8aa99d529ad115c8600e78e936d7a72a8f0044f6204d747ffb725fbd407fc1cd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727728519230342275,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxgwt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb156b23-97bc-4a08-b803-83d0793ed594,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb6c1c361bf4cf527748bfa59bca94dacb6779c506eef5330be08ee680de5d8,PodSandboxId:1d60728a0dd9a384a5e9b0539847da880ce3bd226fbfb430d5a1bad13a6ca1ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727728485660684306,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4m4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd9251-f158-4fdd-bc20-d1aac8981add,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d109a5da6112f48b12c4fdce7ca5328f2254fa60babfc88676f3a279e018ecd,PodSandboxId:9745c762550dee5bdc872905de937d5225bb9f73af060f4645cc1b7b016bc91a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727728485589953216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w95cn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdcac5d3-bdc6-45e9-b76a-8535bedc2c03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b216cdc18ef72b2e8c0cde275f96b74f5a451fea3294520dcc3a5ee59c0b93,PodSandboxId:a8475f15ba470aec7327301e7f6b72c090f1fc07ffacbdff3a5a2c583fa0ea22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727728485522511041,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dlpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d77742-c2e1-4613-bb50-3e73821120e6,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f005915b39da2769fc1c0c889208feb792bc405352af7cc3ae08e902e9fc4b0f,PodSandboxId:c32494dab655a08e2019c7ae5bbd41be6cb978f826ef6766ed7c1b2c067d2810,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727728485478810965,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e99637d1-a2fe-4459-b589-8f5743eae68b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4131c58d0bf44d1303dc4391ae014e69c758eba279b4be21c3f4a473bed9d5,PodSandboxId:3fcfde39b4a7efadc1251b3c40db99526e55c3007985a79cd9bc64f406f2085f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727728481691841019,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5782d24096fc43d20beab353275b85d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e58cad6d23e0be49e31a60ca54dad76f241fe59124086b531b42b93dd18e8a,PodSandboxId:f93877fe64e8a0dbdacdeb08bf787c2d860f3a670234700133485c883dd7af5b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727728481637377689,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a14883015d1188405ff52843d0214c8,},Annotations:map[string]string{io.kub
ernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6186b2a7a37ce6735a472a1591ff4137e2c1299aae5d9317852e7dfa79aaacd9,PodSandboxId:3c1fe4014918e36cef377a708aa2633c728055202691cb4cfa8e87648aae124f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727728481638178727,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81e0d32d0df713dd227cff0d41ac7dc6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8c696ac5ff6ecfcd8642f495e8c1946c568c1bebf2360280e1d4acc5ceaaba2,PodSandboxId:18859e9937fe831b85b17e2394a060f964a14ad0419fed4e876d4912fa2d5ad1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727728481600637174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9201a946a47fbfe2d322a33a89ecce0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9805d52edb30c2b44b5f802f59587a99803e805f55ba70004b3ecabc38c7e9ce,PodSandboxId:89ca8dd277b3eec6b63261217716c6254700f2c8b5102a207f0bcb793367f623,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727728163557050694,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxgwt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb156b23-97bc-4a08-b803-83d0793ed594,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc848d9234a844eaab9fc26b48d7f60ed55e609530c8756b11f1819637f2bec,PodSandboxId:09a4b750bc3f4c0d15e716def52649a1bb78d034a4db3e3d688120e83b858eb4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727728105122787470,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w95cn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdcac5d3-bdc6-45e9-b76a-8535bedc2c03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39eb244acec3c9751b53fffd3102949734163c8b9530270bb170ba702e1cd2fe,PodSandboxId:806660b0a1105f7fba7e1a10685769a1d90a398c41c8ad27cf891984ea5483b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727728105043395217,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e99637d1-a2fe-4459-b589-8f5743eae68b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0974451661f0737436a583f454afc0982a4121c86e7d2d0334edbcd95bfecc78,PodSandboxId:c9f44ae0002ed37f7487b40a31647dc28c91f7ad2bedc6e541977592f8268116,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727728064631404452,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dlpd,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: a6d77742-c2e1-4613-bb50-3e73821120e6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cacfb622468b005a23952888b905e40fd74281c9335143ceeb7ea71797aa3bed,PodSandboxId:7fb3cdf08702f148cabaaa0e309eb8184575c942f237de3c77d8cc53c4aeb668,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727728063431806710,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4m4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd9251-f158-4fdd-bc20
-d1aac8981add,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc4433f6912398db4cb88e66d4cb7193f26ce5c3706dcb711cb87b571a031711,PodSandboxId:43fd4ce185d2e1a7a4c956fb10f9f06536d1f77f8c1f5d943ac72029d955ea54,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727728052374672168,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a14883
015d1188405ff52843d0214c8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80432178b988bc0350374fa988e6b8ce6388ba0c6ee71b8272138b689ab81863,PodSandboxId:1913067266c997e68460587d0a1b1ea75ba0718e2c43734fe37c0fdf75a04e38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727728052344829276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9201a946a47fbfe2d322a3
3a89ecce0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b434fd4ab00363a4e33c578eacb078c2d21fe3261e459bf946aab36e52e306,PodSandboxId:8a2c2a7613b9b99f8d9a3a4b39dd1232192dd6e9a19e9a82afa1e1290e42ce85,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727728052292522260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81e0d32d0df713dd227cff0d41ac7dc6,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9596d6363e892d96ae7a53ca5a2dc7604d41239cb1f8bcc396dc8768356be785,PodSandboxId:d5cbd01102f7f062277ee18f1089f1a3ab960c046e1a57cbbf7451945964b141,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727728052245407035,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5782d24096fc43d20beab353275b85d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07084511-2d21-4eb1-81fe-267f053e9a2b name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.603864200Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f7a04786-f3f1-496b-a25b-449e39467e5a name=/runtime.v1.RuntimeService/Version
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.604012346Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f7a04786-f3f1-496b-a25b-449e39467e5a name=/runtime.v1.RuntimeService/Version
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.605392465Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=04bf36da-008a-415c-8387-cff01c72d419 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.605935757Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728586605909941,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04bf36da-008a-415c-8387-cff01c72d419 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.606677043Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=791396cc-853b-4b95-81a0-3fc875598ed8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.606773331Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=791396cc-853b-4b95-81a0-3fc875598ed8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.607358702Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1efa514a1839d6c19de59ead297fc8b01dbadad2701663bd1b23f5cb33f2e4a4,PodSandboxId:8aa99d529ad115c8600e78e936d7a72a8f0044f6204d747ffb725fbd407fc1cd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727728519230342275,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxgwt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb156b23-97bc-4a08-b803-83d0793ed594,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb6c1c361bf4cf527748bfa59bca94dacb6779c506eef5330be08ee680de5d8,PodSandboxId:1d60728a0dd9a384a5e9b0539847da880ce3bd226fbfb430d5a1bad13a6ca1ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727728485660684306,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4m4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd9251-f158-4fdd-bc20-d1aac8981add,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d109a5da6112f48b12c4fdce7ca5328f2254fa60babfc88676f3a279e018ecd,PodSandboxId:9745c762550dee5bdc872905de937d5225bb9f73af060f4645cc1b7b016bc91a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727728485589953216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w95cn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdcac5d3-bdc6-45e9-b76a-8535bedc2c03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b216cdc18ef72b2e8c0cde275f96b74f5a451fea3294520dcc3a5ee59c0b93,PodSandboxId:a8475f15ba470aec7327301e7f6b72c090f1fc07ffacbdff3a5a2c583fa0ea22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727728485522511041,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dlpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d77742-c2e1-4613-bb50-3e73821120e6,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f005915b39da2769fc1c0c889208feb792bc405352af7cc3ae08e902e9fc4b0f,PodSandboxId:c32494dab655a08e2019c7ae5bbd41be6cb978f826ef6766ed7c1b2c067d2810,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727728485478810965,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e99637d1-a2fe-4459-b589-8f5743eae68b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4131c58d0bf44d1303dc4391ae014e69c758eba279b4be21c3f4a473bed9d5,PodSandboxId:3fcfde39b4a7efadc1251b3c40db99526e55c3007985a79cd9bc64f406f2085f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727728481691841019,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5782d24096fc43d20beab353275b85d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e58cad6d23e0be49e31a60ca54dad76f241fe59124086b531b42b93dd18e8a,PodSandboxId:f93877fe64e8a0dbdacdeb08bf787c2d860f3a670234700133485c883dd7af5b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727728481637377689,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a14883015d1188405ff52843d0214c8,},Annotations:map[string]string{io.kub
ernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6186b2a7a37ce6735a472a1591ff4137e2c1299aae5d9317852e7dfa79aaacd9,PodSandboxId:3c1fe4014918e36cef377a708aa2633c728055202691cb4cfa8e87648aae124f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727728481638178727,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81e0d32d0df713dd227cff0d41ac7dc6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8c696ac5ff6ecfcd8642f495e8c1946c568c1bebf2360280e1d4acc5ceaaba2,PodSandboxId:18859e9937fe831b85b17e2394a060f964a14ad0419fed4e876d4912fa2d5ad1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727728481600637174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9201a946a47fbfe2d322a33a89ecce0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9805d52edb30c2b44b5f802f59587a99803e805f55ba70004b3ecabc38c7e9ce,PodSandboxId:89ca8dd277b3eec6b63261217716c6254700f2c8b5102a207f0bcb793367f623,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727728163557050694,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxgwt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb156b23-97bc-4a08-b803-83d0793ed594,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc848d9234a844eaab9fc26b48d7f60ed55e609530c8756b11f1819637f2bec,PodSandboxId:09a4b750bc3f4c0d15e716def52649a1bb78d034a4db3e3d688120e83b858eb4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727728105122787470,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w95cn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdcac5d3-bdc6-45e9-b76a-8535bedc2c03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39eb244acec3c9751b53fffd3102949734163c8b9530270bb170ba702e1cd2fe,PodSandboxId:806660b0a1105f7fba7e1a10685769a1d90a398c41c8ad27cf891984ea5483b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727728105043395217,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e99637d1-a2fe-4459-b589-8f5743eae68b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0974451661f0737436a583f454afc0982a4121c86e7d2d0334edbcd95bfecc78,PodSandboxId:c9f44ae0002ed37f7487b40a31647dc28c91f7ad2bedc6e541977592f8268116,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727728064631404452,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dlpd,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: a6d77742-c2e1-4613-bb50-3e73821120e6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cacfb622468b005a23952888b905e40fd74281c9335143ceeb7ea71797aa3bed,PodSandboxId:7fb3cdf08702f148cabaaa0e309eb8184575c942f237de3c77d8cc53c4aeb668,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727728063431806710,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4m4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd9251-f158-4fdd-bc20
-d1aac8981add,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc4433f6912398db4cb88e66d4cb7193f26ce5c3706dcb711cb87b571a031711,PodSandboxId:43fd4ce185d2e1a7a4c956fb10f9f06536d1f77f8c1f5d943ac72029d955ea54,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727728052374672168,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a14883
015d1188405ff52843d0214c8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80432178b988bc0350374fa988e6b8ce6388ba0c6ee71b8272138b689ab81863,PodSandboxId:1913067266c997e68460587d0a1b1ea75ba0718e2c43734fe37c0fdf75a04e38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727728052344829276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9201a946a47fbfe2d322a3
3a89ecce0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b434fd4ab00363a4e33c578eacb078c2d21fe3261e459bf946aab36e52e306,PodSandboxId:8a2c2a7613b9b99f8d9a3a4b39dd1232192dd6e9a19e9a82afa1e1290e42ce85,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727728052292522260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81e0d32d0df713dd227cff0d41ac7dc6,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9596d6363e892d96ae7a53ca5a2dc7604d41239cb1f8bcc396dc8768356be785,PodSandboxId:d5cbd01102f7f062277ee18f1089f1a3ab960c046e1a57cbbf7451945964b141,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727728052245407035,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5782d24096fc43d20beab353275b85d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=791396cc-853b-4b95-81a0-3fc875598ed8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.651639203Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c0862bf-2f3a-445d-a088-be080eb60b5c name=/runtime.v1.RuntimeService/Version
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.651738581Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c0862bf-2f3a-445d-a088-be080eb60b5c name=/runtime.v1.RuntimeService/Version
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.653703424Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=905a5d2c-1c35-44be-a458-8a948b007509 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.654163632Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728586654138793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=905a5d2c-1c35-44be-a458-8a948b007509 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.654707083Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e70d3953-60f3-468c-bd7d-d549f46a5f03 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.654792246Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e70d3953-60f3-468c-bd7d-d549f46a5f03 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:36:26 multinode-103579 crio[2732]: time="2024-09-30 20:36:26.655216417Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1efa514a1839d6c19de59ead297fc8b01dbadad2701663bd1b23f5cb33f2e4a4,PodSandboxId:8aa99d529ad115c8600e78e936d7a72a8f0044f6204d747ffb725fbd407fc1cd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727728519230342275,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxgwt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb156b23-97bc-4a08-b803-83d0793ed594,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb6c1c361bf4cf527748bfa59bca94dacb6779c506eef5330be08ee680de5d8,PodSandboxId:1d60728a0dd9a384a5e9b0539847da880ce3bd226fbfb430d5a1bad13a6ca1ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727728485660684306,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4m4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd9251-f158-4fdd-bc20-d1aac8981add,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d109a5da6112f48b12c4fdce7ca5328f2254fa60babfc88676f3a279e018ecd,PodSandboxId:9745c762550dee5bdc872905de937d5225bb9f73af060f4645cc1b7b016bc91a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727728485589953216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w95cn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdcac5d3-bdc6-45e9-b76a-8535bedc2c03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b216cdc18ef72b2e8c0cde275f96b74f5a451fea3294520dcc3a5ee59c0b93,PodSandboxId:a8475f15ba470aec7327301e7f6b72c090f1fc07ffacbdff3a5a2c583fa0ea22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727728485522511041,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dlpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d77742-c2e1-4613-bb50-3e73821120e6,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f005915b39da2769fc1c0c889208feb792bc405352af7cc3ae08e902e9fc4b0f,PodSandboxId:c32494dab655a08e2019c7ae5bbd41be6cb978f826ef6766ed7c1b2c067d2810,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727728485478810965,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e99637d1-a2fe-4459-b589-8f5743eae68b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4131c58d0bf44d1303dc4391ae014e69c758eba279b4be21c3f4a473bed9d5,PodSandboxId:3fcfde39b4a7efadc1251b3c40db99526e55c3007985a79cd9bc64f406f2085f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727728481691841019,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5782d24096fc43d20beab353275b85d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e58cad6d23e0be49e31a60ca54dad76f241fe59124086b531b42b93dd18e8a,PodSandboxId:f93877fe64e8a0dbdacdeb08bf787c2d860f3a670234700133485c883dd7af5b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727728481637377689,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a14883015d1188405ff52843d0214c8,},Annotations:map[string]string{io.kub
ernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6186b2a7a37ce6735a472a1591ff4137e2c1299aae5d9317852e7dfa79aaacd9,PodSandboxId:3c1fe4014918e36cef377a708aa2633c728055202691cb4cfa8e87648aae124f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727728481638178727,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81e0d32d0df713dd227cff0d41ac7dc6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8c696ac5ff6ecfcd8642f495e8c1946c568c1bebf2360280e1d4acc5ceaaba2,PodSandboxId:18859e9937fe831b85b17e2394a060f964a14ad0419fed4e876d4912fa2d5ad1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727728481600637174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9201a946a47fbfe2d322a33a89ecce0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9805d52edb30c2b44b5f802f59587a99803e805f55ba70004b3ecabc38c7e9ce,PodSandboxId:89ca8dd277b3eec6b63261217716c6254700f2c8b5102a207f0bcb793367f623,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727728163557050694,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxgwt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb156b23-97bc-4a08-b803-83d0793ed594,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc848d9234a844eaab9fc26b48d7f60ed55e609530c8756b11f1819637f2bec,PodSandboxId:09a4b750bc3f4c0d15e716def52649a1bb78d034a4db3e3d688120e83b858eb4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727728105122787470,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w95cn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdcac5d3-bdc6-45e9-b76a-8535bedc2c03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39eb244acec3c9751b53fffd3102949734163c8b9530270bb170ba702e1cd2fe,PodSandboxId:806660b0a1105f7fba7e1a10685769a1d90a398c41c8ad27cf891984ea5483b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727728105043395217,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e99637d1-a2fe-4459-b589-8f5743eae68b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0974451661f0737436a583f454afc0982a4121c86e7d2d0334edbcd95bfecc78,PodSandboxId:c9f44ae0002ed37f7487b40a31647dc28c91f7ad2bedc6e541977592f8268116,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727728064631404452,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dlpd,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: a6d77742-c2e1-4613-bb50-3e73821120e6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cacfb622468b005a23952888b905e40fd74281c9335143ceeb7ea71797aa3bed,PodSandboxId:7fb3cdf08702f148cabaaa0e309eb8184575c942f237de3c77d8cc53c4aeb668,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727728063431806710,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4m4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd9251-f158-4fdd-bc20
-d1aac8981add,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc4433f6912398db4cb88e66d4cb7193f26ce5c3706dcb711cb87b571a031711,PodSandboxId:43fd4ce185d2e1a7a4c956fb10f9f06536d1f77f8c1f5d943ac72029d955ea54,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727728052374672168,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a14883
015d1188405ff52843d0214c8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80432178b988bc0350374fa988e6b8ce6388ba0c6ee71b8272138b689ab81863,PodSandboxId:1913067266c997e68460587d0a1b1ea75ba0718e2c43734fe37c0fdf75a04e38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727728052344829276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9201a946a47fbfe2d322a3
3a89ecce0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b434fd4ab00363a4e33c578eacb078c2d21fe3261e459bf946aab36e52e306,PodSandboxId:8a2c2a7613b9b99f8d9a3a4b39dd1232192dd6e9a19e9a82afa1e1290e42ce85,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727728052292522260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81e0d32d0df713dd227cff0d41ac7dc6,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9596d6363e892d96ae7a53ca5a2dc7604d41239cb1f8bcc396dc8768356be785,PodSandboxId:d5cbd01102f7f062277ee18f1089f1a3ab960c046e1a57cbbf7451945964b141,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727728052245407035,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5782d24096fc43d20beab353275b85d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e70d3953-60f3-468c-bd7d-d549f46a5f03 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1efa514a1839d       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   8aa99d529ad11       busybox-7dff88458-vxgwt
	bbb6c1c361bf4       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   1d60728a0dd9a       kindnet-4m4kb
	2d109a5da6112       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   1                   9745c762550de       coredns-7c65d6cfc9-w95cn
	49b216cdc18ef       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   a8475f15ba470       kube-proxy-9dlpd
	f005915b39da2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   c32494dab655a       storage-provisioner
	4a4131c58d0bf       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   3fcfde39b4a7e       kube-scheduler-multinode-103579
	6186b2a7a37ce       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   3c1fe4014918e       etcd-multinode-103579
	52e58cad6d23e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   f93877fe64e8a       kube-controller-manager-multinode-103579
	d8c696ac5ff6e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   18859e9937fe8       kube-apiserver-multinode-103579
	9805d52edb30c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   89ca8dd277b3e       busybox-7dff88458-vxgwt
	4bc848d9234a8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      8 minutes ago        Exited              coredns                   0                   09a4b750bc3f4       coredns-7c65d6cfc9-w95cn
	39eb244acec3c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   806660b0a1105       storage-provisioner
	0974451661f07       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   c9f44ae0002ed       kube-proxy-9dlpd
	cacfb622468b0       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   7fb3cdf08702f       kindnet-4m4kb
	bc4433f691239       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   43fd4ce185d2e       kube-controller-manager-multinode-103579
	80432178b988b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   1913067266c99       kube-apiserver-multinode-103579
	25b434fd4ab00       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   8a2c2a7613b9b       etcd-multinode-103579
	9596d6363e892       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   d5cbd01102f7f       kube-scheduler-multinode-103579
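	
	The rows above are the CRI view of the primary node as captured by the log collector. If the cluster were still running, roughly the same listing could be reproduced over SSH with crictl; this is a sketch, assuming the minikube profile name matches the node name shown in the table:
	
	  # list all containers, including the exited first-boot instances
	  minikube ssh -p multinode-103579 "sudo crictl ps -a"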
	
	
	==> coredns [2d109a5da6112f48b12c4fdce7ca5328f2254fa60babfc88676f3a279e018ecd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:42217 - 24013 "HINFO IN 4147206565910182645.8023849442997298152. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012859188s
	
	
	==> coredns [4bc848d9234a844eaab9fc26b48d7f60ed55e609530c8756b11f1819637f2bec] <==
	[INFO] 10.244.0.3:56430 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001972052s
	[INFO] 10.244.0.3:54196 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000061891s
	[INFO] 10.244.0.3:44650 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000040748s
	[INFO] 10.244.0.3:46731 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001515008s
	[INFO] 10.244.0.3:33663 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088627s
	[INFO] 10.244.0.3:49750 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000041423s
	[INFO] 10.244.0.3:39612 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000038332s
	[INFO] 10.244.1.2:59098 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143481s
	[INFO] 10.244.1.2:56880 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000082918s
	[INFO] 10.244.1.2:49241 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066136s
	[INFO] 10.244.1.2:46960 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064178s
	[INFO] 10.244.0.3:56075 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123559s
	[INFO] 10.244.0.3:37605 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009211s
	[INFO] 10.244.0.3:45177 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079246s
	[INFO] 10.244.0.3:47750 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071686s
	[INFO] 10.244.1.2:51863 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144718s
	[INFO] 10.244.1.2:34553 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000607405s
	[INFO] 10.244.1.2:60118 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000159285s
	[INFO] 10.244.1.2:57388 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00017183s
	[INFO] 10.244.0.3:57017 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166069s
	[INFO] 10.244.0.3:36642 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001969s
	[INFO] 10.244.0.3:33680 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007497s
	[INFO] 10.244.0.3:39556 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000103466s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
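	
	The query log above shows CoreDNS answering cluster-internal and host.minikube.internal lookups before receiving SIGTERM. A quick way to re-check in-cluster resolution against the restarted instance is a one-off busybox pod; a sketch, assuming the kubeconfig context carries the profile name:
	
	  kubectl --context multinode-103579 run dnscheck --rm -it --restart=Never \
	    --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default.svc.cluster.local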
	
	
	==> describe nodes <==
	Name:               multinode-103579
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-103579
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=multinode-103579
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T20_27_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:27:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-103579
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:36:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:34:45 +0000   Mon, 30 Sep 2024 20:27:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:34:45 +0000   Mon, 30 Sep 2024 20:27:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:34:45 +0000   Mon, 30 Sep 2024 20:27:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:34:45 +0000   Mon, 30 Sep 2024 20:28:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    multinode-103579
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 16f98965064b4362bf3244a75d525e39
	  System UUID:                16f98965-064b-4362-bf32-44a75d525e39
	  Boot ID:                    0c7bbc54-a3ed-4fe0-9039-f207a716caf8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vxgwt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 coredns-7c65d6cfc9-w95cn                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m45s
	  kube-system                 etcd-multinode-103579                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m51s
	  kube-system                 kindnet-4m4kb                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m45s
	  kube-system                 kube-apiserver-multinode-103579             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m50s
	  kube-system                 kube-controller-manager-multinode-103579    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m50s
	  kube-system                 kube-proxy-9dlpd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m45s
	  kube-system                 kube-scheduler-multinode-103579             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m51s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m42s                kube-proxy       
	  Normal  Starting                 101s                 kube-proxy       
	  Normal  NodeHasSufficientPID     8m50s                kubelet          Node multinode-103579 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m50s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m50s                kubelet          Node multinode-103579 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m50s                kubelet          Node multinode-103579 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m50s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m46s                node-controller  Node multinode-103579 event: Registered Node multinode-103579 in Controller
	  Normal  NodeReady                8m3s                 kubelet          Node multinode-103579 status is now: NodeReady
	  Normal  Starting                 107s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  106s (x8 over 107s)  kubelet          Node multinode-103579 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s (x8 over 107s)  kubelet          Node multinode-103579 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s (x7 over 107s)  kubelet          Node multinode-103579 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  106s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           99s                  node-controller  Node multinode-103579 event: Registered Node multinode-103579 in Controller
	
	
	Name:               multinode-103579-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-103579-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=multinode-103579
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T20_35_27_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:35:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-103579-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:36:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:35:57 +0000   Mon, 30 Sep 2024 20:35:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:35:57 +0000   Mon, 30 Sep 2024 20:35:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:35:57 +0000   Mon, 30 Sep 2024 20:35:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:35:57 +0000   Mon, 30 Sep 2024 20:35:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.212
	  Hostname:    multinode-103579-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 948c8778bf224e5dbd87a5ddef6634c2
	  System UUID:                948c8778-bf22-4e5d-bd87-a5ddef6634c2
	  Boot ID:                    3d6a4ceb-0a59-4f88-9395-96b39349ec4f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7tbhk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kindnet-phlcl              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m32s
	  kube-system                 kube-proxy-b9f89           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m25s                  kube-proxy  
	  Normal  Starting                 55s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m32s (x2 over 7m32s)  kubelet     Node multinode-103579-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m32s (x2 over 7m32s)  kubelet     Node multinode-103579-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m32s (x2 over 7m32s)  kubelet     Node multinode-103579-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m32s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m11s                  kubelet     Node multinode-103579-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  61s (x2 over 61s)      kubelet     Node multinode-103579-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x2 over 61s)      kubelet     Node multinode-103579-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x2 over 61s)      kubelet     Node multinode-103579-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  61s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                41s                    kubelet     Node multinode-103579-m02 status is now: NodeReady
	
	
	Name:               multinode-103579-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-103579-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=multinode-103579
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T20_36_06_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:36:05 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-103579-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:36:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:36:23 +0000   Mon, 30 Sep 2024 20:36:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:36:23 +0000   Mon, 30 Sep 2024 20:36:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:36:23 +0000   Mon, 30 Sep 2024 20:36:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:36:23 +0000   Mon, 30 Sep 2024 20:36:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.237
	  Hostname:    multinode-103579-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 21b81d15dd284d0483819cc54bf2c1c5
	  System UUID:                21b81d15-dd28-4d04-8381-9cc54bf2c1c5
	  Boot ID:                    19c9eef9-5eb9-40ca-a74c-58a364fb71f4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-ns772       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m36s
	  kube-system                 kube-proxy-lpb89    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m30s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m41s                  kube-proxy  
	  Normal  NodeHasNoDiskPressure    6m36s (x2 over 6m36s)  kubelet     Node multinode-103579-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m36s (x2 over 6m36s)  kubelet     Node multinode-103579-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m36s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m36s (x2 over 6m36s)  kubelet     Node multinode-103579-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                6m16s                  kubelet     Node multinode-103579-m03 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  5m46s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m46s                  kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    5m45s (x2 over 5m46s)  kubelet     Node multinode-103579-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m45s (x2 over 5m46s)  kubelet     Node multinode-103579-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m45s (x2 over 5m46s)  kubelet     Node multinode-103579-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m27s                  kubelet     Node multinode-103579-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet     Node multinode-103579-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet     Node multinode-103579-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet     Node multinode-103579-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                     kubelet     Node multinode-103579-m03 status is now: NodeReady
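	
	The three blocks above are per-node "kubectl describe node" output gathered at 20:36. The same readiness and CIDR summary can be pulled in one pass; a sketch against the same context assumption as above:
	
	  kubectl --context multinode-103579 get nodes -o wide
	  # per-node conditions, e.g. for the node that was just re-added
	  kubectl --context multinode-103579 get node multinode-103579-m03 \
	    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'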
	
	
	==> dmesg <==
	[  +0.056585] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060471] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.162790] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.140968] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.279412] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.836318] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.430478] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.066590] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.994488] systemd-fstab-generator[1216]: Ignoring "noauto" option for root device
	[  +0.084614] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.677350] systemd-fstab-generator[1324]: Ignoring "noauto" option for root device
	[  +0.100957] kauditd_printk_skb: 18 callbacks suppressed
	[Sep30 20:28] kauditd_printk_skb: 69 callbacks suppressed
	[Sep30 20:29] kauditd_printk_skb: 12 callbacks suppressed
	[Sep30 20:34] systemd-fstab-generator[2650]: Ignoring "noauto" option for root device
	[  +0.151813] systemd-fstab-generator[2662]: Ignoring "noauto" option for root device
	[  +0.179087] systemd-fstab-generator[2682]: Ignoring "noauto" option for root device
	[  +0.147072] systemd-fstab-generator[2694]: Ignoring "noauto" option for root device
	[  +0.272256] systemd-fstab-generator[2722]: Ignoring "noauto" option for root device
	[  +0.662658] systemd-fstab-generator[2815]: Ignoring "noauto" option for root device
	[  +2.058270] systemd-fstab-generator[2934]: Ignoring "noauto" option for root device
	[  +4.708226] kauditd_printk_skb: 184 callbacks suppressed
	[Sep30 20:35] systemd-fstab-generator[3784]: Ignoring "noauto" option for root device
	[  +0.100078] kauditd_printk_skb: 34 callbacks suppressed
	[ +16.932038] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [25b434fd4ab00363a4e33c578eacb078c2d21fe3261e459bf946aab36e52e306] <==
	{"level":"info","ts":"2024-09-30T20:27:33.508617Z","caller":"traceutil/trace.go:171","msg":"trace[1152036888] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:1; }","duration":"135.123297ms","start":"2024-09-30T20:27:33.373484Z","end":"2024-09-30T20:27:33.508607Z","steps":["trace[1152036888] 'range keys from in-memory index tree'  (duration: 126.463ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T20:27:33.500091Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.658202ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" count_only:true ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2024-09-30T20:27:33.508875Z","caller":"traceutil/trace.go:171","msg":"trace[2324633] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; response_count:0; response_revision:1; }","duration":"117.439373ms","start":"2024-09-30T20:27:33.391429Z","end":"2024-09-30T20:27:33.508868Z","steps":["trace[2324633] 'count revisions from in-memory index tree'  (duration: 108.625259ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T20:27:33.500117Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.749116ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" limit:10000 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2024-09-30T20:27:33.509195Z","caller":"traceutil/trace.go:171","msg":"trace[1077031957] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; response_count:0; response_revision:1; }","duration":"117.82279ms","start":"2024-09-30T20:27:33.391364Z","end":"2024-09-30T20:27:33.509187Z","steps":["trace[1077031957] 'range keys from in-memory index tree'  (duration: 108.644751ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T20:27:47.808705Z","caller":"traceutil/trace.go:171","msg":"trace[1664084647] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"112.203367ms","start":"2024-09-30T20:27:47.696483Z","end":"2024-09-30T20:27:47.808686Z","steps":["trace[1664084647] 'process raft request'  (duration: 111.812806ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T20:28:55.785866Z","caller":"traceutil/trace.go:171","msg":"trace[714959990] linearizableReadLoop","detail":"{readStateIndex:474; appliedIndex:473; }","duration":"129.973954ms","start":"2024-09-30T20:28:55.655862Z","end":"2024-09-30T20:28:55.785836Z","steps":["trace[714959990] 'read index received'  (duration: 111.634105ms)","trace[714959990] 'applied index is now lower than readState.Index'  (duration: 18.338709ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-30T20:28:55.786742Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.871296ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-103579-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T20:28:55.786820Z","caller":"traceutil/trace.go:171","msg":"trace[1690217488] range","detail":"{range_begin:/registry/minions/multinode-103579-m02; range_end:; response_count:0; response_revision:446; }","duration":"130.970696ms","start":"2024-09-30T20:28:55.655840Z","end":"2024-09-30T20:28:55.786811Z","steps":["trace[1690217488] 'agreement among raft nodes before linearized reading'  (duration: 130.835675ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T20:29:01.495374Z","caller":"traceutil/trace.go:171","msg":"trace[268712338] transaction","detail":"{read_only:false; response_revision:482; number_of_response:1; }","duration":"118.074643ms","start":"2024-09-30T20:29:01.377282Z","end":"2024-09-30T20:29:01.495357Z","steps":["trace[268712338] 'process raft request'  (duration: 117.726637ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T20:29:51.336810Z","caller":"traceutil/trace.go:171","msg":"trace[491805500] transaction","detail":"{read_only:false; response_revision:583; number_of_response:1; }","duration":"105.450132ms","start":"2024-09-30T20:29:51.231269Z","end":"2024-09-30T20:29:51.336719Z","steps":["trace[491805500] 'process raft request'  (duration: 105.325029ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T20:29:51.463179Z","caller":"traceutil/trace.go:171","msg":"trace[759165473] linearizableReadLoop","detail":"{readStateIndex:625; appliedIndex:624; }","duration":"117.57861ms","start":"2024-09-30T20:29:51.345586Z","end":"2024-09-30T20:29:51.463165Z","steps":["trace[759165473] 'read index received'  (duration: 115.553229ms)","trace[759165473] 'applied index is now lower than readState.Index'  (duration: 2.024908ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-30T20:29:51.463301Z","caller":"traceutil/trace.go:171","msg":"trace[671129766] transaction","detail":"{read_only:false; response_revision:584; number_of_response:1; }","duration":"121.793555ms","start":"2024-09-30T20:29:51.341498Z","end":"2024-09-30T20:29:51.463292Z","steps":["trace[671129766] 'process raft request'  (duration: 119.687126ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T20:29:51.463773Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.03568ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-103579-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T20:29:51.463839Z","caller":"traceutil/trace.go:171","msg":"trace[489517102] range","detail":"{range_begin:/registry/minions/multinode-103579-m03; range_end:; response_count:0; response_revision:584; }","duration":"118.24453ms","start":"2024-09-30T20:29:51.345583Z","end":"2024-09-30T20:29:51.463827Z","steps":["trace[489517102] 'agreement among raft nodes before linearized reading'  (duration: 117.977029ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T20:33:06.073482Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-30T20:33:06.073618Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-103579","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.58:2380"],"advertise-client-urls":["https://192.168.39.58:2379"]}
	{"level":"warn","ts":"2024-09-30T20:33:06.073752Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T20:33:06.073862Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T20:33:06.159705Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.58:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T20:33:06.159857Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.58:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-30T20:33:06.160235Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ded7f9817c909548","current-leader-member-id":"ded7f9817c909548"}
	{"level":"info","ts":"2024-09-30T20:33:06.163247Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.58:2380"}
	{"level":"info","ts":"2024-09-30T20:33:06.163463Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.58:2380"}
	{"level":"info","ts":"2024-09-30T20:33:06.163526Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-103579","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.58:2380"],"advertise-client-urls":["https://192.168.39.58:2379"]}
	
	
	==> etcd [6186b2a7a37ce6735a472a1591ff4137e2c1299aae5d9317852e7dfa79aaacd9] <==
	{"level":"info","ts":"2024-09-30T20:34:42.179353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 switched to configuration voters=(16057577330948740424)"}
	{"level":"info","ts":"2024-09-30T20:34:42.190613Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"91c640bc00cd2aea","local-member-id":"ded7f9817c909548","added-peer-id":"ded7f9817c909548","added-peer-peer-urls":["https://192.168.39.58:2380"]}
	{"level":"info","ts":"2024-09-30T20:34:42.191044Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"91c640bc00cd2aea","local-member-id":"ded7f9817c909548","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T20:34:42.191133Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T20:34:42.200716Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-30T20:34:42.223805Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ded7f9817c909548","initial-advertise-peer-urls":["https://192.168.39.58:2380"],"listen-peer-urls":["https://192.168.39.58:2380"],"advertise-client-urls":["https://192.168.39.58:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.58:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-30T20:34:42.224058Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-30T20:34:42.203159Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.58:2380"}
	{"level":"info","ts":"2024-09-30T20:34:42.228521Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.58:2380"}
	{"level":"info","ts":"2024-09-30T20:34:43.384042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-30T20:34:43.384098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-30T20:34:43.384144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 received MsgPreVoteResp from ded7f9817c909548 at term 2"}
	{"level":"info","ts":"2024-09-30T20:34:43.384162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 became candidate at term 3"}
	{"level":"info","ts":"2024-09-30T20:34:43.384167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 received MsgVoteResp from ded7f9817c909548 at term 3"}
	{"level":"info","ts":"2024-09-30T20:34:43.384176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 became leader at term 3"}
	{"level":"info","ts":"2024-09-30T20:34:43.384184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ded7f9817c909548 elected leader ded7f9817c909548 at term 3"}
	{"level":"info","ts":"2024-09-30T20:34:43.391160Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ded7f9817c909548","local-member-attributes":"{Name:multinode-103579 ClientURLs:[https://192.168.39.58:2379]}","request-path":"/0/members/ded7f9817c909548/attributes","cluster-id":"91c640bc00cd2aea","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T20:34:43.391410Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T20:34:43.391866Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T20:34:43.392791Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T20:34:43.393351Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T20:34:43.393395Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-30T20:34:43.393837Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-30T20:34:43.394596Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T20:34:43.395483Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.58:2379"}
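	
	The restarted member above elects itself leader of the single-voter cluster at term 3 and resumes serving client traffic. Its health can be probed from inside the static pod with etcdctl, reusing the certificate paths printed in the startup line; a sketch, not part of the captured run:
	
	  kubectl --context multinode-103579 -n kube-system exec etcd-multinode-103579 -- etcdctl \
	    --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint status --write-out=table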
	
	
	==> kernel <==
	 20:36:27 up 9 min,  0 users,  load average: 0.05, 0.14, 0.09
	Linux multinode-103579 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [bbb6c1c361bf4cf527748bfa59bca94dacb6779c506eef5330be08ee680de5d8] <==
	I0930 20:35:56.516144       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 20:35:56.516304       1 main.go:299] handling current node
	I0930 20:35:56.516345       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I0930 20:35:56.516411       1 main.go:322] Node multinode-103579-m02 has CIDR [10.244.1.0/24] 
	I0930 20:35:56.516589       1 main.go:295] Handling node with IPs: map[192.168.39.237:{}]
	I0930 20:35:56.516631       1 main.go:322] Node multinode-103579-m03 has CIDR [10.244.5.0/24] 
	I0930 20:36:06.515411       1 main.go:295] Handling node with IPs: map[192.168.39.237:{}]
	I0930 20:36:06.515515       1 main.go:322] Node multinode-103579-m03 has CIDR [10.244.2.0/24] 
	I0930 20:36:06.515711       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.237 Flags: [] Table: 0} 
	I0930 20:36:06.516412       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 20:36:06.516431       1 main.go:299] handling current node
	I0930 20:36:06.516458       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I0930 20:36:06.516466       1 main.go:322] Node multinode-103579-m02 has CIDR [10.244.1.0/24] 
	I0930 20:36:16.515231       1 main.go:295] Handling node with IPs: map[192.168.39.237:{}]
	I0930 20:36:16.515279       1 main.go:322] Node multinode-103579-m03 has CIDR [10.244.2.0/24] 
	I0930 20:36:16.515469       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 20:36:16.515488       1 main.go:299] handling current node
	I0930 20:36:16.515505       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I0930 20:36:16.515510       1 main.go:322] Node multinode-103579-m02 has CIDR [10.244.1.0/24] 
	I0930 20:36:26.517117       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 20:36:26.517180       1 main.go:299] handling current node
	I0930 20:36:26.517221       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I0930 20:36:26.517230       1 main.go:322] Node multinode-103579-m02 has CIDR [10.244.1.0/24] 
	I0930 20:36:26.517432       1 main.go:295] Handling node with IPs: map[192.168.39.237:{}]
	I0930 20:36:26.517466       1 main.go:322] Node multinode-103579-m03 has CIDR [10.244.2.0/24] 
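	
	kindnet keeps one host route per remote node PodCIDR and re-adds it when a node's range changes (the "Adding route ... 10.244.2.0/24" line above, after multinode-103579-m03 was re-registered). The resulting routing table on the control-plane VM can be inspected directly; a sketch, same profile-name assumption as earlier:
	
	  minikube ssh -p multinode-103579 "ip route show | grep 10.244"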
	
	
	==> kindnet [cacfb622468b005a23952888b905e40fd74281c9335143ceeb7ea71797aa3bed] <==
	I0930 20:32:24.415308       1 main.go:322] Node multinode-103579-m03 has CIDR [10.244.5.0/24] 
	I0930 20:32:34.406844       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I0930 20:32:34.407011       1 main.go:322] Node multinode-103579-m02 has CIDR [10.244.1.0/24] 
	I0930 20:32:34.407172       1 main.go:295] Handling node with IPs: map[192.168.39.237:{}]
	I0930 20:32:34.407198       1 main.go:322] Node multinode-103579-m03 has CIDR [10.244.5.0/24] 
	I0930 20:32:34.407268       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 20:32:34.407290       1 main.go:299] handling current node
	I0930 20:32:44.407318       1 main.go:295] Handling node with IPs: map[192.168.39.237:{}]
	I0930 20:32:44.407424       1 main.go:322] Node multinode-103579-m03 has CIDR [10.244.5.0/24] 
	I0930 20:32:44.407571       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 20:32:44.407604       1 main.go:299] handling current node
	I0930 20:32:44.407627       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I0930 20:32:44.407653       1 main.go:322] Node multinode-103579-m02 has CIDR [10.244.1.0/24] 
	I0930 20:32:54.410730       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I0930 20:32:54.410842       1 main.go:322] Node multinode-103579-m02 has CIDR [10.244.1.0/24] 
	I0930 20:32:54.411066       1 main.go:295] Handling node with IPs: map[192.168.39.237:{}]
	I0930 20:32:54.411097       1 main.go:322] Node multinode-103579-m03 has CIDR [10.244.5.0/24] 
	I0930 20:32:54.411190       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 20:32:54.411222       1 main.go:299] handling current node
	I0930 20:33:04.415180       1 main.go:295] Handling node with IPs: map[192.168.39.237:{}]
	I0930 20:33:04.415329       1 main.go:322] Node multinode-103579-m03 has CIDR [10.244.5.0/24] 
	I0930 20:33:04.415476       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 20:33:04.415501       1 main.go:299] handling current node
	I0930 20:33:04.415527       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I0930 20:33:04.415543       1 main.go:322] Node multinode-103579-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [80432178b988bc0350374fa988e6b8ce6388ba0c6ee71b8272138b689ab81863] <==
	I0930 20:27:36.905795       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0930 20:27:37.501803       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0930 20:27:37.516512       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0930 20:27:37.531904       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0930 20:27:42.408194       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0930 20:27:42.659767       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0930 20:29:24.409411       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:35044: use of closed network connection
	E0930 20:29:24.577200       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:35064: use of closed network connection
	E0930 20:29:24.743619       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:35070: use of closed network connection
	E0930 20:29:24.915054       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:35090: use of closed network connection
	E0930 20:29:25.080858       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:35120: use of closed network connection
	E0930 20:29:25.240632       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:35142: use of closed network connection
	E0930 20:29:25.510701       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:35172: use of closed network connection
	E0930 20:29:25.669211       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:35196: use of closed network connection
	E0930 20:29:25.844361       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:35218: use of closed network connection
	E0930 20:29:26.020863       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:35238: use of closed network connection
	I0930 20:33:06.073538       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0930 20:33:06.082768       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:33:06.087339       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:33:06.087654       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:33:06.088705       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:33:06.089544       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:33:06.089583       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:33:06.089615       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:33:06.089648       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d8c696ac5ff6ecfcd8642f495e8c1946c568c1bebf2360280e1d4acc5ceaaba2] <==
	I0930 20:34:44.873745       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0930 20:34:44.873856       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0930 20:34:44.874587       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0930 20:34:44.907184       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0930 20:34:44.927013       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0930 20:34:44.927200       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0930 20:34:44.927345       1 shared_informer.go:320] Caches are synced for configmaps
	I0930 20:34:44.929234       1 aggregator.go:171] initial CRD sync complete...
	I0930 20:34:44.929305       1 autoregister_controller.go:144] Starting autoregister controller
	I0930 20:34:44.929329       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0930 20:34:44.929353       1 cache.go:39] Caches are synced for autoregister controller
	I0930 20:34:44.944033       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0930 20:34:44.944523       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0930 20:34:44.944556       1 policy_source.go:224] refreshing policies
	I0930 20:34:44.947541       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0930 20:34:44.958031       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0930 20:34:44.969478       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0930 20:34:45.776050       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0930 20:34:46.952915       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0930 20:34:47.078784       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0930 20:34:47.092293       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0930 20:34:47.186617       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0930 20:34:47.202018       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0930 20:34:48.187504       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0930 20:34:48.385684       1 controller.go:615] quota admission added evaluator for: endpoints
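	
	After the restart the API server reports its caches synced and re-registers the quota admission evaluators. The aggregated health checks behind that state can be queried through the same endpoint; a sketch:
	
	  kubectl --context multinode-103579 get --raw='/readyz?verbose' | tail -n 10
	  kubectl --context multinode-103579 get --raw='/livez'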
	
	
	==> kube-controller-manager [52e58cad6d23e0be49e31a60ca54dad76f241fe59124086b531b42b93dd18e8a] <==
	I0930 20:35:46.313389       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m02"
	I0930 20:35:46.313514       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-103579-m02"
	I0930 20:35:46.330631       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m02"
	I0930 20:35:46.341383       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="121.073µs"
	I0930 20:35:46.356809       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="70.605µs"
	I0930 20:35:48.407799       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m02"
	I0930 20:35:50.838522       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.762988ms"
	I0930 20:35:50.838643       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="40.752µs"
	I0930 20:35:57.176423       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m02"
	I0930 20:36:04.072180       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:36:04.090860       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:36:04.321021       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:36:04.321218       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-103579-m02"
	I0930 20:36:05.448304       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-103579-m03\" does not exist"
	I0930 20:36:05.451065       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-103579-m02"
	I0930 20:36:05.477232       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-103579-m03" podCIDRs=["10.244.2.0/24"]
	I0930 20:36:05.477273       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:36:05.477311       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:36:05.857620       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:36:06.218649       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:36:08.484713       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:36:15.709630       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:36:23.608314       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-103579-m02"
	I0930 20:36:23.609885       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:36:23.623063       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
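	
	The node-ipam-controller lines above show multinode-103579-m03 being re-allocated 10.244.2.0/24 after the controller restart (it held 10.244.5.0/24 in the previous run, below). The currently assigned ranges can be listed per node; a sketch:
	
	  kubectl --context multinode-103579 get nodes \
	    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'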
	
	
	==> kube-controller-manager [bc4433f6912398db4cb88e66d4cb7193f26ce5c3706dcb711cb87b571a031711] <==
	I0930 20:30:40.775713       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-103579-m02"
	I0930 20:30:40.776023       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:30:42.096599       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-103579-m02"
	I0930 20:30:42.096598       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-103579-m03\" does not exist"
	I0930 20:30:42.106726       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-103579-m03" podCIDRs=["10.244.5.0/24"]
	I0930 20:30:42.106855       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:30:42.108179       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:30:42.125796       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:30:42.315291       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:30:42.640057       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:30:46.968725       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:30:52.490337       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:31:00.522484       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:31:00.522566       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-103579-m03"
	I0930 20:31:00.537773       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:31:01.925169       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:31:41.943495       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m02"
	I0930 20:31:41.944409       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-103579-m03"
	I0930 20:31:41.965115       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m02"
	I0930 20:31:42.033932       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.238546ms"
	I0930 20:31:42.035520       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="152.572µs"
	I0930 20:31:47.016344       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:31:47.033328       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:31:47.052045       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m02"
	I0930 20:31:57.130249       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	
	
	==> kube-proxy [0974451661f0737436a583f454afc0982a4121c86e7d2d0334edbcd95bfecc78] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 20:27:44.781686       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 20:27:44.790585       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.58"]
	E0930 20:27:44.790802       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 20:27:44.820390       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 20:27:44.820432       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 20:27:44.820455       1 server_linux.go:169] "Using iptables Proxier"
	I0930 20:27:44.823622       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 20:27:44.824233       1 server.go:483] "Version info" version="v1.31.1"
	I0930 20:27:44.824304       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:27:44.825819       1 config.go:199] "Starting service config controller"
	I0930 20:27:44.825857       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 20:27:44.825902       1 config.go:105] "Starting endpoint slice config controller"
	I0930 20:27:44.825907       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 20:27:44.826338       1 config.go:328] "Starting node config controller"
	I0930 20:27:44.826413       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 20:27:44.926895       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 20:27:44.927057       1 shared_informer.go:320] Caches are synced for node config
	I0930 20:27:44.927065       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [49b216cdc18ef72b2e8c0cde275f96b74f5a451fea3294520dcc3a5ee59c0b93] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 20:34:45.851329       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 20:34:45.869450       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.58"]
	E0930 20:34:45.869584       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 20:34:45.927119       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 20:34:45.928217       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 20:34:45.928338       1 server_linux.go:169] "Using iptables Proxier"
	I0930 20:34:45.932141       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 20:34:45.932459       1 server.go:483] "Version info" version="v1.31.1"
	I0930 20:34:45.933245       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:34:45.936182       1 config.go:199] "Starting service config controller"
	I0930 20:34:45.936240       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 20:34:45.936278       1 config.go:105] "Starting endpoint slice config controller"
	I0930 20:34:45.936294       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 20:34:45.938798       1 config.go:328] "Starting node config controller"
	I0930 20:34:45.938829       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 20:34:46.036366       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 20:34:46.036431       1 shared_informer.go:320] Caches are synced for service config
	I0930 20:34:46.039357       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4a4131c58d0bf44d1303dc4391ae014e69c758eba279b4be21c3f4a473bed9d5] <==
	I0930 20:34:43.278737       1 serving.go:386] Generated self-signed cert in-memory
	W0930 20:34:44.821123       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0930 20:34:44.821259       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0930 20:34:44.821298       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0930 20:34:44.821369       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0930 20:34:44.866862       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0930 20:34:44.869028       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:34:44.878364       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0930 20:34:44.878534       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0930 20:34:44.878583       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0930 20:34:44.878606       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0930 20:34:44.981396       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [9596d6363e892d96ae7a53ca5a2dc7604d41239cb1f8bcc396dc8768356be785] <==
	E0930 20:27:35.754082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 20:27:35.964703       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0930 20:27:35.964781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 20:27:36.019316       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0930 20:27:36.019367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 20:27:36.083612       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0930 20:27:36.083668       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0930 20:27:36.147678       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0930 20:27:36.147765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 20:27:36.173415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0930 20:27:36.173539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 20:27:36.195722       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0930 20:27:36.195823       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 20:27:36.210467       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0930 20:27:36.210650       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 20:27:36.239904       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0930 20:27:36.239942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 20:27:36.260325       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 20:27:36.260543       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0930 20:27:36.293251       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0930 20:27:36.293551       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 20:27:36.322286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0930 20:27:36.322365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0930 20:27:39.524616       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0930 20:33:06.083774       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 30 20:34:51 multinode-103579 kubelet[2941]: E0930 20:34:51.009760    2941 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728491009041726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:34:51 multinode-103579 kubelet[2941]: E0930 20:34:51.010506    2941 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728491009041726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:35:01 multinode-103579 kubelet[2941]: E0930 20:35:01.012728    2941 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728501012466093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:35:01 multinode-103579 kubelet[2941]: E0930 20:35:01.012753    2941 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728501012466093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:35:11 multinode-103579 kubelet[2941]: E0930 20:35:11.015351    2941 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728511014029384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:35:11 multinode-103579 kubelet[2941]: E0930 20:35:11.015395    2941 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728511014029384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:35:21 multinode-103579 kubelet[2941]: E0930 20:35:21.018132    2941 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728521016395280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:35:21 multinode-103579 kubelet[2941]: E0930 20:35:21.018455    2941 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728521016395280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:35:31 multinode-103579 kubelet[2941]: E0930 20:35:31.025389    2941 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728531021375362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:35:31 multinode-103579 kubelet[2941]: E0930 20:35:31.025469    2941 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728531021375362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:35:41 multinode-103579 kubelet[2941]: E0930 20:35:41.027097    2941 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728541026701273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:35:41 multinode-103579 kubelet[2941]: E0930 20:35:41.027506    2941 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728541026701273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:35:41 multinode-103579 kubelet[2941]: E0930 20:35:41.029486    2941 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 20:35:41 multinode-103579 kubelet[2941]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 20:35:41 multinode-103579 kubelet[2941]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 20:35:41 multinode-103579 kubelet[2941]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 20:35:41 multinode-103579 kubelet[2941]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 20:35:51 multinode-103579 kubelet[2941]: E0930 20:35:51.031799    2941 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728551030490157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:35:51 multinode-103579 kubelet[2941]: E0930 20:35:51.032425    2941 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728551030490157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:36:01 multinode-103579 kubelet[2941]: E0930 20:36:01.035151    2941 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728561034622160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:36:01 multinode-103579 kubelet[2941]: E0930 20:36:01.035299    2941 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728561034622160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:36:11 multinode-103579 kubelet[2941]: E0930 20:36:11.039101    2941 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728571038314565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:36:11 multinode-103579 kubelet[2941]: E0930 20:36:11.039144    2941 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728571038314565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:36:21 multinode-103579 kubelet[2941]: E0930 20:36:21.045401    2941 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728581043933231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:36:21 multinode-103579 kubelet[2941]: E0930 20:36:21.045435    2941 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728581043933231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 20:36:26.179324   45547 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19736-7672/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-103579 -n multinode-103579
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-103579 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (324.88s)
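
Note on the "token too long" line in the stderr block above: that message is Go's bufio.ErrTooLong, raised when a bufio.Scanner meets a single line longer than its default 64 KiB token limit (bufio.MaxScanTokenSize), which the very long flattened cluster-config entries in lastStart.txt can easily exceed. The following is only an illustrative sketch of reading such a file with a larger scanner buffer; the file name and limits are assumptions for the example, not minikube's actual logs.go code.

// Sketch: avoid "bufio.Scanner: token too long" on files with very long lines.
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // hypothetical path, for illustration only
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Default limit is bufio.MaxScanTokenSize (64 KiB); a longer line makes
	// Scan() stop and sc.Err() return "bufio.Scanner: token too long".
	// Raise the per-line cap to 1 MiB for this reader.
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)

	for sc.Scan() {
		_ = sc.Text() // process each log line
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan failed:", err)
	}
}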

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (144.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 stop
E0930 20:36:32.001857   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:38:28.936099   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-103579 stop: exit status 82 (2m0.474244438s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-103579-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-103579 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-103579 status: (18.671909317s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-103579 status --alsologtostderr: (3.391895539s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-103579 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-103579 status --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-103579 -n multinode-103579
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-103579 logs -n 25: (1.40733372s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-103579 ssh -n                                                                 | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-103579 cp multinode-103579-m02:/home/docker/cp-test.txt                       | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579:/home/docker/cp-test_multinode-103579-m02_multinode-103579.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-103579 ssh -n                                                                 | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-103579 ssh -n multinode-103579 sudo cat                                       | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | /home/docker/cp-test_multinode-103579-m02_multinode-103579.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-103579 cp multinode-103579-m02:/home/docker/cp-test.txt                       | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579-m03:/home/docker/cp-test_multinode-103579-m02_multinode-103579-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-103579 ssh -n                                                                 | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-103579 ssh -n multinode-103579-m03 sudo cat                                   | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | /home/docker/cp-test_multinode-103579-m02_multinode-103579-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-103579 cp testdata/cp-test.txt                                                | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-103579 ssh -n                                                                 | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-103579 cp multinode-103579-m03:/home/docker/cp-test.txt                       | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3699104417/001/cp-test_multinode-103579-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-103579 ssh -n                                                                 | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-103579 cp multinode-103579-m03:/home/docker/cp-test.txt                       | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579:/home/docker/cp-test_multinode-103579-m03_multinode-103579.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-103579 ssh -n                                                                 | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-103579 ssh -n multinode-103579 sudo cat                                       | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | /home/docker/cp-test_multinode-103579-m03_multinode-103579.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-103579 cp multinode-103579-m03:/home/docker/cp-test.txt                       | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579-m02:/home/docker/cp-test_multinode-103579-m03_multinode-103579-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-103579 ssh -n                                                                 | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-103579 ssh -n multinode-103579-m02 sudo cat                                   | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | /home/docker/cp-test_multinode-103579-m03_multinode-103579-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-103579 node stop m03                                                          | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	| node    | multinode-103579 node start                                                             | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:31 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-103579                                                                | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:31 UTC |                     |
	| stop    | -p multinode-103579                                                                     | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:31 UTC |                     |
	| start   | -p multinode-103579                                                                     | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:33 UTC | 30 Sep 24 20:36 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-103579                                                                | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:36 UTC |                     |
	| node    | multinode-103579 node delete                                                            | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:36 UTC | 30 Sep 24 20:36 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-103579 stop                                                                   | multinode-103579 | jenkins | v1.34.0 | 30 Sep 24 20:36 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 20:33:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 20:33:05.104361   44409 out.go:345] Setting OutFile to fd 1 ...
	I0930 20:33:05.104647   44409 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:33:05.104657   44409 out.go:358] Setting ErrFile to fd 2...
	I0930 20:33:05.104672   44409 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:33:05.104864   44409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 20:33:05.105483   44409 out.go:352] Setting JSON to false
	I0930 20:33:05.106393   44409 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4528,"bootTime":1727723857,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 20:33:05.106497   44409 start.go:139] virtualization: kvm guest
	I0930 20:33:05.108497   44409 out.go:177] * [multinode-103579] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 20:33:05.109887   44409 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 20:33:05.109902   44409 notify.go:220] Checking for updates...
	I0930 20:33:05.112146   44409 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 20:33:05.113418   44409 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:33:05.114662   44409 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:33:05.115918   44409 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 20:33:05.117214   44409 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 20:33:05.118865   44409 config.go:182] Loaded profile config "multinode-103579": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:33:05.118983   44409 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 20:33:05.119481   44409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:33:05.119558   44409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:33:05.136881   44409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42787
	I0930 20:33:05.137331   44409 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:33:05.137961   44409 main.go:141] libmachine: Using API Version  1
	I0930 20:33:05.137987   44409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:33:05.138379   44409 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:33:05.138726   44409 main.go:141] libmachine: (multinode-103579) Calling .DriverName
	I0930 20:33:05.176162   44409 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 20:33:05.177421   44409 start.go:297] selected driver: kvm2
	I0930 20:33:05.177441   44409 start.go:901] validating driver "kvm2" against &{Name:multinode-103579 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-103579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:33:05.177598   44409 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 20:33:05.177978   44409 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 20:33:05.178076   44409 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 20:33:05.193853   44409 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 20:33:05.194577   44409 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 20:33:05.194617   44409 cni.go:84] Creating CNI manager for ""
	I0930 20:33:05.194678   44409 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0930 20:33:05.194755   44409 start.go:340] cluster config:
	{Name:multinode-103579 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-103579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:33:05.194901   44409 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 20:33:05.197678   44409 out.go:177] * Starting "multinode-103579" primary control-plane node in "multinode-103579" cluster
	I0930 20:33:05.199048   44409 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 20:33:05.199118   44409 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 20:33:05.199135   44409 cache.go:56] Caching tarball of preloaded images
	I0930 20:33:05.199236   44409 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 20:33:05.199248   44409 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 20:33:05.199390   44409 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579/config.json ...
	I0930 20:33:05.199695   44409 start.go:360] acquireMachinesLock for multinode-103579: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 20:33:05.199751   44409 start.go:364] duration metric: took 33.009µs to acquireMachinesLock for "multinode-103579"
	I0930 20:33:05.199771   44409 start.go:96] Skipping create...Using existing machine configuration
	I0930 20:33:05.199784   44409 fix.go:54] fixHost starting: 
	I0930 20:33:05.200049   44409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:33:05.200087   44409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:33:05.215139   44409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44047
	I0930 20:33:05.215730   44409 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:33:05.216220   44409 main.go:141] libmachine: Using API Version  1
	I0930 20:33:05.216240   44409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:33:05.216565   44409 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:33:05.216733   44409 main.go:141] libmachine: (multinode-103579) Calling .DriverName
	I0930 20:33:05.216884   44409 main.go:141] libmachine: (multinode-103579) Calling .GetState
	I0930 20:33:05.218546   44409 fix.go:112] recreateIfNeeded on multinode-103579: state=Running err=<nil>
	W0930 20:33:05.218583   44409 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 20:33:05.220631   44409 out.go:177] * Updating the running kvm2 "multinode-103579" VM ...
	I0930 20:33:05.221885   44409 machine.go:93] provisionDockerMachine start ...
	I0930 20:33:05.221908   44409 main.go:141] libmachine: (multinode-103579) Calling .DriverName
	I0930 20:33:05.222155   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHHostname
	I0930 20:33:05.224995   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.225535   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:33:05.225572   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.225703   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHPort
	I0930 20:33:05.225871   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:33:05.226006   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:33:05.226128   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHUsername
	I0930 20:33:05.226254   44409 main.go:141] libmachine: Using SSH client type: native
	I0930 20:33:05.226477   44409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0930 20:33:05.226487   44409 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 20:33:05.344387   44409 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-103579
	
	I0930 20:33:05.344421   44409 main.go:141] libmachine: (multinode-103579) Calling .GetMachineName
	I0930 20:33:05.344688   44409 buildroot.go:166] provisioning hostname "multinode-103579"
	I0930 20:33:05.344716   44409 main.go:141] libmachine: (multinode-103579) Calling .GetMachineName
	I0930 20:33:05.344903   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHHostname
	I0930 20:33:05.347576   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.347952   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:33:05.347978   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.348116   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHPort
	I0930 20:33:05.348288   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:33:05.348414   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:33:05.348527   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHUsername
	I0930 20:33:05.348700   44409 main.go:141] libmachine: Using SSH client type: native
	I0930 20:33:05.348905   44409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0930 20:33:05.348922   44409 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-103579 && echo "multinode-103579" | sudo tee /etc/hostname
	I0930 20:33:05.480794   44409 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-103579
	
	I0930 20:33:05.480825   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHHostname
	I0930 20:33:05.483629   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.484143   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:33:05.484186   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.484398   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHPort
	I0930 20:33:05.484598   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:33:05.484847   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:33:05.484985   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHUsername
	I0930 20:33:05.485168   44409 main.go:141] libmachine: Using SSH client type: native
	I0930 20:33:05.485338   44409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0930 20:33:05.485354   44409 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-103579' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-103579/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-103579' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 20:33:05.600359   44409 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 20:33:05.600394   44409 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 20:33:05.600446   44409 buildroot.go:174] setting up certificates
	I0930 20:33:05.600456   44409 provision.go:84] configureAuth start
	I0930 20:33:05.600466   44409 main.go:141] libmachine: (multinode-103579) Calling .GetMachineName
	I0930 20:33:05.600743   44409 main.go:141] libmachine: (multinode-103579) Calling .GetIP
	I0930 20:33:05.603593   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.603961   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:33:05.603988   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.604096   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHHostname
	I0930 20:33:05.606462   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.606831   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:33:05.606856   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.606968   44409 provision.go:143] copyHostCerts
	I0930 20:33:05.606993   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:33:05.607033   44409 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 20:33:05.607044   44409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:33:05.607108   44409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 20:33:05.607213   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:33:05.607232   44409 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 20:33:05.607236   44409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:33:05.607259   44409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 20:33:05.607320   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:33:05.607337   44409 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 20:33:05.607341   44409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:33:05.607361   44409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 20:33:05.607418   44409 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.multinode-103579 san=[127.0.0.1 192.168.39.58 localhost minikube multinode-103579]
	I0930 20:33:05.765715   44409 provision.go:177] copyRemoteCerts
	I0930 20:33:05.765773   44409 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 20:33:05.765804   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHHostname
	I0930 20:33:05.768448   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.768879   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:33:05.768913   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.769080   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHPort
	I0930 20:33:05.769278   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:33:05.769451   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHUsername
	I0930 20:33:05.769594   44409 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/multinode-103579/id_rsa Username:docker}
	I0930 20:33:05.861441   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 20:33:05.861527   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 20:33:05.886209   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 20:33:05.886291   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0930 20:33:05.910367   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 20:33:05.910431   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 20:33:05.936743   44409 provision.go:87] duration metric: took 336.274216ms to configureAuth
	I0930 20:33:05.936780   44409 buildroot.go:189] setting minikube options for container-runtime
	I0930 20:33:05.937031   44409 config.go:182] Loaded profile config "multinode-103579": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:33:05.937122   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHHostname
	I0930 20:33:05.940214   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.940626   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:33:05.940658   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:33:05.940887   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHPort
	I0930 20:33:05.941086   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:33:05.941230   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:33:05.941482   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHUsername
	I0930 20:33:05.941672   44409 main.go:141] libmachine: Using SSH client type: native
	I0930 20:33:05.941836   44409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0930 20:33:05.941851   44409 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 20:34:36.574929   44409 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 20:34:36.574967   44409 machine.go:96] duration metric: took 1m31.353066244s to provisionDockerMachine
	I0930 20:34:36.574986   44409 start.go:293] postStartSetup for "multinode-103579" (driver="kvm2")
	I0930 20:34:36.574997   44409 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 20:34:36.575012   44409 main.go:141] libmachine: (multinode-103579) Calling .DriverName
	I0930 20:34:36.575411   44409 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 20:34:36.575443   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHHostname
	I0930 20:34:36.578639   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:34:36.579053   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:34:36.579077   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:34:36.579252   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHPort
	I0930 20:34:36.579437   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:34:36.579655   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHUsername
	I0930 20:34:36.579801   44409 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/multinode-103579/id_rsa Username:docker}
	I0930 20:34:36.667514   44409 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 20:34:36.671899   44409 command_runner.go:130] > NAME=Buildroot
	I0930 20:34:36.671924   44409 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0930 20:34:36.671929   44409 command_runner.go:130] > ID=buildroot
	I0930 20:34:36.671935   44409 command_runner.go:130] > VERSION_ID=2023.02.9
	I0930 20:34:36.671940   44409 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0930 20:34:36.671993   44409 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 20:34:36.672008   44409 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 20:34:36.672072   44409 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 20:34:36.672148   44409 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 20:34:36.672156   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /etc/ssl/certs/148752.pem
	I0930 20:34:36.672270   44409 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 20:34:36.681703   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:34:36.704771   44409 start.go:296] duration metric: took 129.768933ms for postStartSetup
	I0930 20:34:36.704824   44409 fix.go:56] duration metric: took 1m31.505040857s for fixHost
	I0930 20:34:36.704848   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHHostname
	I0930 20:34:36.708051   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:34:36.708484   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:34:36.708523   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:34:36.708748   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHPort
	I0930 20:34:36.708940   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:34:36.709171   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:34:36.709385   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHUsername
	I0930 20:34:36.709632   44409 main.go:141] libmachine: Using SSH client type: native
	I0930 20:34:36.709801   44409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0930 20:34:36.709812   44409 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 20:34:36.824369   44409 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727728476.800338586
	
	I0930 20:34:36.824400   44409 fix.go:216] guest clock: 1727728476.800338586
	I0930 20:34:36.824410   44409 fix.go:229] Guest: 2024-09-30 20:34:36.800338586 +0000 UTC Remote: 2024-09-30 20:34:36.704829654 +0000 UTC m=+91.637775823 (delta=95.508932ms)
	I0930 20:34:36.824479   44409 fix.go:200] guest clock delta is within tolerance: 95.508932ms
	I0930 20:34:36.824487   44409 start.go:83] releasing machines lock for "multinode-103579", held for 1m31.624722762s
	I0930 20:34:36.824517   44409 main.go:141] libmachine: (multinode-103579) Calling .DriverName
	I0930 20:34:36.824824   44409 main.go:141] libmachine: (multinode-103579) Calling .GetIP
	I0930 20:34:36.827320   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:34:36.827767   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:34:36.827797   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:34:36.827983   44409 main.go:141] libmachine: (multinode-103579) Calling .DriverName
	I0930 20:34:36.828568   44409 main.go:141] libmachine: (multinode-103579) Calling .DriverName
	I0930 20:34:36.828747   44409 main.go:141] libmachine: (multinode-103579) Calling .DriverName
	I0930 20:34:36.828813   44409 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 20:34:36.828878   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHHostname
	I0930 20:34:36.828970   44409 ssh_runner.go:195] Run: cat /version.json
	I0930 20:34:36.828987   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHHostname
	I0930 20:34:36.831925   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:34:36.831951   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:34:36.832378   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:34:36.832429   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:34:36.832483   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:34:36.832516   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:34:36.832558   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHPort
	I0930 20:34:36.832712   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHPort
	I0930 20:34:36.832780   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:34:36.832873   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:34:36.832912   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHUsername
	I0930 20:34:36.833006   44409 main.go:141] libmachine: (multinode-103579) Calling .GetSSHUsername
	I0930 20:34:36.833077   44409 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/multinode-103579/id_rsa Username:docker}
	I0930 20:34:36.833242   44409 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/multinode-103579/id_rsa Username:docker}
	I0930 20:34:36.913257   44409 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I0930 20:34:36.913482   44409 ssh_runner.go:195] Run: systemctl --version
	I0930 20:34:36.955679   44409 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0930 20:34:36.955770   44409 command_runner.go:130] > systemd 252 (252)
	I0930 20:34:36.955806   44409 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0930 20:34:36.955904   44409 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 20:34:37.123145   44409 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0930 20:34:37.129454   44409 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0930 20:34:37.129509   44409 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 20:34:37.129579   44409 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 20:34:37.140093   44409 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0930 20:34:37.140123   44409 start.go:495] detecting cgroup driver to use...
	I0930 20:34:37.140210   44409 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 20:34:37.158154   44409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 20:34:37.173190   44409 docker.go:217] disabling cri-docker service (if available) ...
	I0930 20:34:37.173259   44409 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 20:34:37.188312   44409 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 20:34:37.203120   44409 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 20:34:37.349033   44409 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 20:34:37.498594   44409 docker.go:233] disabling docker service ...
	I0930 20:34:37.498675   44409 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 20:34:37.516391   44409 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 20:34:37.530731   44409 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 20:34:37.676512   44409 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 20:34:37.817443   44409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 20:34:37.831261   44409 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 20:34:37.850195   44409 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0930 20:34:37.850741   44409 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 20:34:37.850810   44409 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:34:37.861283   44409 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 20:34:37.861365   44409 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:34:37.871704   44409 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:34:37.882926   44409 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:34:37.893404   44409 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 20:34:37.904336   44409 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:34:37.914593   44409 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:34:37.926919   44409 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:34:37.937582   44409 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 20:34:37.947470   44409 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0930 20:34:37.947591   44409 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 20:34:37.957286   44409 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:34:38.091891   44409 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 20:34:38.291918   44409 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 20:34:38.291990   44409 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 20:34:38.296787   44409 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0930 20:34:38.296810   44409 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0930 20:34:38.296816   44409 command_runner.go:130] > Device: 0,22	Inode: 1338        Links: 1
	I0930 20:34:38.296823   44409 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0930 20:34:38.296831   44409 command_runner.go:130] > Access: 2024-09-30 20:34:38.159831106 +0000
	I0930 20:34:38.296840   44409 command_runner.go:130] > Modify: 2024-09-30 20:34:38.159831106 +0000
	I0930 20:34:38.296848   44409 command_runner.go:130] > Change: 2024-09-30 20:34:38.159831106 +0000
	I0930 20:34:38.296852   44409 command_runner.go:130] >  Birth: -
	I0930 20:34:38.296881   44409 start.go:563] Will wait 60s for crictl version
	I0930 20:34:38.296931   44409 ssh_runner.go:195] Run: which crictl
	I0930 20:34:38.301146   44409 command_runner.go:130] > /usr/bin/crictl
	I0930 20:34:38.301226   44409 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 20:34:38.342412   44409 command_runner.go:130] > Version:  0.1.0
	I0930 20:34:38.342436   44409 command_runner.go:130] > RuntimeName:  cri-o
	I0930 20:34:38.342442   44409 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0930 20:34:38.342450   44409 command_runner.go:130] > RuntimeApiVersion:  v1
	I0930 20:34:38.342585   44409 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 20:34:38.342664   44409 ssh_runner.go:195] Run: crio --version
	I0930 20:34:38.371124   44409 command_runner.go:130] > crio version 1.29.1
	I0930 20:34:38.371153   44409 command_runner.go:130] > Version:        1.29.1
	I0930 20:34:38.371162   44409 command_runner.go:130] > GitCommit:      unknown
	I0930 20:34:38.371167   44409 command_runner.go:130] > GitCommitDate:  unknown
	I0930 20:34:38.371173   44409 command_runner.go:130] > GitTreeState:   clean
	I0930 20:34:38.371180   44409 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I0930 20:34:38.371186   44409 command_runner.go:130] > GoVersion:      go1.21.6
	I0930 20:34:38.371191   44409 command_runner.go:130] > Compiler:       gc
	I0930 20:34:38.371198   44409 command_runner.go:130] > Platform:       linux/amd64
	I0930 20:34:38.371206   44409 command_runner.go:130] > Linkmode:       dynamic
	I0930 20:34:38.371215   44409 command_runner.go:130] > BuildTags:      
	I0930 20:34:38.371224   44409 command_runner.go:130] >   containers_image_ostree_stub
	I0930 20:34:38.371234   44409 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0930 20:34:38.371242   44409 command_runner.go:130] >   btrfs_noversion
	I0930 20:34:38.371251   44409 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0930 20:34:38.371260   44409 command_runner.go:130] >   libdm_no_deferred_remove
	I0930 20:34:38.371269   44409 command_runner.go:130] >   seccomp
	I0930 20:34:38.371283   44409 command_runner.go:130] > LDFlags:          unknown
	I0930 20:34:38.371338   44409 command_runner.go:130] > SeccompEnabled:   true
	I0930 20:34:38.371364   44409 command_runner.go:130] > AppArmorEnabled:  false
	I0930 20:34:38.371443   44409 ssh_runner.go:195] Run: crio --version
	I0930 20:34:38.400730   44409 command_runner.go:130] > crio version 1.29.1
	I0930 20:34:38.400751   44409 command_runner.go:130] > Version:        1.29.1
	I0930 20:34:38.400763   44409 command_runner.go:130] > GitCommit:      unknown
	I0930 20:34:38.400767   44409 command_runner.go:130] > GitCommitDate:  unknown
	I0930 20:34:38.400770   44409 command_runner.go:130] > GitTreeState:   clean
	I0930 20:34:38.400776   44409 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I0930 20:34:38.400780   44409 command_runner.go:130] > GoVersion:      go1.21.6
	I0930 20:34:38.400784   44409 command_runner.go:130] > Compiler:       gc
	I0930 20:34:38.400788   44409 command_runner.go:130] > Platform:       linux/amd64
	I0930 20:34:38.400795   44409 command_runner.go:130] > Linkmode:       dynamic
	I0930 20:34:38.400799   44409 command_runner.go:130] > BuildTags:      
	I0930 20:34:38.400804   44409 command_runner.go:130] >   containers_image_ostree_stub
	I0930 20:34:38.400808   44409 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0930 20:34:38.400813   44409 command_runner.go:130] >   btrfs_noversion
	I0930 20:34:38.400820   44409 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0930 20:34:38.400830   44409 command_runner.go:130] >   libdm_no_deferred_remove
	I0930 20:34:38.400836   44409 command_runner.go:130] >   seccomp
	I0930 20:34:38.400847   44409 command_runner.go:130] > LDFlags:          unknown
	I0930 20:34:38.400854   44409 command_runner.go:130] > SeccompEnabled:   true
	I0930 20:34:38.400861   44409 command_runner.go:130] > AppArmorEnabled:  false
	I0930 20:34:38.403170   44409 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 20:34:38.404656   44409 main.go:141] libmachine: (multinode-103579) Calling .GetIP
	I0930 20:34:38.407302   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:34:38.407661   44409 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:34:38.407692   44409 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:34:38.407932   44409 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 20:34:38.412262   44409 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0930 20:34:38.412395   44409 kubeadm.go:883] updating cluster {Name:multinode-103579 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-103579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 20:34:38.412529   44409 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 20:34:38.412577   44409 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 20:34:38.454352   44409 command_runner.go:130] > {
	I0930 20:34:38.454376   44409 command_runner.go:130] >   "images": [
	I0930 20:34:38.454382   44409 command_runner.go:130] >     {
	I0930 20:34:38.454392   44409 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0930 20:34:38.454399   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.454407   44409 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0930 20:34:38.454414   44409 command_runner.go:130] >       ],
	I0930 20:34:38.454419   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.454446   44409 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0930 20:34:38.454457   44409 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0930 20:34:38.454489   44409 command_runner.go:130] >       ],
	I0930 20:34:38.454501   44409 command_runner.go:130] >       "size": "87190579",
	I0930 20:34:38.454507   44409 command_runner.go:130] >       "uid": null,
	I0930 20:34:38.454513   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.454525   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.454534   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.454553   44409 command_runner.go:130] >     },
	I0930 20:34:38.454562   44409 command_runner.go:130] >     {
	I0930 20:34:38.454571   44409 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0930 20:34:38.454578   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.454586   44409 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0930 20:34:38.454595   44409 command_runner.go:130] >       ],
	I0930 20:34:38.454604   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.454616   44409 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0930 20:34:38.454632   44409 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0930 20:34:38.454642   44409 command_runner.go:130] >       ],
	I0930 20:34:38.454651   44409 command_runner.go:130] >       "size": "1363676",
	I0930 20:34:38.454660   44409 command_runner.go:130] >       "uid": null,
	I0930 20:34:38.454675   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.454684   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.454696   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.454704   44409 command_runner.go:130] >     },
	I0930 20:34:38.454709   44409 command_runner.go:130] >     {
	I0930 20:34:38.454720   44409 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0930 20:34:38.454729   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.454738   44409 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0930 20:34:38.454746   44409 command_runner.go:130] >       ],
	I0930 20:34:38.454755   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.454771   44409 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0930 20:34:38.454788   44409 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0930 20:34:38.454797   44409 command_runner.go:130] >       ],
	I0930 20:34:38.454806   44409 command_runner.go:130] >       "size": "31470524",
	I0930 20:34:38.454814   44409 command_runner.go:130] >       "uid": null,
	I0930 20:34:38.454822   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.454830   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.454838   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.454847   44409 command_runner.go:130] >     },
	I0930 20:34:38.454854   44409 command_runner.go:130] >     {
	I0930 20:34:38.454867   44409 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0930 20:34:38.454875   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.454886   44409 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0930 20:34:38.454893   44409 command_runner.go:130] >       ],
	I0930 20:34:38.454902   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.454919   44409 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0930 20:34:38.454938   44409 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0930 20:34:38.454946   44409 command_runner.go:130] >       ],
	I0930 20:34:38.454954   44409 command_runner.go:130] >       "size": "63273227",
	I0930 20:34:38.454963   44409 command_runner.go:130] >       "uid": null,
	I0930 20:34:38.454971   44409 command_runner.go:130] >       "username": "nonroot",
	I0930 20:34:38.454981   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.454990   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.454996   44409 command_runner.go:130] >     },
	I0930 20:34:38.455003   44409 command_runner.go:130] >     {
	I0930 20:34:38.455016   44409 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0930 20:34:38.455025   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.455034   44409 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0930 20:34:38.455042   44409 command_runner.go:130] >       ],
	I0930 20:34:38.455050   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.455064   44409 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0930 20:34:38.455079   44409 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0930 20:34:38.455087   44409 command_runner.go:130] >       ],
	I0930 20:34:38.455095   44409 command_runner.go:130] >       "size": "149009664",
	I0930 20:34:38.455104   44409 command_runner.go:130] >       "uid": {
	I0930 20:34:38.455111   44409 command_runner.go:130] >         "value": "0"
	I0930 20:34:38.455119   44409 command_runner.go:130] >       },
	I0930 20:34:38.455126   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.455136   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.455145   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.455152   44409 command_runner.go:130] >     },
	I0930 20:34:38.455160   44409 command_runner.go:130] >     {
	I0930 20:34:38.455170   44409 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0930 20:34:38.455179   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.455189   44409 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0930 20:34:38.455197   44409 command_runner.go:130] >       ],
	I0930 20:34:38.455204   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.455219   44409 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0930 20:34:38.455234   44409 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0930 20:34:38.455242   44409 command_runner.go:130] >       ],
	I0930 20:34:38.455251   44409 command_runner.go:130] >       "size": "95237600",
	I0930 20:34:38.455261   44409 command_runner.go:130] >       "uid": {
	I0930 20:34:38.455271   44409 command_runner.go:130] >         "value": "0"
	I0930 20:34:38.455277   44409 command_runner.go:130] >       },
	I0930 20:34:38.455287   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.455296   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.455303   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.455311   44409 command_runner.go:130] >     },
	I0930 20:34:38.455317   44409 command_runner.go:130] >     {
	I0930 20:34:38.455330   44409 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0930 20:34:38.455340   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.455351   44409 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0930 20:34:38.455360   44409 command_runner.go:130] >       ],
	I0930 20:34:38.455367   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.455383   44409 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0930 20:34:38.455399   44409 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0930 20:34:38.455407   44409 command_runner.go:130] >       ],
	I0930 20:34:38.455415   44409 command_runner.go:130] >       "size": "89437508",
	I0930 20:34:38.455424   44409 command_runner.go:130] >       "uid": {
	I0930 20:34:38.455432   44409 command_runner.go:130] >         "value": "0"
	I0930 20:34:38.455440   44409 command_runner.go:130] >       },
	I0930 20:34:38.455447   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.455457   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.455464   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.455472   44409 command_runner.go:130] >     },
	I0930 20:34:38.455478   44409 command_runner.go:130] >     {
	I0930 20:34:38.455492   44409 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0930 20:34:38.455501   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.455511   44409 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0930 20:34:38.455520   44409 command_runner.go:130] >       ],
	I0930 20:34:38.455544   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.455567   44409 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0930 20:34:38.455582   44409 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0930 20:34:38.455590   44409 command_runner.go:130] >       ],
	I0930 20:34:38.455598   44409 command_runner.go:130] >       "size": "92733849",
	I0930 20:34:38.455609   44409 command_runner.go:130] >       "uid": null,
	I0930 20:34:38.455620   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.455628   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.455635   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.455639   44409 command_runner.go:130] >     },
	I0930 20:34:38.455645   44409 command_runner.go:130] >     {
	I0930 20:34:38.455654   44409 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0930 20:34:38.455662   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.455670   44409 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0930 20:34:38.455676   44409 command_runner.go:130] >       ],
	I0930 20:34:38.455684   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.455699   44409 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0930 20:34:38.455714   44409 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0930 20:34:38.455722   44409 command_runner.go:130] >       ],
	I0930 20:34:38.455731   44409 command_runner.go:130] >       "size": "68420934",
	I0930 20:34:38.455748   44409 command_runner.go:130] >       "uid": {
	I0930 20:34:38.455759   44409 command_runner.go:130] >         "value": "0"
	I0930 20:34:38.455768   44409 command_runner.go:130] >       },
	I0930 20:34:38.455777   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.455785   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.455793   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.455802   44409 command_runner.go:130] >     },
	I0930 20:34:38.455809   44409 command_runner.go:130] >     {
	I0930 20:34:38.455822   44409 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0930 20:34:38.455832   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.455843   44409 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0930 20:34:38.455852   44409 command_runner.go:130] >       ],
	I0930 20:34:38.455859   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.455874   44409 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0930 20:34:38.455889   44409 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0930 20:34:38.455898   44409 command_runner.go:130] >       ],
	I0930 20:34:38.455906   44409 command_runner.go:130] >       "size": "742080",
	I0930 20:34:38.455914   44409 command_runner.go:130] >       "uid": {
	I0930 20:34:38.455923   44409 command_runner.go:130] >         "value": "65535"
	I0930 20:34:38.455932   44409 command_runner.go:130] >       },
	I0930 20:34:38.455941   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.455948   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.455957   44409 command_runner.go:130] >       "pinned": true
	I0930 20:34:38.455964   44409 command_runner.go:130] >     }
	I0930 20:34:38.455971   44409 command_runner.go:130] >   ]
	I0930 20:34:38.455978   44409 command_runner.go:130] > }
	I0930 20:34:38.456158   44409 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 20:34:38.456171   44409 crio.go:433] Images already preloaded, skipping extraction
	I0930 20:34:38.456238   44409 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 20:34:38.489704   44409 command_runner.go:130] > {
	I0930 20:34:38.489734   44409 command_runner.go:130] >   "images": [
	I0930 20:34:38.489740   44409 command_runner.go:130] >     {
	I0930 20:34:38.489752   44409 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0930 20:34:38.489763   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.489773   44409 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0930 20:34:38.489779   44409 command_runner.go:130] >       ],
	I0930 20:34:38.489785   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.489798   44409 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0930 20:34:38.489808   44409 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0930 20:34:38.489814   44409 command_runner.go:130] >       ],
	I0930 20:34:38.489822   44409 command_runner.go:130] >       "size": "87190579",
	I0930 20:34:38.489829   44409 command_runner.go:130] >       "uid": null,
	I0930 20:34:38.489838   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.489848   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.489858   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.489863   44409 command_runner.go:130] >     },
	I0930 20:34:38.489866   44409 command_runner.go:130] >     {
	I0930 20:34:38.489879   44409 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0930 20:34:38.489883   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.489890   44409 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0930 20:34:38.489896   44409 command_runner.go:130] >       ],
	I0930 20:34:38.489900   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.489907   44409 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0930 20:34:38.489923   44409 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0930 20:34:38.489927   44409 command_runner.go:130] >       ],
	I0930 20:34:38.489931   44409 command_runner.go:130] >       "size": "1363676",
	I0930 20:34:38.489935   44409 command_runner.go:130] >       "uid": null,
	I0930 20:34:38.489941   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.489947   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.489951   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.489956   44409 command_runner.go:130] >     },
	I0930 20:34:38.489960   44409 command_runner.go:130] >     {
	I0930 20:34:38.489967   44409 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0930 20:34:38.489974   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.489979   44409 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0930 20:34:38.489986   44409 command_runner.go:130] >       ],
	I0930 20:34:38.489990   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.490000   44409 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0930 20:34:38.490010   44409 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0930 20:34:38.490016   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490020   44409 command_runner.go:130] >       "size": "31470524",
	I0930 20:34:38.490026   44409 command_runner.go:130] >       "uid": null,
	I0930 20:34:38.490029   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.490034   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.490040   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.490044   44409 command_runner.go:130] >     },
	I0930 20:34:38.490048   44409 command_runner.go:130] >     {
	I0930 20:34:38.490054   44409 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0930 20:34:38.490059   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.490065   44409 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0930 20:34:38.490070   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490074   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.490083   44409 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0930 20:34:38.490096   44409 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0930 20:34:38.490101   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490105   44409 command_runner.go:130] >       "size": "63273227",
	I0930 20:34:38.490112   44409 command_runner.go:130] >       "uid": null,
	I0930 20:34:38.490116   44409 command_runner.go:130] >       "username": "nonroot",
	I0930 20:34:38.490122   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.490126   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.490131   44409 command_runner.go:130] >     },
	I0930 20:34:38.490135   44409 command_runner.go:130] >     {
	I0930 20:34:38.490143   44409 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0930 20:34:38.490150   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.490155   44409 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0930 20:34:38.490160   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490164   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.490173   44409 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0930 20:34:38.490181   44409 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0930 20:34:38.490187   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490191   44409 command_runner.go:130] >       "size": "149009664",
	I0930 20:34:38.490197   44409 command_runner.go:130] >       "uid": {
	I0930 20:34:38.490201   44409 command_runner.go:130] >         "value": "0"
	I0930 20:34:38.490206   44409 command_runner.go:130] >       },
	I0930 20:34:38.490210   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.490216   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.490220   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.490223   44409 command_runner.go:130] >     },
	I0930 20:34:38.490227   44409 command_runner.go:130] >     {
	I0930 20:34:38.490235   44409 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0930 20:34:38.490241   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.490246   44409 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0930 20:34:38.490249   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490252   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.490291   44409 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0930 20:34:38.490298   44409 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0930 20:34:38.490301   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490307   44409 command_runner.go:130] >       "size": "95237600",
	I0930 20:34:38.490311   44409 command_runner.go:130] >       "uid": {
	I0930 20:34:38.490316   44409 command_runner.go:130] >         "value": "0"
	I0930 20:34:38.490319   44409 command_runner.go:130] >       },
	I0930 20:34:38.490323   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.490330   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.490334   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.490340   44409 command_runner.go:130] >     },
	I0930 20:34:38.490343   44409 command_runner.go:130] >     {
	I0930 20:34:38.490351   44409 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0930 20:34:38.490358   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.490363   44409 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0930 20:34:38.490369   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490373   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.490383   44409 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0930 20:34:38.490393   44409 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0930 20:34:38.490398   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490402   44409 command_runner.go:130] >       "size": "89437508",
	I0930 20:34:38.490406   44409 command_runner.go:130] >       "uid": {
	I0930 20:34:38.490411   44409 command_runner.go:130] >         "value": "0"
	I0930 20:34:38.490415   44409 command_runner.go:130] >       },
	I0930 20:34:38.490420   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.490425   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.490430   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.490434   44409 command_runner.go:130] >     },
	I0930 20:34:38.490439   44409 command_runner.go:130] >     {
	I0930 20:34:38.490445   44409 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0930 20:34:38.490451   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.490455   44409 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0930 20:34:38.490461   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490465   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.490480   44409 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0930 20:34:38.490490   44409 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0930 20:34:38.490494   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490498   44409 command_runner.go:130] >       "size": "92733849",
	I0930 20:34:38.490503   44409 command_runner.go:130] >       "uid": null,
	I0930 20:34:38.490508   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.490515   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.490519   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.490524   44409 command_runner.go:130] >     },
	I0930 20:34:38.490527   44409 command_runner.go:130] >     {
	I0930 20:34:38.490535   44409 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0930 20:34:38.490540   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.490544   44409 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0930 20:34:38.490551   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490555   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.490564   44409 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0930 20:34:38.490578   44409 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0930 20:34:38.490586   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490595   44409 command_runner.go:130] >       "size": "68420934",
	I0930 20:34:38.490601   44409 command_runner.go:130] >       "uid": {
	I0930 20:34:38.490607   44409 command_runner.go:130] >         "value": "0"
	I0930 20:34:38.490617   44409 command_runner.go:130] >       },
	I0930 20:34:38.490623   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.490629   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.490635   44409 command_runner.go:130] >       "pinned": false
	I0930 20:34:38.490640   44409 command_runner.go:130] >     },
	I0930 20:34:38.490644   44409 command_runner.go:130] >     {
	I0930 20:34:38.490653   44409 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0930 20:34:38.490657   44409 command_runner.go:130] >       "repoTags": [
	I0930 20:34:38.490662   44409 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0930 20:34:38.490669   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490675   44409 command_runner.go:130] >       "repoDigests": [
	I0930 20:34:38.490682   44409 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0930 20:34:38.490692   44409 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0930 20:34:38.490698   44409 command_runner.go:130] >       ],
	I0930 20:34:38.490702   44409 command_runner.go:130] >       "size": "742080",
	I0930 20:34:38.490708   44409 command_runner.go:130] >       "uid": {
	I0930 20:34:38.490712   44409 command_runner.go:130] >         "value": "65535"
	I0930 20:34:38.490717   44409 command_runner.go:130] >       },
	I0930 20:34:38.490722   44409 command_runner.go:130] >       "username": "",
	I0930 20:34:38.490727   44409 command_runner.go:130] >       "spec": null,
	I0930 20:34:38.490732   44409 command_runner.go:130] >       "pinned": true
	I0930 20:34:38.490737   44409 command_runner.go:130] >     }
	I0930 20:34:38.490740   44409 command_runner.go:130] >   ]
	I0930 20:34:38.490744   44409 command_runner.go:130] > }
	I0930 20:34:38.490860   44409 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 20:34:38.490870   44409 cache_images.go:84] Images are preloaded, skipping loading
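	(Editor's note: the two `sudo crictl images --output json` dumps above are what drives the "all images are preloaded" / "Images are preloaded, skipping loading" decisions at crio.go:514 and cache_images.go:84. The sketch below shows the general shape of such a check in Go: shell out to crictl, decode the same JSON fields visible in the dump, and report missing tags. The struct fields and the expected-image list are illustrative assumptions for this report, not minikube's actual implementation.)

	// checkpreload.go: minimal sketch of a crictl-based preload check (assumed, not minikube's code).
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// criImage mirrors the fields visible in the `crictl images --output json` dump above.
	type criImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	}

	type criImageList struct {
		Images []criImage `json:"images"`
	}

	func main() {
		// Same command the log shows minikube running over SSH.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}

		var list criImageList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Println("decode failed:", err)
			return
		}

		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}

		// Example expectations matching the v1.31.1/crio image list logged above.
		expected := []string{
			"registry.k8s.io/kube-apiserver:v1.31.1",
			"registry.k8s.io/kube-controller-manager:v1.31.1",
			"registry.k8s.io/kube-scheduler:v1.31.1",
			"registry.k8s.io/kube-proxy:v1.31.1",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/coredns/coredns:v1.11.3",
			"registry.k8s.io/pause:3.10",
		}
		for _, tag := range expected {
			if !have[tag] {
				fmt.Println("missing:", tag)
			}
		}
		fmt.Println("check complete")
	}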
	I0930 20:34:38.490879   44409 kubeadm.go:934] updating node { 192.168.39.58 8443 v1.31.1 crio true true} ...
	I0930 20:34:38.491001   44409 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-103579 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-103579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
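	(Editor's note: the kubeadm.go:946 block above is the rendered kubelet systemd drop-in for node multinode-103579, with --hostname-override and --node-ip filled in from the node config on the preceding line. A minimal Go text/template sketch of that kind of rendering is shown below; the template text and struct fields are assumptions for illustration, not minikube's actual template.)

	// kubeletunit.go: illustrative rendering of a per-node kubelet drop-in (assumed, not minikube's code).
	package main

	import (
		"os"
		"text/template"
	)

	type nodeConfig struct {
		KubernetesVersion string
		NodeName          string
		NodeIP            string
	}

	const kubeletDropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
		// Values taken from the node update logged above at kubeadm.go:934.
		cfg := nodeConfig{
			KubernetesVersion: "v1.31.1",
			NodeName:          "multinode-103579",
			NodeIP:            "192.168.39.58",
		}
		if err := tmpl.Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}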
	I0930 20:34:38.491064   44409 ssh_runner.go:195] Run: crio config
	I0930 20:34:38.532237   44409 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0930 20:34:38.532267   44409 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0930 20:34:38.532277   44409 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0930 20:34:38.532282   44409 command_runner.go:130] > #
	I0930 20:34:38.532309   44409 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0930 20:34:38.532319   44409 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0930 20:34:38.532328   44409 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0930 20:34:38.532336   44409 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0930 20:34:38.532341   44409 command_runner.go:130] > # reload'.
	I0930 20:34:38.532349   44409 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0930 20:34:38.532359   44409 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0930 20:34:38.532369   44409 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0930 20:34:38.532382   44409 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0930 20:34:38.532388   44409 command_runner.go:130] > [crio]
	I0930 20:34:38.532399   44409 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0930 20:34:38.532412   44409 command_runner.go:130] > # containers images, in this directory.
	I0930 20:34:38.532423   44409 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0930 20:34:38.532439   44409 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0930 20:34:38.532451   44409 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0930 20:34:38.532465   44409 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0930 20:34:38.532698   44409 command_runner.go:130] > # imagestore = ""
	I0930 20:34:38.532722   44409 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0930 20:34:38.532732   44409 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0930 20:34:38.532871   44409 command_runner.go:130] > storage_driver = "overlay"
	I0930 20:34:38.532891   44409 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0930 20:34:38.532901   44409 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0930 20:34:38.532908   44409 command_runner.go:130] > storage_option = [
	I0930 20:34:38.533047   44409 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0930 20:34:38.533062   44409 command_runner.go:130] > ]
	I0930 20:34:38.533073   44409 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0930 20:34:38.533082   44409 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0930 20:34:38.533292   44409 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0930 20:34:38.533307   44409 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0930 20:34:38.533319   44409 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0930 20:34:38.533328   44409 command_runner.go:130] > # always happen on a node reboot
	I0930 20:34:38.533612   44409 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0930 20:34:38.533635   44409 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0930 20:34:38.533645   44409 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0930 20:34:38.533655   44409 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0930 20:34:38.533817   44409 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0930 20:34:38.533837   44409 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0930 20:34:38.533851   44409 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0930 20:34:38.534016   44409 command_runner.go:130] > # internal_wipe = true
	I0930 20:34:38.534034   44409 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0930 20:34:38.534044   44409 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0930 20:34:38.534486   44409 command_runner.go:130] > # internal_repair = false
	I0930 20:34:38.534503   44409 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0930 20:34:38.534514   44409 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0930 20:34:38.534526   44409 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0930 20:34:38.534704   44409 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0930 20:34:38.534719   44409 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0930 20:34:38.534725   44409 command_runner.go:130] > [crio.api]
	I0930 20:34:38.534733   44409 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0930 20:34:38.534915   44409 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0930 20:34:38.534936   44409 command_runner.go:130] > # IP address on which the stream server will listen.
	I0930 20:34:38.535152   44409 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0930 20:34:38.535169   44409 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0930 20:34:38.535178   44409 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0930 20:34:38.535373   44409 command_runner.go:130] > # stream_port = "0"
	I0930 20:34:38.535385   44409 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0930 20:34:38.535658   44409 command_runner.go:130] > # stream_enable_tls = false
	I0930 20:34:38.535674   44409 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0930 20:34:38.535864   44409 command_runner.go:130] > # stream_idle_timeout = ""
	I0930 20:34:38.535880   44409 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0930 20:34:38.535890   44409 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0930 20:34:38.535899   44409 command_runner.go:130] > # minutes.
	I0930 20:34:38.536060   44409 command_runner.go:130] > # stream_tls_cert = ""
	I0930 20:34:38.536074   44409 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0930 20:34:38.536080   44409 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0930 20:34:38.536380   44409 command_runner.go:130] > # stream_tls_key = ""
	I0930 20:34:38.536398   44409 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0930 20:34:38.536409   44409 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0930 20:34:38.536427   44409 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0930 20:34:38.536615   44409 command_runner.go:130] > # stream_tls_ca = ""
	I0930 20:34:38.536627   44409 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0930 20:34:38.537111   44409 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0930 20:34:38.537131   44409 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0930 20:34:38.537140   44409 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0930 20:34:38.537151   44409 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0930 20:34:38.537162   44409 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0930 20:34:38.537168   44409 command_runner.go:130] > [crio.runtime]
	I0930 20:34:38.537179   44409 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0930 20:34:38.537189   44409 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0930 20:34:38.537197   44409 command_runner.go:130] > # "nofile=1024:2048"
	I0930 20:34:38.537206   44409 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0930 20:34:38.537215   44409 command_runner.go:130] > # default_ulimits = [
	I0930 20:34:38.537219   44409 command_runner.go:130] > # ]
	I0930 20:34:38.537229   44409 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0930 20:34:38.537241   44409 command_runner.go:130] > # no_pivot = false
	I0930 20:34:38.537249   44409 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0930 20:34:38.537261   44409 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0930 20:34:38.537271   44409 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0930 20:34:38.537280   44409 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0930 20:34:38.537290   44409 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0930 20:34:38.537308   44409 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0930 20:34:38.537319   44409 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0930 20:34:38.537326   44409 command_runner.go:130] > # Cgroup setting for conmon
	I0930 20:34:38.537339   44409 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0930 20:34:38.537349   44409 command_runner.go:130] > conmon_cgroup = "pod"
	I0930 20:34:38.537358   44409 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0930 20:34:38.537368   44409 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0930 20:34:38.537382   44409 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0930 20:34:38.537391   44409 command_runner.go:130] > conmon_env = [
	I0930 20:34:38.537405   44409 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0930 20:34:38.537414   44409 command_runner.go:130] > ]
	I0930 20:34:38.537425   44409 command_runner.go:130] > # Additional environment variables to set for all the
	I0930 20:34:38.537437   44409 command_runner.go:130] > # containers. These are overridden if set in the
	I0930 20:34:38.537449   44409 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0930 20:34:38.537456   44409 command_runner.go:130] > # default_env = [
	I0930 20:34:38.537465   44409 command_runner.go:130] > # ]
	I0930 20:34:38.537477   44409 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0930 20:34:38.537491   44409 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0930 20:34:38.537503   44409 command_runner.go:130] > # selinux = false
	I0930 20:34:38.537514   44409 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0930 20:34:38.537527   44409 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0930 20:34:38.537536   44409 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0930 20:34:38.537545   44409 command_runner.go:130] > # seccomp_profile = ""
	I0930 20:34:38.537554   44409 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0930 20:34:38.537567   44409 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0930 20:34:38.537579   44409 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0930 20:34:38.537589   44409 command_runner.go:130] > # which might increase security.
	I0930 20:34:38.537599   44409 command_runner.go:130] > # This option is currently deprecated,
	I0930 20:34:38.537608   44409 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0930 20:34:38.537618   44409 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0930 20:34:38.537628   44409 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0930 20:34:38.537641   44409 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0930 20:34:38.537654   44409 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0930 20:34:38.537668   44409 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0930 20:34:38.537679   44409 command_runner.go:130] > # This option supports live configuration reload.
	I0930 20:34:38.537686   44409 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0930 20:34:38.537698   44409 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0930 20:34:38.537706   44409 command_runner.go:130] > # the cgroup blockio controller.
	I0930 20:34:38.537713   44409 command_runner.go:130] > # blockio_config_file = ""
	I0930 20:34:38.537727   44409 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0930 20:34:38.537736   44409 command_runner.go:130] > # blockio parameters.
	I0930 20:34:38.537743   44409 command_runner.go:130] > # blockio_reload = false
	I0930 20:34:38.537757   44409 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0930 20:34:38.537767   44409 command_runner.go:130] > # irqbalance daemon.
	I0930 20:34:38.537776   44409 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0930 20:34:38.537789   44409 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0930 20:34:38.537805   44409 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0930 20:34:38.537816   44409 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0930 20:34:38.537831   44409 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0930 20:34:38.537844   44409 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0930 20:34:38.537854   44409 command_runner.go:130] > # This option supports live configuration reload.
	I0930 20:34:38.537866   44409 command_runner.go:130] > # rdt_config_file = ""
	I0930 20:34:38.537876   44409 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0930 20:34:38.537885   44409 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0930 20:34:38.537906   44409 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0930 20:34:38.537916   44409 command_runner.go:130] > # separate_pull_cgroup = ""
	I0930 20:34:38.537926   44409 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0930 20:34:38.537940   44409 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0930 20:34:38.537949   44409 command_runner.go:130] > # will be added.
	I0930 20:34:38.537955   44409 command_runner.go:130] > # default_capabilities = [
	I0930 20:34:38.537962   44409 command_runner.go:130] > # 	"CHOWN",
	I0930 20:34:38.537971   44409 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0930 20:34:38.537977   44409 command_runner.go:130] > # 	"FSETID",
	I0930 20:34:38.537986   44409 command_runner.go:130] > # 	"FOWNER",
	I0930 20:34:38.537994   44409 command_runner.go:130] > # 	"SETGID",
	I0930 20:34:38.538005   44409 command_runner.go:130] > # 	"SETUID",
	I0930 20:34:38.538016   44409 command_runner.go:130] > # 	"SETPCAP",
	I0930 20:34:38.538025   44409 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0930 20:34:38.538030   44409 command_runner.go:130] > # 	"KILL",
	I0930 20:34:38.538039   44409 command_runner.go:130] > # ]
	I0930 20:34:38.538051   44409 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0930 20:34:38.538065   44409 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0930 20:34:38.538075   44409 command_runner.go:130] > # add_inheritable_capabilities = false
	I0930 20:34:38.538089   44409 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0930 20:34:38.538103   44409 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0930 20:34:38.538112   44409 command_runner.go:130] > default_sysctls = [
	I0930 20:34:38.538120   44409 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0930 20:34:38.538128   44409 command_runner.go:130] > ]
	I0930 20:34:38.538136   44409 command_runner.go:130] > # List of devices on the host that a
	I0930 20:34:38.538149   44409 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0930 20:34:38.538158   44409 command_runner.go:130] > # allowed_devices = [
	I0930 20:34:38.538165   44409 command_runner.go:130] > # 	"/dev/fuse",
	I0930 20:34:38.538172   44409 command_runner.go:130] > # ]
	I0930 20:34:38.538181   44409 command_runner.go:130] > # List of additional devices. specified as
	I0930 20:34:38.538197   44409 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0930 20:34:38.538208   44409 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0930 20:34:38.538220   44409 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0930 20:34:38.538229   44409 command_runner.go:130] > # additional_devices = [
	I0930 20:34:38.538234   44409 command_runner.go:130] > # ]
	I0930 20:34:38.538247   44409 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0930 20:34:38.538256   44409 command_runner.go:130] > # cdi_spec_dirs = [
	I0930 20:34:38.538260   44409 command_runner.go:130] > # 	"/etc/cdi",
	I0930 20:34:38.538265   44409 command_runner.go:130] > # 	"/var/run/cdi",
	I0930 20:34:38.538271   44409 command_runner.go:130] > # ]
	I0930 20:34:38.538282   44409 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0930 20:34:38.538297   44409 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0930 20:34:38.538311   44409 command_runner.go:130] > # Defaults to false.
	I0930 20:34:38.538321   44409 command_runner.go:130] > # device_ownership_from_security_context = false
	I0930 20:34:38.538331   44409 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0930 20:34:38.538341   44409 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0930 20:34:38.538350   44409 command_runner.go:130] > # hooks_dir = [
	I0930 20:34:38.538358   44409 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0930 20:34:38.538366   44409 command_runner.go:130] > # ]
	I0930 20:34:38.538375   44409 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0930 20:34:38.538389   44409 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0930 20:34:38.538400   44409 command_runner.go:130] > # its default mounts from the following two files:
	I0930 20:34:38.538408   44409 command_runner.go:130] > #
	I0930 20:34:38.538418   44409 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0930 20:34:38.538431   44409 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0930 20:34:38.538444   44409 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0930 20:34:38.538453   44409 command_runner.go:130] > #
	I0930 20:34:38.538463   44409 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0930 20:34:38.538476   44409 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0930 20:34:38.538489   44409 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0930 20:34:38.538501   44409 command_runner.go:130] > #      only add mounts it finds in this file.
	I0930 20:34:38.538509   44409 command_runner.go:130] > #
	I0930 20:34:38.538516   44409 command_runner.go:130] > # default_mounts_file = ""
	I0930 20:34:38.538527   44409 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0930 20:34:38.538542   44409 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0930 20:34:38.538552   44409 command_runner.go:130] > pids_limit = 1024
	I0930 20:34:38.538562   44409 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0930 20:34:38.538573   44409 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0930 20:34:38.538586   44409 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0930 20:34:38.538600   44409 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0930 20:34:38.538613   44409 command_runner.go:130] > # log_size_max = -1
	I0930 20:34:38.538627   44409 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0930 20:34:38.538636   44409 command_runner.go:130] > # log_to_journald = false
	I0930 20:34:38.538645   44409 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0930 20:34:38.538656   44409 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0930 20:34:38.538667   44409 command_runner.go:130] > # Path to directory for container attach sockets.
	I0930 20:34:38.538677   44409 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0930 20:34:38.538686   44409 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0930 20:34:38.538697   44409 command_runner.go:130] > # bind_mount_prefix = ""
	I0930 20:34:38.538708   44409 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0930 20:34:38.538720   44409 command_runner.go:130] > # read_only = false
	I0930 20:34:38.538730   44409 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0930 20:34:38.538742   44409 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0930 20:34:38.538753   44409 command_runner.go:130] > # live configuration reload.
	I0930 20:34:38.538760   44409 command_runner.go:130] > # log_level = "info"
	I0930 20:34:38.538771   44409 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0930 20:34:38.538782   44409 command_runner.go:130] > # This option supports live configuration reload.
	I0930 20:34:38.538791   44409 command_runner.go:130] > # log_filter = ""
	I0930 20:34:38.538802   44409 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0930 20:34:38.538814   44409 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0930 20:34:38.538822   44409 command_runner.go:130] > # separated by comma.
	I0930 20:34:38.538836   44409 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0930 20:34:38.538855   44409 command_runner.go:130] > # uid_mappings = ""
	I0930 20:34:38.538869   44409 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0930 20:34:38.538881   44409 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0930 20:34:38.538888   44409 command_runner.go:130] > # separated by comma.
	I0930 20:34:38.538905   44409 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0930 20:34:38.538911   44409 command_runner.go:130] > # gid_mappings = ""
	I0930 20:34:38.538921   44409 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0930 20:34:38.538930   44409 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0930 20:34:38.538943   44409 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0930 20:34:38.538956   44409 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0930 20:34:38.538964   44409 command_runner.go:130] > # minimum_mappable_uid = -1
	I0930 20:34:38.538974   44409 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0930 20:34:38.538984   44409 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0930 20:34:38.538997   44409 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0930 20:34:38.539009   44409 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0930 20:34:38.539018   44409 command_runner.go:130] > # minimum_mappable_gid = -1
	I0930 20:34:38.539027   44409 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0930 20:34:38.539038   44409 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0930 20:34:38.539049   44409 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0930 20:34:38.539055   44409 command_runner.go:130] > # ctr_stop_timeout = 30
	I0930 20:34:38.539067   44409 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0930 20:34:38.539079   44409 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0930 20:34:38.539090   44409 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0930 20:34:38.539101   44409 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0930 20:34:38.539110   44409 command_runner.go:130] > drop_infra_ctr = false
	I0930 20:34:38.539120   44409 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0930 20:34:38.539135   44409 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0930 20:34:38.539149   44409 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0930 20:34:38.539159   44409 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0930 20:34:38.539169   44409 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0930 20:34:38.539181   44409 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0930 20:34:38.539191   44409 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0930 20:34:38.539204   44409 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0930 20:34:38.539213   44409 command_runner.go:130] > # shared_cpuset = ""
	I0930 20:34:38.539223   44409 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0930 20:34:38.539233   44409 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0930 20:34:38.539243   44409 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0930 20:34:38.539257   44409 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0930 20:34:38.539263   44409 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0930 20:34:38.539272   44409 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0930 20:34:38.539286   44409 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0930 20:34:38.539297   44409 command_runner.go:130] > # enable_criu_support = false
	I0930 20:34:38.539314   44409 command_runner.go:130] > # Enable/disable the generation of the container,
	I0930 20:34:38.539327   44409 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0930 20:34:38.539337   44409 command_runner.go:130] > # enable_pod_events = false
	I0930 20:34:38.539346   44409 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0930 20:34:38.539358   44409 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0930 20:34:38.539367   44409 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0930 20:34:38.539374   44409 command_runner.go:130] > # default_runtime = "runc"
	I0930 20:34:38.539386   44409 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0930 20:34:38.539401   44409 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0930 20:34:38.539419   44409 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0930 20:34:38.539431   44409 command_runner.go:130] > # creation as a file is not desired either.
	I0930 20:34:38.539447   44409 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0930 20:34:38.539459   44409 command_runner.go:130] > # the hostname is being managed dynamically.
	I0930 20:34:38.539468   44409 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0930 20:34:38.539474   44409 command_runner.go:130] > # ]
	I0930 20:34:38.539486   44409 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0930 20:34:38.539500   44409 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0930 20:34:38.539511   44409 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0930 20:34:38.539522   44409 command_runner.go:130] > # Each entry in the table should follow the format:
	I0930 20:34:38.539548   44409 command_runner.go:130] > #
	I0930 20:34:38.539559   44409 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0930 20:34:38.539567   44409 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0930 20:34:38.539590   44409 command_runner.go:130] > # runtime_type = "oci"
	I0930 20:34:38.539601   44409 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0930 20:34:38.539611   44409 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0930 20:34:38.539618   44409 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0930 20:34:38.539628   44409 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0930 20:34:38.539635   44409 command_runner.go:130] > # monitor_env = []
	I0930 20:34:38.539646   44409 command_runner.go:130] > # privileged_without_host_devices = false
	I0930 20:34:38.539657   44409 command_runner.go:130] > # allowed_annotations = []
	I0930 20:34:38.539669   44409 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0930 20:34:38.539678   44409 command_runner.go:130] > # Where:
	I0930 20:34:38.539690   44409 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0930 20:34:38.539702   44409 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0930 20:34:38.539716   44409 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0930 20:34:38.539728   44409 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0930 20:34:38.539737   44409 command_runner.go:130] > #   in $PATH.
	I0930 20:34:38.539747   44409 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0930 20:34:38.539758   44409 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0930 20:34:38.539768   44409 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0930 20:34:38.539777   44409 command_runner.go:130] > #   state.
	I0930 20:34:38.539787   44409 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0930 20:34:38.539800   44409 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0930 20:34:38.539810   44409 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0930 20:34:38.539819   44409 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0930 20:34:38.539830   44409 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0930 20:34:38.539840   44409 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0930 20:34:38.539848   44409 command_runner.go:130] > #   The currently recognized values are:
	I0930 20:34:38.539860   44409 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0930 20:34:38.539873   44409 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0930 20:34:38.539885   44409 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0930 20:34:38.539895   44409 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0930 20:34:38.539904   44409 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0930 20:34:38.539919   44409 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0930 20:34:38.539932   44409 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0930 20:34:38.539945   44409 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0930 20:34:38.539956   44409 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0930 20:34:38.539970   44409 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0930 20:34:38.539978   44409 command_runner.go:130] > #   deprecated option "conmon".
	I0930 20:34:38.539989   44409 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0930 20:34:38.540001   44409 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0930 20:34:38.540013   44409 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0930 20:34:38.540024   44409 command_runner.go:130] > #   should be moved to the container's cgroup
	I0930 20:34:38.540036   44409 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0930 20:34:38.540048   44409 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0930 20:34:38.540058   44409 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0930 20:34:38.540066   44409 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0930 20:34:38.540070   44409 command_runner.go:130] > #
	I0930 20:34:38.540084   44409 command_runner.go:130] > # Using the seccomp notifier feature:
	I0930 20:34:38.540094   44409 command_runner.go:130] > #
	I0930 20:34:38.540102   44409 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0930 20:34:38.540115   44409 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0930 20:34:38.540123   44409 command_runner.go:130] > #
	I0930 20:34:38.540131   44409 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0930 20:34:38.540144   44409 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0930 20:34:38.540149   44409 command_runner.go:130] > #
	I0930 20:34:38.540165   44409 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0930 20:34:38.540173   44409 command_runner.go:130] > # feature.
	I0930 20:34:38.540179   44409 command_runner.go:130] > #
	I0930 20:34:38.540190   44409 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0930 20:34:38.540203   44409 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0930 20:34:38.540214   44409 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0930 20:34:38.540223   44409 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0930 20:34:38.540229   44409 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0930 20:34:38.540233   44409 command_runner.go:130] > #
	I0930 20:34:38.540239   44409 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0930 20:34:38.540245   44409 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0930 20:34:38.540250   44409 command_runner.go:130] > #
	I0930 20:34:38.540255   44409 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0930 20:34:38.540262   44409 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0930 20:34:38.540266   44409 command_runner.go:130] > #
	I0930 20:34:38.540272   44409 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0930 20:34:38.540281   44409 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0930 20:34:38.540287   44409 command_runner.go:130] > # limitation.
	I0930 20:34:38.540299   44409 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0930 20:34:38.540311   44409 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0930 20:34:38.540315   44409 command_runner.go:130] > runtime_type = "oci"
	I0930 20:34:38.540319   44409 command_runner.go:130] > runtime_root = "/run/runc"
	I0930 20:34:38.540323   44409 command_runner.go:130] > runtime_config_path = ""
	I0930 20:34:38.540328   44409 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0930 20:34:38.540332   44409 command_runner.go:130] > monitor_cgroup = "pod"
	I0930 20:34:38.540336   44409 command_runner.go:130] > monitor_exec_cgroup = ""
	I0930 20:34:38.540340   44409 command_runner.go:130] > monitor_env = [
	I0930 20:34:38.540345   44409 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0930 20:34:38.540351   44409 command_runner.go:130] > ]
	I0930 20:34:38.540355   44409 command_runner.go:130] > privileged_without_host_devices = false
	I0930 20:34:38.540362   44409 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0930 20:34:38.540368   44409 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0930 20:34:38.540375   44409 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0930 20:34:38.540384   44409 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0930 20:34:38.540391   44409 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0930 20:34:38.540398   44409 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0930 20:34:38.540407   44409 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0930 20:34:38.540416   44409 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0930 20:34:38.540421   44409 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0930 20:34:38.540428   44409 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0930 20:34:38.540433   44409 command_runner.go:130] > # Example:
	I0930 20:34:38.540438   44409 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0930 20:34:38.540444   44409 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0930 20:34:38.540449   44409 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0930 20:34:38.540455   44409 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0930 20:34:38.540459   44409 command_runner.go:130] > # cpuset = 0
	I0930 20:34:38.540465   44409 command_runner.go:130] > # cpushares = "0-1"
	I0930 20:34:38.540468   44409 command_runner.go:130] > # Where:
	I0930 20:34:38.540473   44409 command_runner.go:130] > # The workload name is workload-type.
	I0930 20:34:38.540481   44409 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0930 20:34:38.540486   44409 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0930 20:34:38.540494   44409 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0930 20:34:38.540501   44409 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0930 20:34:38.540508   44409 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0930 20:34:38.540513   44409 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0930 20:34:38.540521   44409 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0930 20:34:38.540525   44409 command_runner.go:130] > # Default value is set to true
	I0930 20:34:38.540530   44409 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0930 20:34:38.540535   44409 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0930 20:34:38.540539   44409 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0930 20:34:38.540545   44409 command_runner.go:130] > # Default value is set to 'false'
	I0930 20:34:38.540549   44409 command_runner.go:130] > # disable_hostport_mapping = false
	I0930 20:34:38.540559   44409 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0930 20:34:38.540561   44409 command_runner.go:130] > #
	I0930 20:34:38.540567   44409 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0930 20:34:38.540578   44409 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0930 20:34:38.540587   44409 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0930 20:34:38.540599   44409 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0930 20:34:38.540608   44409 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0930 20:34:38.540613   44409 command_runner.go:130] > [crio.image]
	I0930 20:34:38.540622   44409 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0930 20:34:38.540628   44409 command_runner.go:130] > # default_transport = "docker://"
	I0930 20:34:38.540637   44409 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0930 20:34:38.540646   44409 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0930 20:34:38.540652   44409 command_runner.go:130] > # global_auth_file = ""
	I0930 20:34:38.540660   44409 command_runner.go:130] > # The image used to instantiate infra containers.
	I0930 20:34:38.540665   44409 command_runner.go:130] > # This option supports live configuration reload.
	I0930 20:34:38.540670   44409 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0930 20:34:38.540676   44409 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0930 20:34:38.540681   44409 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0930 20:34:38.540686   44409 command_runner.go:130] > # This option supports live configuration reload.
	I0930 20:34:38.540693   44409 command_runner.go:130] > # pause_image_auth_file = ""
	I0930 20:34:38.540699   44409 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0930 20:34:38.540707   44409 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0930 20:34:38.540714   44409 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0930 20:34:38.540722   44409 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0930 20:34:38.540726   44409 command_runner.go:130] > # pause_command = "/pause"
	I0930 20:34:38.540731   44409 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0930 20:34:38.540739   44409 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0930 20:34:38.540744   44409 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0930 20:34:38.540750   44409 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0930 20:34:38.540755   44409 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0930 20:34:38.540762   44409 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0930 20:34:38.540766   44409 command_runner.go:130] > # pinned_images = [
	I0930 20:34:38.540769   44409 command_runner.go:130] > # ]
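	The pinned_images comment above distinguishes exact, glob (trailing *) and keyword (wildcards on both ends) patterns. The following Go sketch only illustrates that three-way distinction with a made-up helper; it is not CRI-O's actual matcher:

	package main

	import (
		"fmt"
		"strings"
	)

	// matchesPinnedPattern is an illustrative sketch of the three pattern styles
	// described in the pinned_images comment: exact, glob ("prefix*"), and
	// keyword ("*substring*"). It is not CRI-O's real implementation.
	func matchesPinnedPattern(image, pattern string) bool {
		switch {
		case strings.HasPrefix(pattern, "*") && strings.HasSuffix(pattern, "*"):
			return strings.Contains(image, strings.Trim(pattern, "*")) // keyword match
		case strings.HasSuffix(pattern, "*"):
			return strings.HasPrefix(image, strings.TrimSuffix(pattern, "*")) // glob match
		default:
			return image == pattern // exact match must cover the entire name
		}
	}

	func main() {
		img := "registry.k8s.io/pause:3.10" // the pause_image configured below
		for _, p := range []string{"registry.k8s.io/pause:3.10", "registry.k8s.io/*", "*pause*"} {
			fmt.Printf("%-30s -> %v\n", p, matchesPinnedPattern(img, p))
		}
	}
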
	I0930 20:34:38.540775   44409 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0930 20:34:38.540783   44409 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0930 20:34:38.540789   44409 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0930 20:34:38.540795   44409 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0930 20:34:38.540801   44409 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0930 20:34:38.540807   44409 command_runner.go:130] > # signature_policy = ""
	I0930 20:34:38.540812   44409 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0930 20:34:38.540818   44409 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0930 20:34:38.540826   44409 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0930 20:34:38.540832   44409 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0930 20:34:38.540839   44409 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0930 20:34:38.540844   44409 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0930 20:34:38.540850   44409 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0930 20:34:38.540856   44409 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0930 20:34:38.540862   44409 command_runner.go:130] > # changing them here.
	I0930 20:34:38.540866   44409 command_runner.go:130] > # insecure_registries = [
	I0930 20:34:38.540869   44409 command_runner.go:130] > # ]
	I0930 20:34:38.540874   44409 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0930 20:34:38.540881   44409 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0930 20:34:38.540885   44409 command_runner.go:130] > # image_volumes = "mkdir"
	I0930 20:34:38.540891   44409 command_runner.go:130] > # Temporary directory to use for storing big files
	I0930 20:34:38.540896   44409 command_runner.go:130] > # big_files_temporary_dir = ""
	I0930 20:34:38.540902   44409 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0930 20:34:38.540907   44409 command_runner.go:130] > # CNI plugins.
	I0930 20:34:38.540911   44409 command_runner.go:130] > [crio.network]
	I0930 20:34:38.540919   44409 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0930 20:34:38.540924   44409 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0930 20:34:38.540930   44409 command_runner.go:130] > # cni_default_network = ""
	I0930 20:34:38.540935   44409 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0930 20:34:38.540941   44409 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0930 20:34:38.540947   44409 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0930 20:34:38.540952   44409 command_runner.go:130] > # plugin_dirs = [
	I0930 20:34:38.540956   44409 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0930 20:34:38.540959   44409 command_runner.go:130] > # ]
	I0930 20:34:38.540965   44409 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0930 20:34:38.540969   44409 command_runner.go:130] > [crio.metrics]
	I0930 20:34:38.540973   44409 command_runner.go:130] > # Globally enable or disable metrics support.
	I0930 20:34:38.540979   44409 command_runner.go:130] > enable_metrics = true
	I0930 20:34:38.540984   44409 command_runner.go:130] > # Specify enabled metrics collectors.
	I0930 20:34:38.540988   44409 command_runner.go:130] > # Per default all metrics are enabled.
	I0930 20:34:38.540995   44409 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0930 20:34:38.541000   44409 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0930 20:34:38.541008   44409 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0930 20:34:38.541012   44409 command_runner.go:130] > # metrics_collectors = [
	I0930 20:34:38.541018   44409 command_runner.go:130] > # 	"operations",
	I0930 20:34:38.541023   44409 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0930 20:34:38.541027   44409 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0930 20:34:38.541031   44409 command_runner.go:130] > # 	"operations_errors",
	I0930 20:34:38.541035   44409 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0930 20:34:38.541039   44409 command_runner.go:130] > # 	"image_pulls_by_name",
	I0930 20:34:38.541043   44409 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0930 20:34:38.541049   44409 command_runner.go:130] > # 	"image_pulls_failures",
	I0930 20:34:38.541053   44409 command_runner.go:130] > # 	"image_pulls_successes",
	I0930 20:34:38.541059   44409 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0930 20:34:38.541063   44409 command_runner.go:130] > # 	"image_layer_reuse",
	I0930 20:34:38.541070   44409 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0930 20:34:38.541074   44409 command_runner.go:130] > # 	"containers_oom_total",
	I0930 20:34:38.541080   44409 command_runner.go:130] > # 	"containers_oom",
	I0930 20:34:38.541084   44409 command_runner.go:130] > # 	"processes_defunct",
	I0930 20:34:38.541087   44409 command_runner.go:130] > # 	"operations_total",
	I0930 20:34:38.541091   44409 command_runner.go:130] > # 	"operations_latency_seconds",
	I0930 20:34:38.541098   44409 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0930 20:34:38.541102   44409 command_runner.go:130] > # 	"operations_errors_total",
	I0930 20:34:38.541108   44409 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0930 20:34:38.541112   44409 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0930 20:34:38.541116   44409 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0930 20:34:38.541122   44409 command_runner.go:130] > # 	"image_pulls_success_total",
	I0930 20:34:38.541126   44409 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0930 20:34:38.541130   44409 command_runner.go:130] > # 	"containers_oom_count_total",
	I0930 20:34:38.541136   44409 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0930 20:34:38.541142   44409 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0930 20:34:38.541145   44409 command_runner.go:130] > # ]
	I0930 20:34:38.541149   44409 command_runner.go:130] > # The port on which the metrics server will listen.
	I0930 20:34:38.541155   44409 command_runner.go:130] > # metrics_port = 9090
	I0930 20:34:38.541160   44409 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0930 20:34:38.541166   44409 command_runner.go:130] > # metrics_socket = ""
	I0930 20:34:38.541171   44409 command_runner.go:130] > # The certificate for the secure metrics server.
	I0930 20:34:38.541176   44409 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0930 20:34:38.541185   44409 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0930 20:34:38.541189   44409 command_runner.go:130] > # certificate on any modification event.
	I0930 20:34:38.541197   44409 command_runner.go:130] > # metrics_cert = ""
	I0930 20:34:38.541205   44409 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0930 20:34:38.541215   44409 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0930 20:34:38.541223   44409 command_runner.go:130] > # metrics_key = ""
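	Since enable_metrics = true and the default metrics_port is 9090 in the config above, the endpoint can be scraped like any Prometheus target. A minimal Go sketch, assuming the conventional /metrics path (the path itself is not stated in this log):

	package main

	import (
		"bufio"
		"fmt"
		"net/http"
		"strings"
	)

	func main() {
		// enable_metrics = true and metrics_port defaults to 9090 in the config
		// above; the /metrics path is the usual Prometheus convention and is an
		// assumption here, not something printed in the log.
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			fmt.Println("scrape failed:", err)
			return
		}
		defer resp.Body.Close()

		sc := bufio.NewScanner(resp.Body)
		sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some exposition lines can be long
		for sc.Scan() {
			if line := sc.Text(); strings.Contains(line, "image_pulls") {
				fmt.Println(line) // print only the image-pull related series
			}
		}
	}
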
	I0930 20:34:38.541233   44409 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0930 20:34:38.541241   44409 command_runner.go:130] > [crio.tracing]
	I0930 20:34:38.541248   44409 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0930 20:34:38.541254   44409 command_runner.go:130] > # enable_tracing = false
	I0930 20:34:38.541262   44409 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0930 20:34:38.541268   44409 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0930 20:34:38.541281   44409 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0930 20:34:38.541289   44409 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0930 20:34:38.541298   44409 command_runner.go:130] > # CRI-O NRI configuration.
	I0930 20:34:38.541309   44409 command_runner.go:130] > [crio.nri]
	I0930 20:34:38.541319   44409 command_runner.go:130] > # Globally enable or disable NRI.
	I0930 20:34:38.541324   44409 command_runner.go:130] > # enable_nri = false
	I0930 20:34:38.541332   44409 command_runner.go:130] > # NRI socket to listen on.
	I0930 20:34:38.541342   44409 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0930 20:34:38.541349   44409 command_runner.go:130] > # NRI plugin directory to use.
	I0930 20:34:38.541359   44409 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0930 20:34:38.541366   44409 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0930 20:34:38.541374   44409 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0930 20:34:38.541380   44409 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0930 20:34:38.541387   44409 command_runner.go:130] > # nri_disable_connections = false
	I0930 20:34:38.541394   44409 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0930 20:34:38.541399   44409 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0930 20:34:38.541405   44409 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0930 20:34:38.541409   44409 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0930 20:34:38.541417   44409 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0930 20:34:38.541421   44409 command_runner.go:130] > [crio.stats]
	I0930 20:34:38.541429   44409 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0930 20:34:38.541434   44409 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0930 20:34:38.541438   44409 command_runner.go:130] > # stats_collection_period = 0
	I0930 20:34:38.541458   44409 command_runner.go:130] ! time="2024-09-30 20:34:38.499885576Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0930 20:34:38.541480   44409 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0930 20:34:38.541550   44409 cni.go:84] Creating CNI manager for ""
	I0930 20:34:38.541563   44409 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0930 20:34:38.541573   44409 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 20:34:38.541607   44409 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-103579 NodeName:multinode-103579 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 20:34:38.541724   44409 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-103579"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
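	As a quick sanity check of the values in the generated kubeadm config above (pod CIDR 10.244.0.0/16, service CIDR 10.96.0.0/12, node IP 192.168.39.58), a small standard-library Go sketch confirms the node address sits outside both ranges:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Values taken from the generated kubeadm config above.
		nodeIP := net.ParseIP("192.168.39.58")
		for _, cidr := range []string{"10.244.0.0/16", "10.96.0.0/12"} {
			_, ipnet, err := net.ParseCIDR(cidr)
			if err != nil {
				fmt.Println("bad CIDR:", cidr, err)
				continue
			}
			// The node IP should live outside both the pod and service ranges,
			// otherwise in-cluster routing would clash with the host address.
			fmt.Printf("%-15s contains node IP %s: %v\n", cidr, nodeIP, ipnet.Contains(nodeIP))
		}
	}
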
	
	I0930 20:34:38.541779   44409 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 20:34:38.551912   44409 command_runner.go:130] > kubeadm
	I0930 20:34:38.551939   44409 command_runner.go:130] > kubectl
	I0930 20:34:38.551945   44409 command_runner.go:130] > kubelet
	I0930 20:34:38.551959   44409 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 20:34:38.552017   44409 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 20:34:38.561170   44409 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0930 20:34:38.577026   44409 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 20:34:38.592696   44409 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0930 20:34:38.609641   44409 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I0930 20:34:38.613982   44409 command_runner.go:130] > 192.168.39.58	control-plane.minikube.internal
	I0930 20:34:38.614073   44409 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:34:38.771880   44409 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:34:38.786032   44409 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579 for IP: 192.168.39.58
	I0930 20:34:38.786057   44409 certs.go:194] generating shared ca certs ...
	I0930 20:34:38.786084   44409 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:34:38.786253   44409 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 20:34:38.786311   44409 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 20:34:38.786335   44409 certs.go:256] generating profile certs ...
	I0930 20:34:38.786443   44409 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579/client.key
	I0930 20:34:38.786526   44409 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579/apiserver.key.bac6694b
	I0930 20:34:38.786579   44409 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579/proxy-client.key
	I0930 20:34:38.786592   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 20:34:38.786611   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 20:34:38.786630   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 20:34:38.786649   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 20:34:38.786668   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 20:34:38.786688   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 20:34:38.786706   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 20:34:38.786726   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 20:34:38.786794   44409 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 20:34:38.786834   44409 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 20:34:38.786848   44409 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 20:34:38.786884   44409 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 20:34:38.786917   44409 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 20:34:38.786947   44409 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 20:34:38.787000   44409 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:34:38.787037   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:34:38.787056   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem -> /usr/share/ca-certificates/14875.pem
	I0930 20:34:38.787075   44409 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> /usr/share/ca-certificates/148752.pem
	I0930 20:34:38.787662   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 20:34:38.811486   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 20:34:38.835550   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 20:34:38.860012   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 20:34:38.883593   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0930 20:34:38.906912   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 20:34:38.931844   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 20:34:38.957905   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/multinode-103579/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 20:34:38.983030   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 20:34:39.007696   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 20:34:39.031307   44409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 20:34:39.056001   44409 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 20:34:39.072665   44409 ssh_runner.go:195] Run: openssl version
	I0930 20:34:39.078085   44409 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0930 20:34:39.078174   44409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 20:34:39.088950   44409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:34:39.093222   44409 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:34:39.093271   44409 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:34:39.093322   44409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:34:39.098358   44409 command_runner.go:130] > b5213941
	I0930 20:34:39.098507   44409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 20:34:39.108201   44409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 20:34:39.118829   44409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 20:34:39.123386   44409 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 20:34:39.123427   44409 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 20:34:39.123469   44409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 20:34:39.129330   44409 command_runner.go:130] > 51391683
	I0930 20:34:39.129414   44409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 20:34:39.140325   44409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 20:34:39.151307   44409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 20:34:39.155593   44409 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 20:34:39.155623   44409 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 20:34:39.155680   44409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 20:34:39.161510   44409 command_runner.go:130] > 3ec20f2e
	I0930 20:34:39.161583   44409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
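	The three pairs of Run lines above compute each certificate's OpenSSL subject hash and link it as /etc/ssl/certs/<hash>.0. A rough local equivalent in Go, shelling out the same way but without the sudo/ssh layer minikube uses (paths copied from the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		// Mirrors the "openssl x509 -hash -noout -in <pem>" + "ln -fs" steps in
		// the log above, but locally and without sudo.
		pem := "/usr/share/ca-certificates/minikubeCA.pem"

		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			fmt.Println("openssl failed:", err)
			return
		}
		hash := strings.TrimSpace(string(out))

		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// os.Symlink fails if the link already exists, which is why the log uses
		// "test -L ... || ln -fs ..."; this sketch just reports the outcome.
		if err := os.Symlink(pem, link); err != nil {
			fmt.Println("symlink not created:", err)
			return
		}
		fmt.Println("linked", link, "->", pem)
	}
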
	I0930 20:34:39.171797   44409 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 20:34:39.176058   44409 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 20:34:39.176084   44409 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0930 20:34:39.176090   44409 command_runner.go:130] > Device: 253,1	Inode: 9431080     Links: 1
	I0930 20:34:39.176096   44409 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0930 20:34:39.176104   44409 command_runner.go:130] > Access: 2024-09-30 20:27:28.130937113 +0000
	I0930 20:34:39.176110   44409 command_runner.go:130] > Modify: 2024-09-30 20:27:28.130937113 +0000
	I0930 20:34:39.176114   44409 command_runner.go:130] > Change: 2024-09-30 20:27:28.130937113 +0000
	I0930 20:34:39.176119   44409 command_runner.go:130] >  Birth: 2024-09-30 20:27:28.130937113 +0000
	I0930 20:34:39.176169   44409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 20:34:39.181615   44409 command_runner.go:130] > Certificate will not expire
	I0930 20:34:39.181690   44409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 20:34:39.187194   44409 command_runner.go:130] > Certificate will not expire
	I0930 20:34:39.187255   44409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 20:34:39.192946   44409 command_runner.go:130] > Certificate will not expire
	I0930 20:34:39.193018   44409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 20:34:39.198291   44409 command_runner.go:130] > Certificate will not expire
	I0930 20:34:39.198463   44409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 20:34:39.203823   44409 command_runner.go:130] > Certificate will not expire
	I0930 20:34:39.203891   44409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0930 20:34:39.209906   44409 command_runner.go:130] > Certificate will not expire
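	Each "openssl x509 -checkend 86400" run above asserts that the certificate will still be valid 24 hours from now. A standard-library Go equivalent, using one of the paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Equivalent of "openssl x509 -noout -in <crt> -checkend 86400":
		// the certificate must not expire within the next 24 hours.
		path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"

		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println("parse failed:", err)
			return
		}
		if time.Now().Add(24 * time.Hour).Before(cert.NotAfter) {
			fmt.Println("Certificate will not expire")
		} else {
			fmt.Println("Certificate will expire")
		}
	}
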
	I0930 20:34:39.209985   44409 kubeadm.go:392] StartCluster: {Name:multinode-103579 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-103579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.237 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:34:39.210077   44409 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 20:34:39.210126   44409 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 20:34:39.244119   44409 command_runner.go:130] > 4bc848d9234a844eaab9fc26b48d7f60ed55e609530c8756b11f1819637f2bec
	I0930 20:34:39.244143   44409 command_runner.go:130] > 39eb244acec3c9751b53fffd3102949734163c8b9530270bb170ba702e1cd2fe
	I0930 20:34:39.244149   44409 command_runner.go:130] > 0974451661f0737436a583f454afc0982a4121c86e7d2d0334edbcd95bfecc78
	I0930 20:34:39.244196   44409 command_runner.go:130] > cacfb622468b005a23952888b905e40fd74281c9335143ceeb7ea71797aa3bed
	I0930 20:34:39.244210   44409 command_runner.go:130] > bc4433f6912398db4cb88e66d4cb7193f26ce5c3706dcb711cb87b571a031711
	I0930 20:34:39.244216   44409 command_runner.go:130] > 80432178b988bc0350374fa988e6b8ce6388ba0c6ee71b8272138b689ab81863
	I0930 20:34:39.244227   44409 command_runner.go:130] > 25b434fd4ab00363a4e33c578eacb078c2d21fe3261e459bf946aab36e52e306
	I0930 20:34:39.244239   44409 command_runner.go:130] > 9596d6363e892d96ae7a53ca5a2dc7604d41239cb1f8bcc396dc8768356be785
	I0930 20:34:39.245623   44409 cri.go:89] found id: "4bc848d9234a844eaab9fc26b48d7f60ed55e609530c8756b11f1819637f2bec"
	I0930 20:34:39.245640   44409 cri.go:89] found id: "39eb244acec3c9751b53fffd3102949734163c8b9530270bb170ba702e1cd2fe"
	I0930 20:34:39.245644   44409 cri.go:89] found id: "0974451661f0737436a583f454afc0982a4121c86e7d2d0334edbcd95bfecc78"
	I0930 20:34:39.245647   44409 cri.go:89] found id: "cacfb622468b005a23952888b905e40fd74281c9335143ceeb7ea71797aa3bed"
	I0930 20:34:39.245650   44409 cri.go:89] found id: "bc4433f6912398db4cb88e66d4cb7193f26ce5c3706dcb711cb87b571a031711"
	I0930 20:34:39.245654   44409 cri.go:89] found id: "80432178b988bc0350374fa988e6b8ce6388ba0c6ee71b8272138b689ab81863"
	I0930 20:34:39.245656   44409 cri.go:89] found id: "25b434fd4ab00363a4e33c578eacb078c2d21fe3261e459bf946aab36e52e306"
	I0930 20:34:39.245658   44409 cri.go:89] found id: "9596d6363e892d96ae7a53ca5a2dc7604d41239cb1f8bcc396dc8768356be785"
	I0930 20:34:39.245661   44409 cri.go:89] found id: ""
	I0930 20:34:39.245698   44409 ssh_runner.go:195] Run: sudo runc list -f json
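	The container IDs listed above come from "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system". A rough Go sketch issuing the same query locally (again without the sudo/ssh indirection minikube's ssh_runner adds):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same query the log shows: all kube-system containers, IDs only.
		out, err := exec.Command("crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}
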
	
	
	==> CRI-O <==
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.714876807Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728733714857595,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a2dd909a-3bb9-43ac-a63e-4b8f3a9fb2f1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.715514340Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f519c5b4-4de2-4012-a45d-865ef0e935cc name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.715572485Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f519c5b4-4de2-4012-a45d-865ef0e935cc name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.715912246Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1efa514a1839d6c19de59ead297fc8b01dbadad2701663bd1b23f5cb33f2e4a4,PodSandboxId:8aa99d529ad115c8600e78e936d7a72a8f0044f6204d747ffb725fbd407fc1cd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727728519230342275,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxgwt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb156b23-97bc-4a08-b803-83d0793ed594,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb6c1c361bf4cf527748bfa59bca94dacb6779c506eef5330be08ee680de5d8,PodSandboxId:1d60728a0dd9a384a5e9b0539847da880ce3bd226fbfb430d5a1bad13a6ca1ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727728485660684306,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4m4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd9251-f158-4fdd-bc20-d1aac8981add,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d109a5da6112f48b12c4fdce7ca5328f2254fa60babfc88676f3a279e018ecd,PodSandboxId:9745c762550dee5bdc872905de937d5225bb9f73af060f4645cc1b7b016bc91a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727728485589953216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w95cn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdcac5d3-bdc6-45e9-b76a-8535bedc2c03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b216cdc18ef72b2e8c0cde275f96b74f5a451fea3294520dcc3a5ee59c0b93,PodSandboxId:a8475f15ba470aec7327301e7f6b72c090f1fc07ffacbdff3a5a2c583fa0ea22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727728485522511041,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dlpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d77742-c2e1-4613-bb50-3e73821120e6,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f005915b39da2769fc1c0c889208feb792bc405352af7cc3ae08e902e9fc4b0f,PodSandboxId:c32494dab655a08e2019c7ae5bbd41be6cb978f826ef6766ed7c1b2c067d2810,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727728485478810965,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e99637d1-a2fe-4459-b589-8f5743eae68b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4131c58d0bf44d1303dc4391ae014e69c758eba279b4be21c3f4a473bed9d5,PodSandboxId:3fcfde39b4a7efadc1251b3c40db99526e55c3007985a79cd9bc64f406f2085f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727728481691841019,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5782d24096fc43d20beab353275b85d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e58cad6d23e0be49e31a60ca54dad76f241fe59124086b531b42b93dd18e8a,PodSandboxId:f93877fe64e8a0dbdacdeb08bf787c2d860f3a670234700133485c883dd7af5b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727728481637377689,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a14883015d1188405ff52843d0214c8,},Annotations:map[string]string{io.kub
ernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6186b2a7a37ce6735a472a1591ff4137e2c1299aae5d9317852e7dfa79aaacd9,PodSandboxId:3c1fe4014918e36cef377a708aa2633c728055202691cb4cfa8e87648aae124f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727728481638178727,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81e0d32d0df713dd227cff0d41ac7dc6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8c696ac5ff6ecfcd8642f495e8c1946c568c1bebf2360280e1d4acc5ceaaba2,PodSandboxId:18859e9937fe831b85b17e2394a060f964a14ad0419fed4e876d4912fa2d5ad1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727728481600637174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9201a946a47fbfe2d322a33a89ecce0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9805d52edb30c2b44b5f802f59587a99803e805f55ba70004b3ecabc38c7e9ce,PodSandboxId:89ca8dd277b3eec6b63261217716c6254700f2c8b5102a207f0bcb793367f623,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727728163557050694,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxgwt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb156b23-97bc-4a08-b803-83d0793ed594,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc848d9234a844eaab9fc26b48d7f60ed55e609530c8756b11f1819637f2bec,PodSandboxId:09a4b750bc3f4c0d15e716def52649a1bb78d034a4db3e3d688120e83b858eb4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727728105122787470,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w95cn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdcac5d3-bdc6-45e9-b76a-8535bedc2c03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39eb244acec3c9751b53fffd3102949734163c8b9530270bb170ba702e1cd2fe,PodSandboxId:806660b0a1105f7fba7e1a10685769a1d90a398c41c8ad27cf891984ea5483b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727728105043395217,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e99637d1-a2fe-4459-b589-8f5743eae68b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0974451661f0737436a583f454afc0982a4121c86e7d2d0334edbcd95bfecc78,PodSandboxId:c9f44ae0002ed37f7487b40a31647dc28c91f7ad2bedc6e541977592f8268116,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727728064631404452,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dlpd,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: a6d77742-c2e1-4613-bb50-3e73821120e6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cacfb622468b005a23952888b905e40fd74281c9335143ceeb7ea71797aa3bed,PodSandboxId:7fb3cdf08702f148cabaaa0e309eb8184575c942f237de3c77d8cc53c4aeb668,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727728063431806710,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4m4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd9251-f158-4fdd-bc20
-d1aac8981add,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc4433f6912398db4cb88e66d4cb7193f26ce5c3706dcb711cb87b571a031711,PodSandboxId:43fd4ce185d2e1a7a4c956fb10f9f06536d1f77f8c1f5d943ac72029d955ea54,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727728052374672168,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a14883
015d1188405ff52843d0214c8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80432178b988bc0350374fa988e6b8ce6388ba0c6ee71b8272138b689ab81863,PodSandboxId:1913067266c997e68460587d0a1b1ea75ba0718e2c43734fe37c0fdf75a04e38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727728052344829276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9201a946a47fbfe2d322a3
3a89ecce0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b434fd4ab00363a4e33c578eacb078c2d21fe3261e459bf946aab36e52e306,PodSandboxId:8a2c2a7613b9b99f8d9a3a4b39dd1232192dd6e9a19e9a82afa1e1290e42ce85,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727728052292522260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81e0d32d0df713dd227cff0d41ac7dc6,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9596d6363e892d96ae7a53ca5a2dc7604d41239cb1f8bcc396dc8768356be785,PodSandboxId:d5cbd01102f7f062277ee18f1089f1a3ab960c046e1a57cbbf7451945964b141,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727728052245407035,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5782d24096fc43d20beab353275b85d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f519c5b4-4de2-4012-a45d-865ef0e935cc name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.760330323Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=65b98302-5721-4a1e-abb0-fd4c413278d8 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.760426180Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=65b98302-5721-4a1e-abb0-fd4c413278d8 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.761770556Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=758d4644-eb55-4aa0-b2ab-6eeb10dd9a9a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.762348898Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728733762324615,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=758d4644-eb55-4aa0-b2ab-6eeb10dd9a9a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.763151953Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=416ad38e-43d9-4c15-8c0b-f2d78697ca11 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.763230641Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=416ad38e-43d9-4c15-8c0b-f2d78697ca11 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.763736730Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1efa514a1839d6c19de59ead297fc8b01dbadad2701663bd1b23f5cb33f2e4a4,PodSandboxId:8aa99d529ad115c8600e78e936d7a72a8f0044f6204d747ffb725fbd407fc1cd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727728519230342275,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxgwt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb156b23-97bc-4a08-b803-83d0793ed594,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb6c1c361bf4cf527748bfa59bca94dacb6779c506eef5330be08ee680de5d8,PodSandboxId:1d60728a0dd9a384a5e9b0539847da880ce3bd226fbfb430d5a1bad13a6ca1ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727728485660684306,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4m4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd9251-f158-4fdd-bc20-d1aac8981add,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d109a5da6112f48b12c4fdce7ca5328f2254fa60babfc88676f3a279e018ecd,PodSandboxId:9745c762550dee5bdc872905de937d5225bb9f73af060f4645cc1b7b016bc91a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727728485589953216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w95cn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdcac5d3-bdc6-45e9-b76a-8535bedc2c03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b216cdc18ef72b2e8c0cde275f96b74f5a451fea3294520dcc3a5ee59c0b93,PodSandboxId:a8475f15ba470aec7327301e7f6b72c090f1fc07ffacbdff3a5a2c583fa0ea22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727728485522511041,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dlpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d77742-c2e1-4613-bb50-3e73821120e6,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f005915b39da2769fc1c0c889208feb792bc405352af7cc3ae08e902e9fc4b0f,PodSandboxId:c32494dab655a08e2019c7ae5bbd41be6cb978f826ef6766ed7c1b2c067d2810,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727728485478810965,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e99637d1-a2fe-4459-b589-8f5743eae68b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4131c58d0bf44d1303dc4391ae014e69c758eba279b4be21c3f4a473bed9d5,PodSandboxId:3fcfde39b4a7efadc1251b3c40db99526e55c3007985a79cd9bc64f406f2085f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727728481691841019,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5782d24096fc43d20beab353275b85d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e58cad6d23e0be49e31a60ca54dad76f241fe59124086b531b42b93dd18e8a,PodSandboxId:f93877fe64e8a0dbdacdeb08bf787c2d860f3a670234700133485c883dd7af5b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727728481637377689,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a14883015d1188405ff52843d0214c8,},Annotations:map[string]string{io.kub
ernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6186b2a7a37ce6735a472a1591ff4137e2c1299aae5d9317852e7dfa79aaacd9,PodSandboxId:3c1fe4014918e36cef377a708aa2633c728055202691cb4cfa8e87648aae124f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727728481638178727,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81e0d32d0df713dd227cff0d41ac7dc6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8c696ac5ff6ecfcd8642f495e8c1946c568c1bebf2360280e1d4acc5ceaaba2,PodSandboxId:18859e9937fe831b85b17e2394a060f964a14ad0419fed4e876d4912fa2d5ad1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727728481600637174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9201a946a47fbfe2d322a33a89ecce0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9805d52edb30c2b44b5f802f59587a99803e805f55ba70004b3ecabc38c7e9ce,PodSandboxId:89ca8dd277b3eec6b63261217716c6254700f2c8b5102a207f0bcb793367f623,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727728163557050694,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxgwt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb156b23-97bc-4a08-b803-83d0793ed594,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc848d9234a844eaab9fc26b48d7f60ed55e609530c8756b11f1819637f2bec,PodSandboxId:09a4b750bc3f4c0d15e716def52649a1bb78d034a4db3e3d688120e83b858eb4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727728105122787470,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w95cn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdcac5d3-bdc6-45e9-b76a-8535bedc2c03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39eb244acec3c9751b53fffd3102949734163c8b9530270bb170ba702e1cd2fe,PodSandboxId:806660b0a1105f7fba7e1a10685769a1d90a398c41c8ad27cf891984ea5483b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727728105043395217,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e99637d1-a2fe-4459-b589-8f5743eae68b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0974451661f0737436a583f454afc0982a4121c86e7d2d0334edbcd95bfecc78,PodSandboxId:c9f44ae0002ed37f7487b40a31647dc28c91f7ad2bedc6e541977592f8268116,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727728064631404452,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dlpd,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: a6d77742-c2e1-4613-bb50-3e73821120e6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cacfb622468b005a23952888b905e40fd74281c9335143ceeb7ea71797aa3bed,PodSandboxId:7fb3cdf08702f148cabaaa0e309eb8184575c942f237de3c77d8cc53c4aeb668,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727728063431806710,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4m4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd9251-f158-4fdd-bc20
-d1aac8981add,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc4433f6912398db4cb88e66d4cb7193f26ce5c3706dcb711cb87b571a031711,PodSandboxId:43fd4ce185d2e1a7a4c956fb10f9f06536d1f77f8c1f5d943ac72029d955ea54,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727728052374672168,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a14883
015d1188405ff52843d0214c8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80432178b988bc0350374fa988e6b8ce6388ba0c6ee71b8272138b689ab81863,PodSandboxId:1913067266c997e68460587d0a1b1ea75ba0718e2c43734fe37c0fdf75a04e38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727728052344829276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9201a946a47fbfe2d322a3
3a89ecce0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b434fd4ab00363a4e33c578eacb078c2d21fe3261e459bf946aab36e52e306,PodSandboxId:8a2c2a7613b9b99f8d9a3a4b39dd1232192dd6e9a19e9a82afa1e1290e42ce85,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727728052292522260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81e0d32d0df713dd227cff0d41ac7dc6,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9596d6363e892d96ae7a53ca5a2dc7604d41239cb1f8bcc396dc8768356be785,PodSandboxId:d5cbd01102f7f062277ee18f1089f1a3ab960c046e1a57cbbf7451945964b141,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727728052245407035,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5782d24096fc43d20beab353275b85d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=416ad38e-43d9-4c15-8c0b-f2d78697ca11 name=/runtime.v1.RuntimeService/ListContainers
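	The Version, ImageFsInfo and ListContainers entries above are CRI API calls being served by CRI-O while this log was captured (most likely the kubelet and the minikube log collection polling the runtime). If the same runtime state needs to be inspected by hand, the queries can be reproduced with crictl inside the node; a minimal sketch, assuming the default CRI-O socket path and the multinode-103579 profile name taken from the log lines above:

	    # Reproduce the CRI queries recorded above against CRI-O on the node.
	    # The socket path is the CRI-O default; adjust --runtime-endpoint if yours differs.
	    minikube -p multinode-103579 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	    minikube -p multinode-103579 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
	    minikube -p multinode-103579 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a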
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.802817463Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e5e3325f-a955-4e24-828f-6d043dfd5cfb name=/runtime.v1.RuntimeService/Version
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.802911298Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e5e3325f-a955-4e24-828f-6d043dfd5cfb name=/runtime.v1.RuntimeService/Version
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.804219750Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c383172c-5879-4804-ae77-27d50750a2c9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.804615895Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728733804594301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c383172c-5879-4804-ae77-27d50750a2c9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.805125048Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=360df8f1-af66-4133-9831-37d78788a241 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.805193733Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=360df8f1-af66-4133-9831-37d78788a241 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.805536865Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1efa514a1839d6c19de59ead297fc8b01dbadad2701663bd1b23f5cb33f2e4a4,PodSandboxId:8aa99d529ad115c8600e78e936d7a72a8f0044f6204d747ffb725fbd407fc1cd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727728519230342275,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxgwt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb156b23-97bc-4a08-b803-83d0793ed594,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb6c1c361bf4cf527748bfa59bca94dacb6779c506eef5330be08ee680de5d8,PodSandboxId:1d60728a0dd9a384a5e9b0539847da880ce3bd226fbfb430d5a1bad13a6ca1ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727728485660684306,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4m4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd9251-f158-4fdd-bc20-d1aac8981add,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d109a5da6112f48b12c4fdce7ca5328f2254fa60babfc88676f3a279e018ecd,PodSandboxId:9745c762550dee5bdc872905de937d5225bb9f73af060f4645cc1b7b016bc91a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727728485589953216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w95cn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdcac5d3-bdc6-45e9-b76a-8535bedc2c03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b216cdc18ef72b2e8c0cde275f96b74f5a451fea3294520dcc3a5ee59c0b93,PodSandboxId:a8475f15ba470aec7327301e7f6b72c090f1fc07ffacbdff3a5a2c583fa0ea22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727728485522511041,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dlpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d77742-c2e1-4613-bb50-3e73821120e6,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f005915b39da2769fc1c0c889208feb792bc405352af7cc3ae08e902e9fc4b0f,PodSandboxId:c32494dab655a08e2019c7ae5bbd41be6cb978f826ef6766ed7c1b2c067d2810,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727728485478810965,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e99637d1-a2fe-4459-b589-8f5743eae68b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4131c58d0bf44d1303dc4391ae014e69c758eba279b4be21c3f4a473bed9d5,PodSandboxId:3fcfde39b4a7efadc1251b3c40db99526e55c3007985a79cd9bc64f406f2085f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727728481691841019,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5782d24096fc43d20beab353275b85d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e58cad6d23e0be49e31a60ca54dad76f241fe59124086b531b42b93dd18e8a,PodSandboxId:f93877fe64e8a0dbdacdeb08bf787c2d860f3a670234700133485c883dd7af5b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727728481637377689,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a14883015d1188405ff52843d0214c8,},Annotations:map[string]string{io.kub
ernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6186b2a7a37ce6735a472a1591ff4137e2c1299aae5d9317852e7dfa79aaacd9,PodSandboxId:3c1fe4014918e36cef377a708aa2633c728055202691cb4cfa8e87648aae124f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727728481638178727,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81e0d32d0df713dd227cff0d41ac7dc6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8c696ac5ff6ecfcd8642f495e8c1946c568c1bebf2360280e1d4acc5ceaaba2,PodSandboxId:18859e9937fe831b85b17e2394a060f964a14ad0419fed4e876d4912fa2d5ad1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727728481600637174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9201a946a47fbfe2d322a33a89ecce0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9805d52edb30c2b44b5f802f59587a99803e805f55ba70004b3ecabc38c7e9ce,PodSandboxId:89ca8dd277b3eec6b63261217716c6254700f2c8b5102a207f0bcb793367f623,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727728163557050694,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxgwt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb156b23-97bc-4a08-b803-83d0793ed594,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc848d9234a844eaab9fc26b48d7f60ed55e609530c8756b11f1819637f2bec,PodSandboxId:09a4b750bc3f4c0d15e716def52649a1bb78d034a4db3e3d688120e83b858eb4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727728105122787470,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w95cn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdcac5d3-bdc6-45e9-b76a-8535bedc2c03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39eb244acec3c9751b53fffd3102949734163c8b9530270bb170ba702e1cd2fe,PodSandboxId:806660b0a1105f7fba7e1a10685769a1d90a398c41c8ad27cf891984ea5483b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727728105043395217,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e99637d1-a2fe-4459-b589-8f5743eae68b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0974451661f0737436a583f454afc0982a4121c86e7d2d0334edbcd95bfecc78,PodSandboxId:c9f44ae0002ed37f7487b40a31647dc28c91f7ad2bedc6e541977592f8268116,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727728064631404452,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dlpd,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: a6d77742-c2e1-4613-bb50-3e73821120e6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cacfb622468b005a23952888b905e40fd74281c9335143ceeb7ea71797aa3bed,PodSandboxId:7fb3cdf08702f148cabaaa0e309eb8184575c942f237de3c77d8cc53c4aeb668,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727728063431806710,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4m4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd9251-f158-4fdd-bc20
-d1aac8981add,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc4433f6912398db4cb88e66d4cb7193f26ce5c3706dcb711cb87b571a031711,PodSandboxId:43fd4ce185d2e1a7a4c956fb10f9f06536d1f77f8c1f5d943ac72029d955ea54,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727728052374672168,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a14883
015d1188405ff52843d0214c8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80432178b988bc0350374fa988e6b8ce6388ba0c6ee71b8272138b689ab81863,PodSandboxId:1913067266c997e68460587d0a1b1ea75ba0718e2c43734fe37c0fdf75a04e38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727728052344829276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9201a946a47fbfe2d322a3
3a89ecce0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b434fd4ab00363a4e33c578eacb078c2d21fe3261e459bf946aab36e52e306,PodSandboxId:8a2c2a7613b9b99f8d9a3a4b39dd1232192dd6e9a19e9a82afa1e1290e42ce85,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727728052292522260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81e0d32d0df713dd227cff0d41ac7dc6,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9596d6363e892d96ae7a53ca5a2dc7604d41239cb1f8bcc396dc8768356be785,PodSandboxId:d5cbd01102f7f062277ee18f1089f1a3ab960c046e1a57cbbf7451945964b141,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727728052245407035,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5782d24096fc43d20beab353275b85d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=360df8f1-af66-4133-9831-37d78788a241 name=/runtime.v1.RuntimeService/ListContainers
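	Each poll returns the same ListContainers payload, and for reading purposes the useful fields are usually just the container name, state and attempt counter. A minimal sketch for reducing crictl's JSON output to those fields (assumes jq is available on the machine running minikube; the profile name is again taken from the node name in this log):

	    # Reduce the ListContainers dump to name / attempt / state / short id.
	    minikube -p multinode-103579 ssh -- sudo crictl ps -a -o json \
	      | jq -r '.containers[] | "\(.metadata.name)\t\(.metadata.attempt)\t\(.state)\t\(.id[0:13])"'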
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.844784462Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d1b2befe-367e-42b8-b85d-0a711b447a04 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.844925456Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d1b2befe-367e-42b8-b85d-0a711b447a04 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.846047898Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc6cf225-8797-4e31-8ea3-cce003fbb2bc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.846428517Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728733846405842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc6cf225-8797-4e31-8ea3-cce003fbb2bc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.847031351Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5be10559-e1a5-406e-8b8e-317df3689b1c name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.847085129Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5be10559-e1a5-406e-8b8e-317df3689b1c name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:38:53 multinode-103579 crio[2732]: time="2024-09-30 20:38:53.847418852Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1efa514a1839d6c19de59ead297fc8b01dbadad2701663bd1b23f5cb33f2e4a4,PodSandboxId:8aa99d529ad115c8600e78e936d7a72a8f0044f6204d747ffb725fbd407fc1cd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727728519230342275,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxgwt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb156b23-97bc-4a08-b803-83d0793ed594,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb6c1c361bf4cf527748bfa59bca94dacb6779c506eef5330be08ee680de5d8,PodSandboxId:1d60728a0dd9a384a5e9b0539847da880ce3bd226fbfb430d5a1bad13a6ca1ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727728485660684306,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4m4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd9251-f158-4fdd-bc20-d1aac8981add,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d109a5da6112f48b12c4fdce7ca5328f2254fa60babfc88676f3a279e018ecd,PodSandboxId:9745c762550dee5bdc872905de937d5225bb9f73af060f4645cc1b7b016bc91a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727728485589953216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w95cn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdcac5d3-bdc6-45e9-b76a-8535bedc2c03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b216cdc18ef72b2e8c0cde275f96b74f5a451fea3294520dcc3a5ee59c0b93,PodSandboxId:a8475f15ba470aec7327301e7f6b72c090f1fc07ffacbdff3a5a2c583fa0ea22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727728485522511041,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dlpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d77742-c2e1-4613-bb50-3e73821120e6,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f005915b39da2769fc1c0c889208feb792bc405352af7cc3ae08e902e9fc4b0f,PodSandboxId:c32494dab655a08e2019c7ae5bbd41be6cb978f826ef6766ed7c1b2c067d2810,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727728485478810965,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e99637d1-a2fe-4459-b589-8f5743eae68b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4131c58d0bf44d1303dc4391ae014e69c758eba279b4be21c3f4a473bed9d5,PodSandboxId:3fcfde39b4a7efadc1251b3c40db99526e55c3007985a79cd9bc64f406f2085f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727728481691841019,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5782d24096fc43d20beab353275b85d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e58cad6d23e0be49e31a60ca54dad76f241fe59124086b531b42b93dd18e8a,PodSandboxId:f93877fe64e8a0dbdacdeb08bf787c2d860f3a670234700133485c883dd7af5b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727728481637377689,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a14883015d1188405ff52843d0214c8,},Annotations:map[string]string{io.kub
ernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6186b2a7a37ce6735a472a1591ff4137e2c1299aae5d9317852e7dfa79aaacd9,PodSandboxId:3c1fe4014918e36cef377a708aa2633c728055202691cb4cfa8e87648aae124f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727728481638178727,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81e0d32d0df713dd227cff0d41ac7dc6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8c696ac5ff6ecfcd8642f495e8c1946c568c1bebf2360280e1d4acc5ceaaba2,PodSandboxId:18859e9937fe831b85b17e2394a060f964a14ad0419fed4e876d4912fa2d5ad1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727728481600637174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9201a946a47fbfe2d322a33a89ecce0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9805d52edb30c2b44b5f802f59587a99803e805f55ba70004b3ecabc38c7e9ce,PodSandboxId:89ca8dd277b3eec6b63261217716c6254700f2c8b5102a207f0bcb793367f623,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727728163557050694,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxgwt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb156b23-97bc-4a08-b803-83d0793ed594,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc848d9234a844eaab9fc26b48d7f60ed55e609530c8756b11f1819637f2bec,PodSandboxId:09a4b750bc3f4c0d15e716def52649a1bb78d034a4db3e3d688120e83b858eb4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727728105122787470,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w95cn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdcac5d3-bdc6-45e9-b76a-8535bedc2c03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39eb244acec3c9751b53fffd3102949734163c8b9530270bb170ba702e1cd2fe,PodSandboxId:806660b0a1105f7fba7e1a10685769a1d90a398c41c8ad27cf891984ea5483b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727728105043395217,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e99637d1-a2fe-4459-b589-8f5743eae68b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0974451661f0737436a583f454afc0982a4121c86e7d2d0334edbcd95bfecc78,PodSandboxId:c9f44ae0002ed37f7487b40a31647dc28c91f7ad2bedc6e541977592f8268116,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727728064631404452,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dlpd,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: a6d77742-c2e1-4613-bb50-3e73821120e6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cacfb622468b005a23952888b905e40fd74281c9335143ceeb7ea71797aa3bed,PodSandboxId:7fb3cdf08702f148cabaaa0e309eb8184575c942f237de3c77d8cc53c4aeb668,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727728063431806710,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4m4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd9251-f158-4fdd-bc20
-d1aac8981add,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc4433f6912398db4cb88e66d4cb7193f26ce5c3706dcb711cb87b571a031711,PodSandboxId:43fd4ce185d2e1a7a4c956fb10f9f06536d1f77f8c1f5d943ac72029d955ea54,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727728052374672168,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a14883
015d1188405ff52843d0214c8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80432178b988bc0350374fa988e6b8ce6388ba0c6ee71b8272138b689ab81863,PodSandboxId:1913067266c997e68460587d0a1b1ea75ba0718e2c43734fe37c0fdf75a04e38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727728052344829276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9201a946a47fbfe2d322a3
3a89ecce0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b434fd4ab00363a4e33c578eacb078c2d21fe3261e459bf946aab36e52e306,PodSandboxId:8a2c2a7613b9b99f8d9a3a4b39dd1232192dd6e9a19e9a82afa1e1290e42ce85,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727728052292522260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81e0d32d0df713dd227cff0d41ac7dc6,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9596d6363e892d96ae7a53ca5a2dc7604d41239cb1f8bcc396dc8768356be785,PodSandboxId:d5cbd01102f7f062277ee18f1089f1a3ab960c046e1a57cbbf7451945964b141,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727728052245407035,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-103579,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5782d24096fc43d20beab353275b85d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5be10559-e1a5-406e-8b8e-317df3689b1c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1efa514a1839d       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   8aa99d529ad11       busybox-7dff88458-vxgwt
	bbb6c1c361bf4       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   1d60728a0dd9a       kindnet-4m4kb
	2d109a5da6112       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   9745c762550de       coredns-7c65d6cfc9-w95cn
	49b216cdc18ef       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   a8475f15ba470       kube-proxy-9dlpd
	f005915b39da2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   c32494dab655a       storage-provisioner
	4a4131c58d0bf       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   3fcfde39b4a7e       kube-scheduler-multinode-103579
	6186b2a7a37ce       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   3c1fe4014918e       etcd-multinode-103579
	52e58cad6d23e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   1                   f93877fe64e8a       kube-controller-manager-multinode-103579
	d8c696ac5ff6e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            1                   18859e9937fe8       kube-apiserver-multinode-103579
	9805d52edb30c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   89ca8dd277b3e       busybox-7dff88458-vxgwt
	4bc848d9234a8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago      Exited              coredns                   0                   09a4b750bc3f4       coredns-7c65d6cfc9-w95cn
	39eb244acec3c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   806660b0a1105       storage-provisioner
	0974451661f07       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      11 minutes ago      Exited              kube-proxy                0                   c9f44ae0002ed       kube-proxy-9dlpd
	cacfb622468b0       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      11 minutes ago      Exited              kindnet-cni               0                   7fb3cdf08702f       kindnet-4m4kb
	bc4433f691239       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      11 minutes ago      Exited              kube-controller-manager   0                   43fd4ce185d2e       kube-controller-manager-multinode-103579
	80432178b988b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      11 minutes ago      Exited              kube-apiserver            0                   1913067266c99       kube-apiserver-multinode-103579
	25b434fd4ab00       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      11 minutes ago      Exited              etcd                      0                   8a2c2a7613b9b       etcd-multinode-103579
	9596d6363e892       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      11 minutes ago      Exited              kube-scheduler            0                   d5cbd01102f7f       kube-scheduler-multinode-103579
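For reference, a listing like the table above can normally be reproduced against the same node with crictl (a sketch assuming the multinode-103579 profile from this run is still running and that crio is the active runtime):

  minikube -p multinode-103579 ssh -- sudo crictl ps -a

crictl ps -a prints the same CONTAINER / IMAGE / CREATED / STATE / NAME / ATTEMPT / POD ID / POD columns, including the exited attempt-0 containers from before the node restart.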
	
	
	==> coredns [2d109a5da6112f48b12c4fdce7ca5328f2254fa60babfc88676f3a279e018ecd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:42217 - 24013 "HINFO IN 4147206565910182645.8023849442997298152. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012859188s
	
	
	==> coredns [4bc848d9234a844eaab9fc26b48d7f60ed55e609530c8756b11f1819637f2bec] <==
	[INFO] 10.244.0.3:56430 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001972052s
	[INFO] 10.244.0.3:54196 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000061891s
	[INFO] 10.244.0.3:44650 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000040748s
	[INFO] 10.244.0.3:46731 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001515008s
	[INFO] 10.244.0.3:33663 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088627s
	[INFO] 10.244.0.3:49750 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000041423s
	[INFO] 10.244.0.3:39612 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000038332s
	[INFO] 10.244.1.2:59098 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143481s
	[INFO] 10.244.1.2:56880 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000082918s
	[INFO] 10.244.1.2:49241 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066136s
	[INFO] 10.244.1.2:46960 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064178s
	[INFO] 10.244.0.3:56075 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123559s
	[INFO] 10.244.0.3:37605 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009211s
	[INFO] 10.244.0.3:45177 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079246s
	[INFO] 10.244.0.3:47750 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071686s
	[INFO] 10.244.1.2:51863 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144718s
	[INFO] 10.244.1.2:34553 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000607405s
	[INFO] 10.244.1.2:60118 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000159285s
	[INFO] 10.244.1.2:57388 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00017183s
	[INFO] 10.244.0.3:57017 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166069s
	[INFO] 10.244.0.3:36642 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001969s
	[INFO] 10.244.0.3:33680 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007497s
	[INFO] 10.244.0.3:39556 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000103466s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
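The shutdown messages above come from the attempt-0 coredns container that exited when the node was restarted; the same log can usually be pulled through the API server as the pod's previous container log (a sketch assuming the kubectl context is named after the multinode-103579 profile):

  kubectl --context multinode-103579 -n kube-system logs coredns-7c65d6cfc9-w95cn --previous

Dropping --previous returns the log of the currently running attempt-1 container shown in the preceding section.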
	
	
	==> describe nodes <==
	Name:               multinode-103579
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-103579
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=multinode-103579
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T20_27_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:27:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-103579
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:38:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:34:45 +0000   Mon, 30 Sep 2024 20:27:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:34:45 +0000   Mon, 30 Sep 2024 20:27:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:34:45 +0000   Mon, 30 Sep 2024 20:27:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:34:45 +0000   Mon, 30 Sep 2024 20:28:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    multinode-103579
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 16f98965064b4362bf3244a75d525e39
	  System UUID:                16f98965-064b-4362-bf32-44a75d525e39
	  Boot ID:                    0c7bbc54-a3ed-4fe0-9039-f207a716caf8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vxgwt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	  kube-system                 coredns-7c65d6cfc9-w95cn                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 etcd-multinode-103579                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-4m4kb                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-multinode-103579             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-multinode-103579    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-9dlpd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-multinode-103579             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  Starting                 4m8s                   kube-proxy       
	  Normal  NodeHasSufficientPID     11m                    kubelet          Node multinode-103579 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                    kubelet          Node multinode-103579 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                    kubelet          Node multinode-103579 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                    node-controller  Node multinode-103579 event: Registered Node multinode-103579 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-103579 status is now: NodeReady
	  Normal  Starting                 4m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m13s (x8 over 4m14s)  kubelet          Node multinode-103579 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m13s (x8 over 4m14s)  kubelet          Node multinode-103579 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m13s (x7 over 4m14s)  kubelet          Node multinode-103579 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m6s                   node-controller  Node multinode-103579 event: Registered Node multinode-103579 in Controller
	
	
	Name:               multinode-103579-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-103579-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=multinode-103579
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T20_35_27_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:35:26 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-103579-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:36:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 30 Sep 2024 20:35:57 +0000   Mon, 30 Sep 2024 20:37:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 30 Sep 2024 20:35:57 +0000   Mon, 30 Sep 2024 20:37:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 30 Sep 2024 20:35:57 +0000   Mon, 30 Sep 2024 20:37:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 30 Sep 2024 20:35:57 +0000   Mon, 30 Sep 2024 20:37:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.212
	  Hostname:    multinode-103579-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 948c8778bf224e5dbd87a5ddef6634c2
	  System UUID:                948c8778-bf22-4e5d-bd87-a5ddef6634c2
	  Boot ID:                    3d6a4ceb-0a59-4f88-9395-96b39349ec4f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7tbhk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 kindnet-phlcl              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m59s
	  kube-system                 kube-proxy-b9f89           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m22s                  kube-proxy       
	  Normal  Starting                 9m52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m59s (x2 over 9m59s)  kubelet          Node multinode-103579-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m59s (x2 over 9m59s)  kubelet          Node multinode-103579-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m59s (x2 over 9m59s)  kubelet          Node multinode-103579-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m38s                  kubelet          Node multinode-103579-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m28s (x2 over 3m28s)  kubelet          Node multinode-103579-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m28s (x2 over 3m28s)  kubelet          Node multinode-103579-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m28s (x2 over 3m28s)  kubelet          Node multinode-103579-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m8s                   kubelet          Node multinode-103579-m02 status is now: NodeReady
	  Normal  NodeNotReady             106s                   node-controller  Node multinode-103579-m02 status is now: NodeNotReady
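The Unknown conditions and the final NodeNotReady event record that the kubelet on multinode-103579-m02 stopped posting status at 20:37:08, consistent with the secondary node having been stopped. The same view can usually be regenerated directly (a sketch assuming the kubectl context carries the profile name):

  kubectl --context multinode-103579 describe node multinode-103579-m02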
	
	
	==> dmesg <==
	[  +0.056585] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060471] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.162790] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.140968] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.279412] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.836318] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.430478] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.066590] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.994488] systemd-fstab-generator[1216]: Ignoring "noauto" option for root device
	[  +0.084614] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.677350] systemd-fstab-generator[1324]: Ignoring "noauto" option for root device
	[  +0.100957] kauditd_printk_skb: 18 callbacks suppressed
	[Sep30 20:28] kauditd_printk_skb: 69 callbacks suppressed
	[Sep30 20:29] kauditd_printk_skb: 12 callbacks suppressed
	[Sep30 20:34] systemd-fstab-generator[2650]: Ignoring "noauto" option for root device
	[  +0.151813] systemd-fstab-generator[2662]: Ignoring "noauto" option for root device
	[  +0.179087] systemd-fstab-generator[2682]: Ignoring "noauto" option for root device
	[  +0.147072] systemd-fstab-generator[2694]: Ignoring "noauto" option for root device
	[  +0.272256] systemd-fstab-generator[2722]: Ignoring "noauto" option for root device
	[  +0.662658] systemd-fstab-generator[2815]: Ignoring "noauto" option for root device
	[  +2.058270] systemd-fstab-generator[2934]: Ignoring "noauto" option for root device
	[  +4.708226] kauditd_printk_skb: 184 callbacks suppressed
	[Sep30 20:35] systemd-fstab-generator[3784]: Ignoring "noauto" option for root device
	[  +0.100078] kauditd_printk_skb: 34 callbacks suppressed
	[ +16.932038] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [25b434fd4ab00363a4e33c578eacb078c2d21fe3261e459bf946aab36e52e306] <==
	{"level":"info","ts":"2024-09-30T20:27:33.508617Z","caller":"traceutil/trace.go:171","msg":"trace[1152036888] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:1; }","duration":"135.123297ms","start":"2024-09-30T20:27:33.373484Z","end":"2024-09-30T20:27:33.508607Z","steps":["trace[1152036888] 'range keys from in-memory index tree'  (duration: 126.463ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T20:27:33.500091Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.658202ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" count_only:true ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2024-09-30T20:27:33.508875Z","caller":"traceutil/trace.go:171","msg":"trace[2324633] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; response_count:0; response_revision:1; }","duration":"117.439373ms","start":"2024-09-30T20:27:33.391429Z","end":"2024-09-30T20:27:33.508868Z","steps":["trace[2324633] 'count revisions from in-memory index tree'  (duration: 108.625259ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T20:27:33.500117Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.749116ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" limit:10000 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2024-09-30T20:27:33.509195Z","caller":"traceutil/trace.go:171","msg":"trace[1077031957] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; response_count:0; response_revision:1; }","duration":"117.82279ms","start":"2024-09-30T20:27:33.391364Z","end":"2024-09-30T20:27:33.509187Z","steps":["trace[1077031957] 'range keys from in-memory index tree'  (duration: 108.644751ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T20:27:47.808705Z","caller":"traceutil/trace.go:171","msg":"trace[1664084647] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"112.203367ms","start":"2024-09-30T20:27:47.696483Z","end":"2024-09-30T20:27:47.808686Z","steps":["trace[1664084647] 'process raft request'  (duration: 111.812806ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T20:28:55.785866Z","caller":"traceutil/trace.go:171","msg":"trace[714959990] linearizableReadLoop","detail":"{readStateIndex:474; appliedIndex:473; }","duration":"129.973954ms","start":"2024-09-30T20:28:55.655862Z","end":"2024-09-30T20:28:55.785836Z","steps":["trace[714959990] 'read index received'  (duration: 111.634105ms)","trace[714959990] 'applied index is now lower than readState.Index'  (duration: 18.338709ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-30T20:28:55.786742Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.871296ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-103579-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T20:28:55.786820Z","caller":"traceutil/trace.go:171","msg":"trace[1690217488] range","detail":"{range_begin:/registry/minions/multinode-103579-m02; range_end:; response_count:0; response_revision:446; }","duration":"130.970696ms","start":"2024-09-30T20:28:55.655840Z","end":"2024-09-30T20:28:55.786811Z","steps":["trace[1690217488] 'agreement among raft nodes before linearized reading'  (duration: 130.835675ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T20:29:01.495374Z","caller":"traceutil/trace.go:171","msg":"trace[268712338] transaction","detail":"{read_only:false; response_revision:482; number_of_response:1; }","duration":"118.074643ms","start":"2024-09-30T20:29:01.377282Z","end":"2024-09-30T20:29:01.495357Z","steps":["trace[268712338] 'process raft request'  (duration: 117.726637ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T20:29:51.336810Z","caller":"traceutil/trace.go:171","msg":"trace[491805500] transaction","detail":"{read_only:false; response_revision:583; number_of_response:1; }","duration":"105.450132ms","start":"2024-09-30T20:29:51.231269Z","end":"2024-09-30T20:29:51.336719Z","steps":["trace[491805500] 'process raft request'  (duration: 105.325029ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T20:29:51.463179Z","caller":"traceutil/trace.go:171","msg":"trace[759165473] linearizableReadLoop","detail":"{readStateIndex:625; appliedIndex:624; }","duration":"117.57861ms","start":"2024-09-30T20:29:51.345586Z","end":"2024-09-30T20:29:51.463165Z","steps":["trace[759165473] 'read index received'  (duration: 115.553229ms)","trace[759165473] 'applied index is now lower than readState.Index'  (duration: 2.024908ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-30T20:29:51.463301Z","caller":"traceutil/trace.go:171","msg":"trace[671129766] transaction","detail":"{read_only:false; response_revision:584; number_of_response:1; }","duration":"121.793555ms","start":"2024-09-30T20:29:51.341498Z","end":"2024-09-30T20:29:51.463292Z","steps":["trace[671129766] 'process raft request'  (duration: 119.687126ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T20:29:51.463773Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.03568ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-103579-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T20:29:51.463839Z","caller":"traceutil/trace.go:171","msg":"trace[489517102] range","detail":"{range_begin:/registry/minions/multinode-103579-m03; range_end:; response_count:0; response_revision:584; }","duration":"118.24453ms","start":"2024-09-30T20:29:51.345583Z","end":"2024-09-30T20:29:51.463827Z","steps":["trace[489517102] 'agreement among raft nodes before linearized reading'  (duration: 117.977029ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T20:33:06.073482Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-30T20:33:06.073618Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-103579","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.58:2380"],"advertise-client-urls":["https://192.168.39.58:2379"]}
	{"level":"warn","ts":"2024-09-30T20:33:06.073752Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T20:33:06.073862Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T20:33:06.159705Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.58:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T20:33:06.159857Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.58:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-30T20:33:06.160235Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ded7f9817c909548","current-leader-member-id":"ded7f9817c909548"}
	{"level":"info","ts":"2024-09-30T20:33:06.163247Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.58:2380"}
	{"level":"info","ts":"2024-09-30T20:33:06.163463Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.58:2380"}
	{"level":"info","ts":"2024-09-30T20:33:06.163526Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-103579","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.58:2380"],"advertise-client-urls":["https://192.168.39.58:2379"]}
	
	
	==> etcd [6186b2a7a37ce6735a472a1591ff4137e2c1299aae5d9317852e7dfa79aaacd9] <==
	{"level":"info","ts":"2024-09-30T20:34:42.179353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 switched to configuration voters=(16057577330948740424)"}
	{"level":"info","ts":"2024-09-30T20:34:42.190613Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"91c640bc00cd2aea","local-member-id":"ded7f9817c909548","added-peer-id":"ded7f9817c909548","added-peer-peer-urls":["https://192.168.39.58:2380"]}
	{"level":"info","ts":"2024-09-30T20:34:42.191044Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"91c640bc00cd2aea","local-member-id":"ded7f9817c909548","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T20:34:42.191133Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T20:34:42.200716Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-30T20:34:42.223805Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ded7f9817c909548","initial-advertise-peer-urls":["https://192.168.39.58:2380"],"listen-peer-urls":["https://192.168.39.58:2380"],"advertise-client-urls":["https://192.168.39.58:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.58:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-30T20:34:42.224058Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-30T20:34:42.203159Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.58:2380"}
	{"level":"info","ts":"2024-09-30T20:34:42.228521Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.58:2380"}
	{"level":"info","ts":"2024-09-30T20:34:43.384042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-30T20:34:43.384098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-30T20:34:43.384144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 received MsgPreVoteResp from ded7f9817c909548 at term 2"}
	{"level":"info","ts":"2024-09-30T20:34:43.384162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 became candidate at term 3"}
	{"level":"info","ts":"2024-09-30T20:34:43.384167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 received MsgVoteResp from ded7f9817c909548 at term 3"}
	{"level":"info","ts":"2024-09-30T20:34:43.384176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 became leader at term 3"}
	{"level":"info","ts":"2024-09-30T20:34:43.384184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ded7f9817c909548 elected leader ded7f9817c909548 at term 3"}
	{"level":"info","ts":"2024-09-30T20:34:43.391160Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ded7f9817c909548","local-member-attributes":"{Name:multinode-103579 ClientURLs:[https://192.168.39.58:2379]}","request-path":"/0/members/ded7f9817c909548/attributes","cluster-id":"91c640bc00cd2aea","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T20:34:43.391410Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T20:34:43.391866Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T20:34:43.392791Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T20:34:43.393351Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T20:34:43.393395Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-30T20:34:43.393837Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-30T20:34:43.394596Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T20:34:43.395483Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.58:2379"}
	
	
	==> kernel <==
	 20:38:54 up 11 min,  0 users,  load average: 0.24, 0.13, 0.09
	Linux multinode-103579 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [bbb6c1c361bf4cf527748bfa59bca94dacb6779c506eef5330be08ee680de5d8] <==
	I0930 20:37:46.515599       1 main.go:322] Node multinode-103579-m02 has CIDR [10.244.1.0/24] 
	I0930 20:37:56.522514       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 20:37:56.522583       1 main.go:299] handling current node
	I0930 20:37:56.522607       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I0930 20:37:56.522615       1 main.go:322] Node multinode-103579-m02 has CIDR [10.244.1.0/24] 
	I0930 20:38:06.519273       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 20:38:06.519388       1 main.go:299] handling current node
	I0930 20:38:06.519427       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I0930 20:38:06.519434       1 main.go:322] Node multinode-103579-m02 has CIDR [10.244.1.0/24] 
	I0930 20:38:16.516357       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 20:38:16.516509       1 main.go:299] handling current node
	I0930 20:38:16.516562       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I0930 20:38:16.516587       1 main.go:322] Node multinode-103579-m02 has CIDR [10.244.1.0/24] 
	I0930 20:38:26.521174       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 20:38:26.521294       1 main.go:299] handling current node
	I0930 20:38:26.521346       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I0930 20:38:26.521365       1 main.go:322] Node multinode-103579-m02 has CIDR [10.244.1.0/24] 
	I0930 20:38:36.523764       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 20:38:36.523887       1 main.go:299] handling current node
	I0930 20:38:36.523919       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I0930 20:38:36.524049       1 main.go:322] Node multinode-103579-m02 has CIDR [10.244.1.0/24] 
	I0930 20:38:46.514540       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 20:38:46.514646       1 main.go:299] handling current node
	I0930 20:38:46.514676       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I0930 20:38:46.514693       1 main.go:322] Node multinode-103579-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [cacfb622468b005a23952888b905e40fd74281c9335143ceeb7ea71797aa3bed] <==
	I0930 20:32:24.415308       1 main.go:322] Node multinode-103579-m03 has CIDR [10.244.5.0/24] 
	I0930 20:32:34.406844       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I0930 20:32:34.407011       1 main.go:322] Node multinode-103579-m02 has CIDR [10.244.1.0/24] 
	I0930 20:32:34.407172       1 main.go:295] Handling node with IPs: map[192.168.39.237:{}]
	I0930 20:32:34.407198       1 main.go:322] Node multinode-103579-m03 has CIDR [10.244.5.0/24] 
	I0930 20:32:34.407268       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 20:32:34.407290       1 main.go:299] handling current node
	I0930 20:32:44.407318       1 main.go:295] Handling node with IPs: map[192.168.39.237:{}]
	I0930 20:32:44.407424       1 main.go:322] Node multinode-103579-m03 has CIDR [10.244.5.0/24] 
	I0930 20:32:44.407571       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 20:32:44.407604       1 main.go:299] handling current node
	I0930 20:32:44.407627       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I0930 20:32:44.407653       1 main.go:322] Node multinode-103579-m02 has CIDR [10.244.1.0/24] 
	I0930 20:32:54.410730       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I0930 20:32:54.410842       1 main.go:322] Node multinode-103579-m02 has CIDR [10.244.1.0/24] 
	I0930 20:32:54.411066       1 main.go:295] Handling node with IPs: map[192.168.39.237:{}]
	I0930 20:32:54.411097       1 main.go:322] Node multinode-103579-m03 has CIDR [10.244.5.0/24] 
	I0930 20:32:54.411190       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 20:32:54.411222       1 main.go:299] handling current node
	I0930 20:33:04.415180       1 main.go:295] Handling node with IPs: map[192.168.39.237:{}]
	I0930 20:33:04.415329       1 main.go:322] Node multinode-103579-m03 has CIDR [10.244.5.0/24] 
	I0930 20:33:04.415476       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 20:33:04.415501       1 main.go:299] handling current node
	I0930 20:33:04.415527       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I0930 20:33:04.415543       1 main.go:322] Node multinode-103579-m02 has CIDR [10.244.1.0/24] 
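Both kindnet sections above show the expected steady-state loop: every ten seconds the agent enumerates the node IPs it knows about and records each peer's pod CIDR. The per-container logs can usually be fetched by pod name (a sketch assuming the context name matches the profile):

  kubectl --context multinode-103579 -n kube-system logs kindnet-4m4kb            # running attempt-1 container
  kubectl --context multinode-103579 -n kube-system logs kindnet-4m4kb --previous # exited attempt-0 container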
	
	
	==> kube-apiserver [80432178b988bc0350374fa988e6b8ce6388ba0c6ee71b8272138b689ab81863] <==
	I0930 20:27:36.905795       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0930 20:27:37.501803       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0930 20:27:37.516512       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0930 20:27:37.531904       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0930 20:27:42.408194       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0930 20:27:42.659767       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0930 20:29:24.409411       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:35044: use of closed network connection
	E0930 20:29:24.577200       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:35064: use of closed network connection
	E0930 20:29:24.743619       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:35070: use of closed network connection
	E0930 20:29:24.915054       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:35090: use of closed network connection
	E0930 20:29:25.080858       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:35120: use of closed network connection
	E0930 20:29:25.240632       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:35142: use of closed network connection
	E0930 20:29:25.510701       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:35172: use of closed network connection
	E0930 20:29:25.669211       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:35196: use of closed network connection
	E0930 20:29:25.844361       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:35218: use of closed network connection
	E0930 20:29:26.020863       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:35238: use of closed network connection
	I0930 20:33:06.073538       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0930 20:33:06.082768       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:33:06.087339       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:33:06.087654       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:33:06.088705       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:33:06.089544       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:33:06.089583       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:33:06.089615       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:33:06.089648       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d8c696ac5ff6ecfcd8642f495e8c1946c568c1bebf2360280e1d4acc5ceaaba2] <==
	I0930 20:34:44.873745       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0930 20:34:44.873856       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0930 20:34:44.874587       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0930 20:34:44.907184       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0930 20:34:44.927013       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0930 20:34:44.927200       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0930 20:34:44.927345       1 shared_informer.go:320] Caches are synced for configmaps
	I0930 20:34:44.929234       1 aggregator.go:171] initial CRD sync complete...
	I0930 20:34:44.929305       1 autoregister_controller.go:144] Starting autoregister controller
	I0930 20:34:44.929329       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0930 20:34:44.929353       1 cache.go:39] Caches are synced for autoregister controller
	I0930 20:34:44.944033       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0930 20:34:44.944523       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0930 20:34:44.944556       1 policy_source.go:224] refreshing policies
	I0930 20:34:44.947541       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0930 20:34:44.958031       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0930 20:34:44.969478       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0930 20:34:45.776050       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0930 20:34:46.952915       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0930 20:34:47.078784       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0930 20:34:47.092293       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0930 20:34:47.186617       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0930 20:34:47.202018       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0930 20:34:48.187504       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0930 20:34:48.385684       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [52e58cad6d23e0be49e31a60ca54dad76f241fe59124086b531b42b93dd18e8a] <==
	I0930 20:36:05.451065       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-103579-m02"
	I0930 20:36:05.477232       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-103579-m03" podCIDRs=["10.244.2.0/24"]
	I0930 20:36:05.477273       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:36:05.477311       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:36:05.857620       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:36:06.218649       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:36:08.484713       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:36:15.709630       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:36:23.608314       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-103579-m02"
	I0930 20:36:23.609885       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:36:23.623063       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:36:28.429994       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:36:28.550681       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:36:28.568188       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:36:29.211571       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:36:29.211879       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-103579-m02"
	I0930 20:37:08.445791       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m02"
	I0930 20:37:08.466869       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m02"
	I0930 20:37:08.481513       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="21.141483ms"
	I0930 20:37:08.483062       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="50.818µs"
	I0930 20:37:13.510185       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m02"
	I0930 20:37:28.146378       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-ns772"
	I0930 20:37:28.176249       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-ns772"
	I0930 20:37:28.176337       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-lpb89"
	I0930 20:37:28.205705       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-lpb89"
	
	
	==> kube-controller-manager [bc4433f6912398db4cb88e66d4cb7193f26ce5c3706dcb711cb87b571a031711] <==
	I0930 20:30:40.775713       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-103579-m02"
	I0930 20:30:40.776023       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:30:42.096599       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-103579-m02"
	I0930 20:30:42.096598       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-103579-m03\" does not exist"
	I0930 20:30:42.106726       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-103579-m03" podCIDRs=["10.244.5.0/24"]
	I0930 20:30:42.106855       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:30:42.108179       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:30:42.125796       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:30:42.315291       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:30:42.640057       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:30:46.968725       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:30:52.490337       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:31:00.522484       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:31:00.522566       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-103579-m03"
	I0930 20:31:00.537773       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:31:01.925169       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:31:41.943495       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m02"
	I0930 20:31:41.944409       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-103579-m03"
	I0930 20:31:41.965115       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m02"
	I0930 20:31:42.033932       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.238546ms"
	I0930 20:31:42.035520       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="152.572µs"
	I0930 20:31:47.016344       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:31:47.033328       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	I0930 20:31:47.052045       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m02"
	I0930 20:31:57.130249       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-103579-m03"
	
	
	==> kube-proxy [0974451661f0737436a583f454afc0982a4121c86e7d2d0334edbcd95bfecc78] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 20:27:44.781686       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 20:27:44.790585       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.58"]
	E0930 20:27:44.790802       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 20:27:44.820390       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 20:27:44.820432       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 20:27:44.820455       1 server_linux.go:169] "Using iptables Proxier"
	I0930 20:27:44.823622       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 20:27:44.824233       1 server.go:483] "Version info" version="v1.31.1"
	I0930 20:27:44.824304       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:27:44.825819       1 config.go:199] "Starting service config controller"
	I0930 20:27:44.825857       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 20:27:44.825902       1 config.go:105] "Starting endpoint slice config controller"
	I0930 20:27:44.825907       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 20:27:44.826338       1 config.go:328] "Starting node config controller"
	I0930 20:27:44.826413       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 20:27:44.926895       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 20:27:44.927057       1 shared_informer.go:320] Caches are synced for node config
	I0930 20:27:44.927065       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [49b216cdc18ef72b2e8c0cde275f96b74f5a451fea3294520dcc3a5ee59c0b93] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 20:34:45.851329       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 20:34:45.869450       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.58"]
	E0930 20:34:45.869584       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 20:34:45.927119       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 20:34:45.928217       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 20:34:45.928338       1 server_linux.go:169] "Using iptables Proxier"
	I0930 20:34:45.932141       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 20:34:45.932459       1 server.go:483] "Version info" version="v1.31.1"
	I0930 20:34:45.933245       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:34:45.936182       1 config.go:199] "Starting service config controller"
	I0930 20:34:45.936240       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 20:34:45.936278       1 config.go:105] "Starting endpoint slice config controller"
	I0930 20:34:45.936294       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 20:34:45.938798       1 config.go:328] "Starting node config controller"
	I0930 20:34:45.938829       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 20:34:46.036366       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 20:34:46.036431       1 shared_informer.go:320] Caches are synced for service config
	I0930 20:34:46.039357       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4a4131c58d0bf44d1303dc4391ae014e69c758eba279b4be21c3f4a473bed9d5] <==
	I0930 20:34:43.278737       1 serving.go:386] Generated self-signed cert in-memory
	W0930 20:34:44.821123       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0930 20:34:44.821259       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0930 20:34:44.821298       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0930 20:34:44.821369       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0930 20:34:44.866862       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0930 20:34:44.869028       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:34:44.878364       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0930 20:34:44.878534       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0930 20:34:44.878583       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0930 20:34:44.878606       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0930 20:34:44.981396       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [9596d6363e892d96ae7a53ca5a2dc7604d41239cb1f8bcc396dc8768356be785] <==
	E0930 20:27:35.754082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 20:27:35.964703       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0930 20:27:35.964781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 20:27:36.019316       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0930 20:27:36.019367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 20:27:36.083612       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0930 20:27:36.083668       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0930 20:27:36.147678       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0930 20:27:36.147765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 20:27:36.173415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0930 20:27:36.173539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 20:27:36.195722       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0930 20:27:36.195823       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 20:27:36.210467       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0930 20:27:36.210650       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 20:27:36.239904       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0930 20:27:36.239942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 20:27:36.260325       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 20:27:36.260543       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0930 20:27:36.293251       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0930 20:27:36.293551       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 20:27:36.322286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0930 20:27:36.322365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0930 20:27:39.524616       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0930 20:33:06.083774       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 30 20:37:41 multinode-103579 kubelet[2941]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 20:37:41 multinode-103579 kubelet[2941]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 20:37:41 multinode-103579 kubelet[2941]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 20:37:41 multinode-103579 kubelet[2941]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 20:37:41 multinode-103579 kubelet[2941]: E0930 20:37:41.062024    2941 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728661061642063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:37:41 multinode-103579 kubelet[2941]: E0930 20:37:41.062053    2941 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728661061642063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:37:51 multinode-103579 kubelet[2941]: E0930 20:37:51.063404    2941 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728671063125294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:37:51 multinode-103579 kubelet[2941]: E0930 20:37:51.063442    2941 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728671063125294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:38:01 multinode-103579 kubelet[2941]: E0930 20:38:01.066853    2941 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728681066368390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:38:01 multinode-103579 kubelet[2941]: E0930 20:38:01.066889    2941 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728681066368390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:38:11 multinode-103579 kubelet[2941]: E0930 20:38:11.073547    2941 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728691071332325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:38:11 multinode-103579 kubelet[2941]: E0930 20:38:11.073714    2941 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728691071332325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:38:21 multinode-103579 kubelet[2941]: E0930 20:38:21.079215    2941 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728701078682648,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:38:21 multinode-103579 kubelet[2941]: E0930 20:38:21.079246    2941 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728701078682648,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:38:31 multinode-103579 kubelet[2941]: E0930 20:38:31.080700    2941 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728711080283837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:38:31 multinode-103579 kubelet[2941]: E0930 20:38:31.081170    2941 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728711080283837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:38:41 multinode-103579 kubelet[2941]: E0930 20:38:41.029597    2941 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 20:38:41 multinode-103579 kubelet[2941]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 20:38:41 multinode-103579 kubelet[2941]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 20:38:41 multinode-103579 kubelet[2941]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 20:38:41 multinode-103579 kubelet[2941]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 20:38:41 multinode-103579 kubelet[2941]: E0930 20:38:41.083548    2941 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728721082912805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:38:41 multinode-103579 kubelet[2941]: E0930 20:38:41.083573    2941 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728721082912805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:38:51 multinode-103579 kubelet[2941]: E0930 20:38:51.084903    2941 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728731084628490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:38:51 multinode-103579 kubelet[2941]: E0930 20:38:51.084927    2941 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727728731084628490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 20:38:53.436118   46411 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19736-7672/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
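The "token too long" error in the stderr block above is the stock failure mode of Go's bufio.Scanner when a single line exceeds its buffer: lastStart.txt evidently contains log lines longer than the default 64 KiB limit. A minimal, hypothetical sketch of that behaviour and the usual workaround (raising the limit via Scanner.Buffer), not the actual minikube logs.go code:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical path for illustration; the report references
		// .../.minikube/logs/lastStart.txt.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Default cap is bufio.MaxScanTokenSize (64 KiB); allow lines up to 10 MiB.
		sc.Buffer(make([]byte, 0, 1024*1024), 10*1024*1024)
		for sc.Scan() {
			_ = sc.Text() // process each (possibly very long) line
		}
		if err := sc.Err(); err != nil {
			// Without the larger buffer this reports "bufio.Scanner: token too long".
			fmt.Fprintln(os.Stderr, err)
		}
	}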
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-103579 -n multinode-103579
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-103579 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (144.60s)

                                                
                                    
x
+
TestPreload (172.95s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-409125 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0930 20:43:28.936516   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-409125 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m31.560473643s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-409125 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-409125 image pull gcr.io/k8s-minikube/busybox: (3.343549739s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-409125
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-409125: (6.59338066s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-409125 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-409125 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m8.627082779s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-409125 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
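The failure above reduces to a substring check: gcr.io/k8s-minikube/busybox was pulled before the stop/start cycle but is absent from the image list printed after the restart. A rough, hypothetical sketch of that kind of check (not the actual preload_test.go assertion), reusing the binary path and profile name that appear in the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imageInList is a hypothetical helper: it runs `minikube image list` for the
	// given profile and reports whether the expected image reference appears.
	func imageInList(profile, image string) (bool, error) {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "list").CombinedOutput()
		if err != nil {
			return false, err
		}
		return strings.Contains(string(out), image), nil
	}

	func main() {
		found, err := imageInList("test-preload-409125", "gcr.io/k8s-minikube/busybox")
		if err != nil {
			fmt.Println("image list failed:", err)
			return
		}
		if !found {
			// This is the condition reported above: the image pulled before the
			// stop/start cycle did not survive into the restarted cluster.
			fmt.Println("expected gcr.io/k8s-minikube/busybox in image list output")
		}
	}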
panic.go:629: *** TestPreload FAILED at 2024-09-30 20:45:32.71909518 +0000 UTC m=+4053.423851302
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-409125 -n test-preload-409125
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-409125 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-409125 logs -n 25: (1.066194077s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-103579 ssh -n                                                                 | multinode-103579     | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-103579 ssh -n multinode-103579 sudo cat                                       | multinode-103579     | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | /home/docker/cp-test_multinode-103579-m03_multinode-103579.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-103579 cp multinode-103579-m03:/home/docker/cp-test.txt                       | multinode-103579     | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579-m02:/home/docker/cp-test_multinode-103579-m03_multinode-103579-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-103579 ssh -n                                                                 | multinode-103579     | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | multinode-103579-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-103579 ssh -n multinode-103579-m02 sudo cat                                   | multinode-103579     | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	|         | /home/docker/cp-test_multinode-103579-m03_multinode-103579-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-103579 node stop m03                                                          | multinode-103579     | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:30 UTC |
	| node    | multinode-103579 node start                                                             | multinode-103579     | jenkins | v1.34.0 | 30 Sep 24 20:30 UTC | 30 Sep 24 20:31 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-103579                                                                | multinode-103579     | jenkins | v1.34.0 | 30 Sep 24 20:31 UTC |                     |
	| stop    | -p multinode-103579                                                                     | multinode-103579     | jenkins | v1.34.0 | 30 Sep 24 20:31 UTC |                     |
	| start   | -p multinode-103579                                                                     | multinode-103579     | jenkins | v1.34.0 | 30 Sep 24 20:33 UTC | 30 Sep 24 20:36 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-103579                                                                | multinode-103579     | jenkins | v1.34.0 | 30 Sep 24 20:36 UTC |                     |
	| node    | multinode-103579 node delete                                                            | multinode-103579     | jenkins | v1.34.0 | 30 Sep 24 20:36 UTC | 30 Sep 24 20:36 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-103579 stop                                                                   | multinode-103579     | jenkins | v1.34.0 | 30 Sep 24 20:36 UTC |                     |
	| start   | -p multinode-103579                                                                     | multinode-103579     | jenkins | v1.34.0 | 30 Sep 24 20:38 UTC | 30 Sep 24 20:41 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-103579                                                                | multinode-103579     | jenkins | v1.34.0 | 30 Sep 24 20:41 UTC |                     |
	| start   | -p multinode-103579-m02                                                                 | multinode-103579-m02 | jenkins | v1.34.0 | 30 Sep 24 20:41 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-103579-m03                                                                 | multinode-103579-m03 | jenkins | v1.34.0 | 30 Sep 24 20:41 UTC | 30 Sep 24 20:42 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-103579                                                                 | multinode-103579     | jenkins | v1.34.0 | 30 Sep 24 20:42 UTC |                     |
	| delete  | -p multinode-103579-m03                                                                 | multinode-103579-m03 | jenkins | v1.34.0 | 30 Sep 24 20:42 UTC | 30 Sep 24 20:42 UTC |
	| delete  | -p multinode-103579                                                                     | multinode-103579     | jenkins | v1.34.0 | 30 Sep 24 20:42 UTC | 30 Sep 24 20:42 UTC |
	| start   | -p test-preload-409125                                                                  | test-preload-409125  | jenkins | v1.34.0 | 30 Sep 24 20:42 UTC | 30 Sep 24 20:44 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-409125 image pull                                                          | test-preload-409125  | jenkins | v1.34.0 | 30 Sep 24 20:44 UTC | 30 Sep 24 20:44 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-409125                                                                  | test-preload-409125  | jenkins | v1.34.0 | 30 Sep 24 20:44 UTC | 30 Sep 24 20:44 UTC |
	| start   | -p test-preload-409125                                                                  | test-preload-409125  | jenkins | v1.34.0 | 30 Sep 24 20:44 UTC | 30 Sep 24 20:45 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-409125 image list                                                          | test-preload-409125  | jenkins | v1.34.0 | 30 Sep 24 20:45 UTC | 30 Sep 24 20:45 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 20:44:23
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 20:44:23.922317   48795 out.go:345] Setting OutFile to fd 1 ...
	I0930 20:44:23.922512   48795 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:44:23.922523   48795 out.go:358] Setting ErrFile to fd 2...
	I0930 20:44:23.922531   48795 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:44:23.922714   48795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 20:44:23.923305   48795 out.go:352] Setting JSON to false
	I0930 20:44:23.924359   48795 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5207,"bootTime":1727723857,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 20:44:23.924462   48795 start.go:139] virtualization: kvm guest
	I0930 20:44:23.926685   48795 out.go:177] * [test-preload-409125] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 20:44:23.928089   48795 notify.go:220] Checking for updates...
	I0930 20:44:23.928165   48795 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 20:44:23.929714   48795 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 20:44:23.931274   48795 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:44:23.932870   48795 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:44:23.934345   48795 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 20:44:23.935672   48795 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 20:44:23.937373   48795 config.go:182] Loaded profile config "test-preload-409125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0930 20:44:23.937800   48795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:44:23.937864   48795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:44:23.952622   48795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39539
	I0930 20:44:23.953161   48795 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:44:23.953709   48795 main.go:141] libmachine: Using API Version  1
	I0930 20:44:23.953727   48795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:44:23.954075   48795 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:44:23.954251   48795 main.go:141] libmachine: (test-preload-409125) Calling .DriverName
	I0930 20:44:23.956055   48795 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0930 20:44:23.957442   48795 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 20:44:23.957746   48795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:44:23.957787   48795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:44:23.972563   48795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38099
	I0930 20:44:23.973001   48795 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:44:23.973539   48795 main.go:141] libmachine: Using API Version  1
	I0930 20:44:23.973572   48795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:44:23.973905   48795 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:44:23.974149   48795 main.go:141] libmachine: (test-preload-409125) Calling .DriverName
	I0930 20:44:24.009101   48795 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 20:44:24.010286   48795 start.go:297] selected driver: kvm2
	I0930 20:44:24.010304   48795 start.go:901] validating driver "kvm2" against &{Name:test-preload-409125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-409125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:44:24.010408   48795 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 20:44:24.011073   48795 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 20:44:24.011140   48795 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 20:44:24.026568   48795 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 20:44:24.026913   48795 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 20:44:24.026943   48795 cni.go:84] Creating CNI manager for ""
	I0930 20:44:24.026985   48795 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 20:44:24.027055   48795 start.go:340] cluster config:
	{Name:test-preload-409125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-409125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:44:24.027165   48795 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 20:44:24.029155   48795 out.go:177] * Starting "test-preload-409125" primary control-plane node in "test-preload-409125" cluster
	I0930 20:44:24.030916   48795 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0930 20:44:24.240942   48795 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0930 20:44:24.240972   48795 cache.go:56] Caching tarball of preloaded images
	I0930 20:44:24.241129   48795 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0930 20:44:24.243506   48795 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0930 20:44:24.245024   48795 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0930 20:44:24.350471   48795 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0930 20:44:35.103396   48795 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0930 20:44:35.103500   48795 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0930 20:44:35.943886   48795 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0930 20:44:35.944016   48795 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/test-preload-409125/config.json ...
	I0930 20:44:35.944261   48795 start.go:360] acquireMachinesLock for test-preload-409125: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 20:44:35.944326   48795 start.go:364] duration metric: took 43.248µs to acquireMachinesLock for "test-preload-409125"
	I0930 20:44:35.944345   48795 start.go:96] Skipping create...Using existing machine configuration
	I0930 20:44:35.944353   48795 fix.go:54] fixHost starting: 
	I0930 20:44:35.944624   48795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:44:35.944664   48795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:44:35.960174   48795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0930 20:44:35.960655   48795 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:44:35.961231   48795 main.go:141] libmachine: Using API Version  1
	I0930 20:44:35.961282   48795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:44:35.961614   48795 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:44:35.961832   48795 main.go:141] libmachine: (test-preload-409125) Calling .DriverName
	I0930 20:44:35.961972   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetState
	I0930 20:44:35.963996   48795 fix.go:112] recreateIfNeeded on test-preload-409125: state=Stopped err=<nil>
	I0930 20:44:35.964021   48795 main.go:141] libmachine: (test-preload-409125) Calling .DriverName
	W0930 20:44:35.964173   48795 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 20:44:35.966230   48795 out.go:177] * Restarting existing kvm2 VM for "test-preload-409125" ...
	I0930 20:44:35.967814   48795 main.go:141] libmachine: (test-preload-409125) Calling .Start
	I0930 20:44:35.968023   48795 main.go:141] libmachine: (test-preload-409125) Ensuring networks are active...
	I0930 20:44:35.968746   48795 main.go:141] libmachine: (test-preload-409125) Ensuring network default is active
	I0930 20:44:35.969060   48795 main.go:141] libmachine: (test-preload-409125) Ensuring network mk-test-preload-409125 is active
	I0930 20:44:35.969425   48795 main.go:141] libmachine: (test-preload-409125) Getting domain xml...
	I0930 20:44:35.970167   48795 main.go:141] libmachine: (test-preload-409125) Creating domain...
	I0930 20:44:37.185655   48795 main.go:141] libmachine: (test-preload-409125) Waiting to get IP...
	I0930 20:44:37.186839   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:37.187226   48795 main.go:141] libmachine: (test-preload-409125) DBG | unable to find current IP address of domain test-preload-409125 in network mk-test-preload-409125
	I0930 20:44:37.187310   48795 main.go:141] libmachine: (test-preload-409125) DBG | I0930 20:44:37.187210   48862 retry.go:31] will retry after 276.447833ms: waiting for machine to come up
	I0930 20:44:37.465792   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:37.466221   48795 main.go:141] libmachine: (test-preload-409125) DBG | unable to find current IP address of domain test-preload-409125 in network mk-test-preload-409125
	I0930 20:44:37.466243   48795 main.go:141] libmachine: (test-preload-409125) DBG | I0930 20:44:37.466175   48862 retry.go:31] will retry after 295.153112ms: waiting for machine to come up
	I0930 20:44:37.762704   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:37.763162   48795 main.go:141] libmachine: (test-preload-409125) DBG | unable to find current IP address of domain test-preload-409125 in network mk-test-preload-409125
	I0930 20:44:37.763183   48795 main.go:141] libmachine: (test-preload-409125) DBG | I0930 20:44:37.763122   48862 retry.go:31] will retry after 320.40857ms: waiting for machine to come up
	I0930 20:44:38.084561   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:38.084962   48795 main.go:141] libmachine: (test-preload-409125) DBG | unable to find current IP address of domain test-preload-409125 in network mk-test-preload-409125
	I0930 20:44:38.084992   48795 main.go:141] libmachine: (test-preload-409125) DBG | I0930 20:44:38.084913   48862 retry.go:31] will retry after 518.82335ms: waiting for machine to come up
	I0930 20:44:38.605795   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:38.606259   48795 main.go:141] libmachine: (test-preload-409125) DBG | unable to find current IP address of domain test-preload-409125 in network mk-test-preload-409125
	I0930 20:44:38.606287   48795 main.go:141] libmachine: (test-preload-409125) DBG | I0930 20:44:38.606207   48862 retry.go:31] will retry after 508.569893ms: waiting for machine to come up
	I0930 20:44:39.116079   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:39.116573   48795 main.go:141] libmachine: (test-preload-409125) DBG | unable to find current IP address of domain test-preload-409125 in network mk-test-preload-409125
	I0930 20:44:39.116595   48795 main.go:141] libmachine: (test-preload-409125) DBG | I0930 20:44:39.116530   48862 retry.go:31] will retry after 657.533865ms: waiting for machine to come up
	I0930 20:44:39.775431   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:39.775892   48795 main.go:141] libmachine: (test-preload-409125) DBG | unable to find current IP address of domain test-preload-409125 in network mk-test-preload-409125
	I0930 20:44:39.775918   48795 main.go:141] libmachine: (test-preload-409125) DBG | I0930 20:44:39.775846   48862 retry.go:31] will retry after 715.80619ms: waiting for machine to come up
	I0930 20:44:40.492836   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:40.493249   48795 main.go:141] libmachine: (test-preload-409125) DBG | unable to find current IP address of domain test-preload-409125 in network mk-test-preload-409125
	I0930 20:44:40.493276   48795 main.go:141] libmachine: (test-preload-409125) DBG | I0930 20:44:40.493201   48862 retry.go:31] will retry after 1.455017778s: waiting for machine to come up
	I0930 20:44:41.950484   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:41.951149   48795 main.go:141] libmachine: (test-preload-409125) DBG | unable to find current IP address of domain test-preload-409125 in network mk-test-preload-409125
	I0930 20:44:41.951180   48795 main.go:141] libmachine: (test-preload-409125) DBG | I0930 20:44:41.951091   48862 retry.go:31] will retry after 1.848607678s: waiting for machine to come up
	I0930 20:44:43.800950   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:43.801479   48795 main.go:141] libmachine: (test-preload-409125) DBG | unable to find current IP address of domain test-preload-409125 in network mk-test-preload-409125
	I0930 20:44:43.801520   48795 main.go:141] libmachine: (test-preload-409125) DBG | I0930 20:44:43.801420   48862 retry.go:31] will retry after 1.449729581s: waiting for machine to come up
	I0930 20:44:45.253154   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:45.253612   48795 main.go:141] libmachine: (test-preload-409125) DBG | unable to find current IP address of domain test-preload-409125 in network mk-test-preload-409125
	I0930 20:44:45.253638   48795 main.go:141] libmachine: (test-preload-409125) DBG | I0930 20:44:45.253563   48862 retry.go:31] will retry after 2.532699358s: waiting for machine to come up
	I0930 20:44:47.788719   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:47.789150   48795 main.go:141] libmachine: (test-preload-409125) DBG | unable to find current IP address of domain test-preload-409125 in network mk-test-preload-409125
	I0930 20:44:47.789172   48795 main.go:141] libmachine: (test-preload-409125) DBG | I0930 20:44:47.789117   48862 retry.go:31] will retry after 2.217772485s: waiting for machine to come up
	I0930 20:44:50.009601   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:50.009995   48795 main.go:141] libmachine: (test-preload-409125) DBG | unable to find current IP address of domain test-preload-409125 in network mk-test-preload-409125
	I0930 20:44:50.010028   48795 main.go:141] libmachine: (test-preload-409125) DBG | I0930 20:44:50.009958   48862 retry.go:31] will retry after 3.733831852s: waiting for machine to come up
	I0930 20:44:53.747160   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:53.747578   48795 main.go:141] libmachine: (test-preload-409125) Found IP for machine: 192.168.39.127
	I0930 20:44:53.747620   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has current primary IP address 192.168.39.127 and MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:53.747629   48795 main.go:141] libmachine: (test-preload-409125) Reserving static IP address...
	I0930 20:44:53.748109   48795 main.go:141] libmachine: (test-preload-409125) Reserved static IP address: 192.168.39.127
	I0930 20:44:53.748143   48795 main.go:141] libmachine: (test-preload-409125) DBG | found host DHCP lease matching {name: "test-preload-409125", mac: "52:54:00:3f:ed:69", ip: "192.168.39.127"} in network mk-test-preload-409125: {Iface:virbr1 ExpiryTime:2024-09-30 21:44:46 +0000 UTC Type:0 Mac:52:54:00:3f:ed:69 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-409125 Clientid:01:52:54:00:3f:ed:69}
	I0930 20:44:53.748157   48795 main.go:141] libmachine: (test-preload-409125) Waiting for SSH to be available...
	I0930 20:44:53.748176   48795 main.go:141] libmachine: (test-preload-409125) DBG | skip adding static IP to network mk-test-preload-409125 - found existing host DHCP lease matching {name: "test-preload-409125", mac: "52:54:00:3f:ed:69", ip: "192.168.39.127"}
	I0930 20:44:53.748185   48795 main.go:141] libmachine: (test-preload-409125) DBG | Getting to WaitForSSH function...
	I0930 20:44:53.750200   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:53.750649   48795 main.go:141] libmachine: (test-preload-409125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:ed:69", ip: ""} in network mk-test-preload-409125: {Iface:virbr1 ExpiryTime:2024-09-30 21:44:46 +0000 UTC Type:0 Mac:52:54:00:3f:ed:69 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-409125 Clientid:01:52:54:00:3f:ed:69}
	I0930 20:44:53.750682   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined IP address 192.168.39.127 and MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:53.750821   48795 main.go:141] libmachine: (test-preload-409125) DBG | Using SSH client type: external
	I0930 20:44:53.750846   48795 main.go:141] libmachine: (test-preload-409125) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/test-preload-409125/id_rsa (-rw-------)
	I0930 20:44:53.750880   48795 main.go:141] libmachine: (test-preload-409125) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.127 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/test-preload-409125/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 20:44:53.750890   48795 main.go:141] libmachine: (test-preload-409125) DBG | About to run SSH command:
	I0930 20:44:53.750898   48795 main.go:141] libmachine: (test-preload-409125) DBG | exit 0
	I0930 20:44:53.875571   48795 main.go:141] libmachine: (test-preload-409125) DBG | SSH cmd err, output: <nil>: 
	I0930 20:44:53.875922   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetConfigRaw
	I0930 20:44:53.876541   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetIP
	I0930 20:44:53.878902   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:53.879253   48795 main.go:141] libmachine: (test-preload-409125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:ed:69", ip: ""} in network mk-test-preload-409125: {Iface:virbr1 ExpiryTime:2024-09-30 21:44:46 +0000 UTC Type:0 Mac:52:54:00:3f:ed:69 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-409125 Clientid:01:52:54:00:3f:ed:69}
	I0930 20:44:53.879285   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined IP address 192.168.39.127 and MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:53.879629   48795 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/test-preload-409125/config.json ...
	I0930 20:44:53.879829   48795 machine.go:93] provisionDockerMachine start ...
	I0930 20:44:53.879845   48795 main.go:141] libmachine: (test-preload-409125) Calling .DriverName
	I0930 20:44:53.880053   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHHostname
	I0930 20:44:53.882260   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:53.882539   48795 main.go:141] libmachine: (test-preload-409125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:ed:69", ip: ""} in network mk-test-preload-409125: {Iface:virbr1 ExpiryTime:2024-09-30 21:44:46 +0000 UTC Type:0 Mac:52:54:00:3f:ed:69 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-409125 Clientid:01:52:54:00:3f:ed:69}
	I0930 20:44:53.882566   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined IP address 192.168.39.127 and MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:53.882705   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHPort
	I0930 20:44:53.882867   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHKeyPath
	I0930 20:44:53.883034   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHKeyPath
	I0930 20:44:53.883153   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHUsername
	I0930 20:44:53.883327   48795 main.go:141] libmachine: Using SSH client type: native
	I0930 20:44:53.883564   48795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0930 20:44:53.883578   48795 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 20:44:53.987906   48795 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 20:44:53.987938   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetMachineName
	I0930 20:44:53.988143   48795 buildroot.go:166] provisioning hostname "test-preload-409125"
	I0930 20:44:53.988165   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetMachineName
	I0930 20:44:53.988323   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHHostname
	I0930 20:44:53.991212   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:53.991632   48795 main.go:141] libmachine: (test-preload-409125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:ed:69", ip: ""} in network mk-test-preload-409125: {Iface:virbr1 ExpiryTime:2024-09-30 21:44:46 +0000 UTC Type:0 Mac:52:54:00:3f:ed:69 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-409125 Clientid:01:52:54:00:3f:ed:69}
	I0930 20:44:53.991664   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined IP address 192.168.39.127 and MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:53.991914   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHPort
	I0930 20:44:53.992106   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHKeyPath
	I0930 20:44:53.992289   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHKeyPath
	I0930 20:44:53.992425   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHUsername
	I0930 20:44:53.992607   48795 main.go:141] libmachine: Using SSH client type: native
	I0930 20:44:53.992781   48795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0930 20:44:53.992793   48795 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-409125 && echo "test-preload-409125" | sudo tee /etc/hostname
	I0930 20:44:54.115477   48795 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-409125
	
	I0930 20:44:54.115510   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHHostname
	I0930 20:44:54.118347   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:54.118746   48795 main.go:141] libmachine: (test-preload-409125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:ed:69", ip: ""} in network mk-test-preload-409125: {Iface:virbr1 ExpiryTime:2024-09-30 21:44:46 +0000 UTC Type:0 Mac:52:54:00:3f:ed:69 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-409125 Clientid:01:52:54:00:3f:ed:69}
	I0930 20:44:54.118778   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined IP address 192.168.39.127 and MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:54.118961   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHPort
	I0930 20:44:54.119124   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHKeyPath
	I0930 20:44:54.119259   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHKeyPath
	I0930 20:44:54.119385   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHUsername
	I0930 20:44:54.119502   48795 main.go:141] libmachine: Using SSH client type: native
	I0930 20:44:54.119701   48795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0930 20:44:54.119720   48795 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-409125' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-409125/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-409125' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 20:44:54.234068   48795 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 20:44:54.234092   48795 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 20:44:54.234112   48795 buildroot.go:174] setting up certificates
	I0930 20:44:54.234121   48795 provision.go:84] configureAuth start
	I0930 20:44:54.234132   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetMachineName
	I0930 20:44:54.234391   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetIP
	I0930 20:44:54.237089   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:54.237450   48795 main.go:141] libmachine: (test-preload-409125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:ed:69", ip: ""} in network mk-test-preload-409125: {Iface:virbr1 ExpiryTime:2024-09-30 21:44:46 +0000 UTC Type:0 Mac:52:54:00:3f:ed:69 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-409125 Clientid:01:52:54:00:3f:ed:69}
	I0930 20:44:54.237471   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined IP address 192.168.39.127 and MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:54.237645   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHHostname
	I0930 20:44:54.240042   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:54.240404   48795 main.go:141] libmachine: (test-preload-409125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:ed:69", ip: ""} in network mk-test-preload-409125: {Iface:virbr1 ExpiryTime:2024-09-30 21:44:46 +0000 UTC Type:0 Mac:52:54:00:3f:ed:69 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-409125 Clientid:01:52:54:00:3f:ed:69}
	I0930 20:44:54.240433   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined IP address 192.168.39.127 and MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:54.240578   48795 provision.go:143] copyHostCerts
	I0930 20:44:54.240647   48795 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 20:44:54.240661   48795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:44:54.240741   48795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 20:44:54.241054   48795 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 20:44:54.241068   48795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:44:54.241118   48795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 20:44:54.241219   48795 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 20:44:54.241227   48795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:44:54.241294   48795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 20:44:54.241392   48795 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.test-preload-409125 san=[127.0.0.1 192.168.39.127 localhost minikube test-preload-409125]
	I0930 20:44:54.320046   48795 provision.go:177] copyRemoteCerts
	I0930 20:44:54.320142   48795 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 20:44:54.320171   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHHostname
	I0930 20:44:54.322885   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:54.323302   48795 main.go:141] libmachine: (test-preload-409125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:ed:69", ip: ""} in network mk-test-preload-409125: {Iface:virbr1 ExpiryTime:2024-09-30 21:44:46 +0000 UTC Type:0 Mac:52:54:00:3f:ed:69 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-409125 Clientid:01:52:54:00:3f:ed:69}
	I0930 20:44:54.323336   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined IP address 192.168.39.127 and MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:54.323511   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHPort
	I0930 20:44:54.323757   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHKeyPath
	I0930 20:44:54.323936   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHUsername
	I0930 20:44:54.324055   48795 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/test-preload-409125/id_rsa Username:docker}
	I0930 20:44:54.405342   48795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 20:44:54.429254   48795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0930 20:44:54.452703   48795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 20:44:54.475151   48795 provision.go:87] duration metric: took 241.018537ms to configureAuth
	I0930 20:44:54.475176   48795 buildroot.go:189] setting minikube options for container-runtime
	I0930 20:44:54.475381   48795 config.go:182] Loaded profile config "test-preload-409125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0930 20:44:54.475464   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHHostname
	I0930 20:44:54.478070   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:54.478337   48795 main.go:141] libmachine: (test-preload-409125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:ed:69", ip: ""} in network mk-test-preload-409125: {Iface:virbr1 ExpiryTime:2024-09-30 21:44:46 +0000 UTC Type:0 Mac:52:54:00:3f:ed:69 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-409125 Clientid:01:52:54:00:3f:ed:69}
	I0930 20:44:54.478367   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined IP address 192.168.39.127 and MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:54.478552   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHPort
	I0930 20:44:54.478773   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHKeyPath
	I0930 20:44:54.478947   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHKeyPath
	I0930 20:44:54.479085   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHUsername
	I0930 20:44:54.479271   48795 main.go:141] libmachine: Using SSH client type: native
	I0930 20:44:54.479440   48795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0930 20:44:54.479454   48795 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 20:44:54.693893   48795 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 20:44:54.693922   48795 machine.go:96] duration metric: took 814.080912ms to provisionDockerMachine
	I0930 20:44:54.693935   48795 start.go:293] postStartSetup for "test-preload-409125" (driver="kvm2")
	I0930 20:44:54.693944   48795 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 20:44:54.693969   48795 main.go:141] libmachine: (test-preload-409125) Calling .DriverName
	I0930 20:44:54.694270   48795 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 20:44:54.694296   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHHostname
	I0930 20:44:54.697130   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:54.697778   48795 main.go:141] libmachine: (test-preload-409125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:ed:69", ip: ""} in network mk-test-preload-409125: {Iface:virbr1 ExpiryTime:2024-09-30 21:44:46 +0000 UTC Type:0 Mac:52:54:00:3f:ed:69 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-409125 Clientid:01:52:54:00:3f:ed:69}
	I0930 20:44:54.697807   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined IP address 192.168.39.127 and MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:54.697982   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHPort
	I0930 20:44:54.698196   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHKeyPath
	I0930 20:44:54.698365   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHUsername
	I0930 20:44:54.698519   48795 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/test-preload-409125/id_rsa Username:docker}
	I0930 20:44:54.781908   48795 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 20:44:54.786017   48795 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 20:44:54.786038   48795 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 20:44:54.786107   48795 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 20:44:54.786183   48795 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 20:44:54.786293   48795 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 20:44:54.795481   48795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:44:54.818271   48795 start.go:296] duration metric: took 124.321352ms for postStartSetup
	I0930 20:44:54.818315   48795 fix.go:56] duration metric: took 18.873959662s for fixHost
	I0930 20:44:54.818346   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHHostname
	I0930 20:44:54.821113   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:54.821437   48795 main.go:141] libmachine: (test-preload-409125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:ed:69", ip: ""} in network mk-test-preload-409125: {Iface:virbr1 ExpiryTime:2024-09-30 21:44:46 +0000 UTC Type:0 Mac:52:54:00:3f:ed:69 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-409125 Clientid:01:52:54:00:3f:ed:69}
	I0930 20:44:54.821465   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined IP address 192.168.39.127 and MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:54.821585   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHPort
	I0930 20:44:54.821773   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHKeyPath
	I0930 20:44:54.821925   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHKeyPath
	I0930 20:44:54.822063   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHUsername
	I0930 20:44:54.822243   48795 main.go:141] libmachine: Using SSH client type: native
	I0930 20:44:54.822415   48795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0930 20:44:54.822425   48795 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 20:44:54.924088   48795 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727729094.900407025
	
	I0930 20:44:54.924110   48795 fix.go:216] guest clock: 1727729094.900407025
	I0930 20:44:54.924117   48795 fix.go:229] Guest: 2024-09-30 20:44:54.900407025 +0000 UTC Remote: 2024-09-30 20:44:54.818331101 +0000 UTC m=+30.932024701 (delta=82.075924ms)
	I0930 20:44:54.924134   48795 fix.go:200] guest clock delta is within tolerance: 82.075924ms
	I0930 20:44:54.924139   48795 start.go:83] releasing machines lock for "test-preload-409125", held for 18.979802097s
	I0930 20:44:54.924155   48795 main.go:141] libmachine: (test-preload-409125) Calling .DriverName
	I0930 20:44:54.924444   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetIP
	I0930 20:44:54.927116   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:54.927499   48795 main.go:141] libmachine: (test-preload-409125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:ed:69", ip: ""} in network mk-test-preload-409125: {Iface:virbr1 ExpiryTime:2024-09-30 21:44:46 +0000 UTC Type:0 Mac:52:54:00:3f:ed:69 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-409125 Clientid:01:52:54:00:3f:ed:69}
	I0930 20:44:54.927544   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined IP address 192.168.39.127 and MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:54.927716   48795 main.go:141] libmachine: (test-preload-409125) Calling .DriverName
	I0930 20:44:54.928224   48795 main.go:141] libmachine: (test-preload-409125) Calling .DriverName
	I0930 20:44:54.928406   48795 main.go:141] libmachine: (test-preload-409125) Calling .DriverName
	I0930 20:44:54.928493   48795 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 20:44:54.928530   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHHostname
	I0930 20:44:54.928638   48795 ssh_runner.go:195] Run: cat /version.json
	I0930 20:44:54.928665   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHHostname
	I0930 20:44:54.931189   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:54.931549   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:54.931608   48795 main.go:141] libmachine: (test-preload-409125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:ed:69", ip: ""} in network mk-test-preload-409125: {Iface:virbr1 ExpiryTime:2024-09-30 21:44:46 +0000 UTC Type:0 Mac:52:54:00:3f:ed:69 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-409125 Clientid:01:52:54:00:3f:ed:69}
	I0930 20:44:54.931637   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined IP address 192.168.39.127 and MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:54.931758   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHPort
	I0930 20:44:54.931924   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHKeyPath
	I0930 20:44:54.931976   48795 main.go:141] libmachine: (test-preload-409125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:ed:69", ip: ""} in network mk-test-preload-409125: {Iface:virbr1 ExpiryTime:2024-09-30 21:44:46 +0000 UTC Type:0 Mac:52:54:00:3f:ed:69 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-409125 Clientid:01:52:54:00:3f:ed:69}
	I0930 20:44:54.931997   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined IP address 192.168.39.127 and MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:54.932059   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHUsername
	I0930 20:44:54.932116   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHPort
	I0930 20:44:54.932196   48795 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/test-preload-409125/id_rsa Username:docker}
	I0930 20:44:54.932236   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHKeyPath
	I0930 20:44:54.932371   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHUsername
	I0930 20:44:54.932504   48795 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/test-preload-409125/id_rsa Username:docker}
	I0930 20:44:55.045343   48795 ssh_runner.go:195] Run: systemctl --version
	I0930 20:44:55.051310   48795 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 20:44:55.194567   48795 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 20:44:55.200363   48795 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 20:44:55.200440   48795 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 20:44:55.216214   48795 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 20:44:55.216240   48795 start.go:495] detecting cgroup driver to use...
	I0930 20:44:55.216304   48795 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 20:44:55.231388   48795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 20:44:55.245774   48795 docker.go:217] disabling cri-docker service (if available) ...
	I0930 20:44:55.245839   48795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 20:44:55.259920   48795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 20:44:55.274316   48795 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 20:44:55.390841   48795 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 20:44:55.562523   48795 docker.go:233] disabling docker service ...
	I0930 20:44:55.562599   48795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 20:44:55.577278   48795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 20:44:55.590662   48795 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 20:44:55.721165   48795 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 20:44:55.839500   48795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 20:44:55.852818   48795 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 20:44:55.871344   48795 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0930 20:44:55.871411   48795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:44:55.881966   48795 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 20:44:55.882028   48795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:44:55.892797   48795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:44:55.903212   48795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:44:55.913847   48795 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 20:44:55.925629   48795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:44:55.936681   48795 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:44:55.954784   48795 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:44:55.965918   48795 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 20:44:55.976068   48795 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 20:44:55.976130   48795 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 20:44:55.990326   48795 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 20:44:56.000662   48795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:44:56.129197   48795 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 20:44:56.220697   48795 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 20:44:56.220769   48795 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 20:44:56.225109   48795 start.go:563] Will wait 60s for crictl version
	I0930 20:44:56.225162   48795 ssh_runner.go:195] Run: which crictl
	I0930 20:44:56.228762   48795 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 20:44:56.269081   48795 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 20:44:56.269173   48795 ssh_runner.go:195] Run: crio --version
	I0930 20:44:56.301656   48795 ssh_runner.go:195] Run: crio --version
	I0930 20:44:56.332523   48795 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0930 20:44:56.334180   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetIP
	I0930 20:44:56.337023   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:56.337456   48795 main.go:141] libmachine: (test-preload-409125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:ed:69", ip: ""} in network mk-test-preload-409125: {Iface:virbr1 ExpiryTime:2024-09-30 21:44:46 +0000 UTC Type:0 Mac:52:54:00:3f:ed:69 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-409125 Clientid:01:52:54:00:3f:ed:69}
	I0930 20:44:56.337481   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined IP address 192.168.39.127 and MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:44:56.337765   48795 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 20:44:56.341725   48795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 20:44:56.353862   48795 kubeadm.go:883] updating cluster {Name:test-preload-409125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-409125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 20:44:56.353977   48795 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0930 20:44:56.354021   48795 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 20:44:56.393880   48795 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0930 20:44:56.393935   48795 ssh_runner.go:195] Run: which lz4
	I0930 20:44:56.397949   48795 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 20:44:56.402206   48795 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 20:44:56.402257   48795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0930 20:44:57.863462   48795 crio.go:462] duration metric: took 1.46557865s to copy over tarball
	I0930 20:44:57.863551   48795 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 20:45:00.334614   48795 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.471012009s)
	I0930 20:45:00.334641   48795 crio.go:469] duration metric: took 2.471152004s to extract the tarball
	I0930 20:45:00.334648   48795 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 20:45:00.376607   48795 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 20:45:00.417969   48795 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0930 20:45:00.418010   48795 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0930 20:45:00.418076   48795 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 20:45:00.418105   48795 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0930 20:45:00.418130   48795 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0930 20:45:00.418151   48795 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0930 20:45:00.418080   48795 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0930 20:45:00.418248   48795 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0930 20:45:00.418255   48795 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0930 20:45:00.418455   48795 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0930 20:45:00.419602   48795 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0930 20:45:00.419607   48795 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 20:45:00.419603   48795 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0930 20:45:00.419603   48795 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0930 20:45:00.419727   48795 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0930 20:45:00.419685   48795 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0930 20:45:00.419684   48795 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0930 20:45:00.419692   48795 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0930 20:45:00.644225   48795 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0930 20:45:00.680826   48795 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0930 20:45:00.680890   48795 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0930 20:45:00.680935   48795 ssh_runner.go:195] Run: which crictl
	I0930 20:45:00.684871   48795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0930 20:45:00.720209   48795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0930 20:45:00.742530   48795 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0930 20:45:00.742534   48795 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0930 20:45:00.753785   48795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0930 20:45:00.754610   48795 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0930 20:45:00.767093   48795 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0930 20:45:00.772434   48795 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0930 20:45:00.823984   48795 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0930 20:45:00.841954   48795 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0930 20:45:00.842003   48795 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0930 20:45:00.842051   48795 ssh_runner.go:195] Run: which crictl
	I0930 20:45:00.851828   48795 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0930 20:45:00.851863   48795 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0930 20:45:00.851913   48795 ssh_runner.go:195] Run: which crictl
	I0930 20:45:00.886936   48795 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0930 20:45:00.887041   48795 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0930 20:45:00.896429   48795 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0930 20:45:00.896471   48795 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0930 20:45:00.896517   48795 ssh_runner.go:195] Run: which crictl
	I0930 20:45:00.901133   48795 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0930 20:45:00.901172   48795 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0930 20:45:00.901215   48795 ssh_runner.go:195] Run: which crictl
	I0930 20:45:00.916338   48795 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0930 20:45:00.916374   48795 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0930 20:45:00.916418   48795 ssh_runner.go:195] Run: which crictl
	I0930 20:45:00.941801   48795 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0930 20:45:00.941849   48795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0930 20:45:00.941851   48795 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0930 20:45:00.941889   48795 ssh_runner.go:195] Run: which crictl
	I0930 20:45:00.941954   48795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0930 20:45:00.941963   48795 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0930 20:45:00.942027   48795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0930 20:45:00.942035   48795 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0930 20:45:00.942062   48795 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0930 20:45:00.942128   48795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0930 20:45:00.942181   48795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0930 20:45:01.070022   48795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0930 20:45:01.597019   48795 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 20:45:04.196709   48795 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7: (3.254720715s)
	I0930 20:45:04.196786   48795 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (3.254708692s)
	I0930 20:45:04.196803   48795 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0930 20:45:04.196817   48795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0930 20:45:04.196893   48795 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0: (3.25498894s)
	I0930 20:45:04.196948   48795 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4: (3.254804868s)
	I0930 20:45:04.196959   48795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0930 20:45:04.196993   48795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0930 20:45:04.197006   48795 ssh_runner.go:235] Completed: which crictl: (3.255100251s)
	I0930 20:45:04.197047   48795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0930 20:45:04.197051   48795 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6: (3.255004182s)
	I0930 20:45:04.197106   48795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0930 20:45:04.197120   48795 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4: (3.127067491s)
	I0930 20:45:04.197150   48795 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.600105503s)
	I0930 20:45:04.197162   48795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0930 20:45:04.320026   48795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0930 20:45:04.321895   48795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0930 20:45:04.321951   48795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0930 20:45:04.321977   48795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0930 20:45:04.322065   48795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0930 20:45:04.322118   48795 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0930 20:45:04.322188   48795 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0930 20:45:04.403024   48795 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0930 20:45:04.403141   48795 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0930 20:45:04.426244   48795 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0930 20:45:04.426378   48795 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0930 20:45:04.430162   48795 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0930 20:45:04.430184   48795 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0930 20:45:04.430221   48795 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0930 20:45:04.430283   48795 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0930 20:45:04.430364   48795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0930 20:45:04.430382   48795 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0930 20:45:04.430409   48795 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0930 20:45:04.430452   48795 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0930 20:45:04.430384   48795 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0930 20:45:04.434660   48795 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0930 20:45:04.895420   48795 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0930 20:45:04.895475   48795 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0930 20:45:04.895546   48795 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0930 20:45:04.895553   48795 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0930 20:45:04.895605   48795 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0930 20:45:04.895647   48795 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0930 20:45:04.895656   48795 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0930 20:45:07.142779   48795 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.247182487s)
	I0930 20:45:07.142810   48795 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0930 20:45:07.142846   48795 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0930 20:45:07.142858   48795 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.247181801s)
	I0930 20:45:07.142882   48795 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0930 20:45:07.142910   48795 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0930 20:45:07.283181   48795 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0930 20:45:07.283248   48795 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0930 20:45:07.283325   48795 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0930 20:45:07.631456   48795 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0930 20:45:07.631499   48795 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0930 20:45:07.631568   48795 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0930 20:45:08.370585   48795 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0930 20:45:08.370641   48795 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0930 20:45:08.370715   48795 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0930 20:45:09.024037   48795 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0930 20:45:09.024106   48795 cache_images.go:123] Successfully loaded all cached images
	I0930 20:45:09.024115   48795 cache_images.go:92] duration metric: took 8.606091157s to LoadCachedImages
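
The lines above show the cache-load pattern minikube uses with CRI-O: probe the runtime with "podman image inspect", clear any stale tag with "crictl rmi", then stream the cached tarball in with "podman load". A minimal Go sketch of that inspect-then-load step follows; it is not minikube's cache_images implementation, and the tag, image ID, and tarball path are illustrative values lifted from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureImage loads a cached image tarball into the runtime unless the tag
// already resolves to the expected image ID.
func ensureImage(tag, wantID, tarball string) error {
	// Ask podman for the ID currently stored under this tag, if any.
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", tag).Output()
	if err == nil && strings.TrimSpace(string(out)) == wantID {
		return nil // image already present with the right digest
	}
	// Drop whatever is tagged there, then reload from the cached tarball.
	_ = exec.Command("sudo", "crictl", "rmi", tag).Run()
	if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
		return fmt.Errorf("podman load %s: %w", tarball, err)
	}
	return nil
}

func main() {
	if err := ensureImage(
		"registry.k8s.io/kube-proxy:v1.24.4",
		"7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7",
		"/var/lib/minikube/images/kube-proxy_v1.24.4",
	); err != nil {
		fmt.Println(err)
	}
}
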
	I0930 20:45:09.024129   48795 kubeadm.go:934] updating node { 192.168.39.127 8443 v1.24.4 crio true true} ...
	I0930 20:45:09.024251   48795 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-409125 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.127
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-409125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 20:45:09.024321   48795 ssh_runner.go:195] Run: crio config
	I0930 20:45:09.071955   48795 cni.go:84] Creating CNI manager for ""
	I0930 20:45:09.071978   48795 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 20:45:09.071988   48795 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 20:45:09.072005   48795 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.127 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-409125 NodeName:test-preload-409125 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.127"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.127 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 20:45:09.072129   48795 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.127
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-409125"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.127
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.127"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 20:45:09.072186   48795 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0930 20:45:09.081654   48795 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 20:45:09.081721   48795 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 20:45:09.090576   48795 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0930 20:45:09.106442   48795 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 20:45:09.121673   48795 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0930 20:45:09.137982   48795 ssh_runner.go:195] Run: grep 192.168.39.127	control-plane.minikube.internal$ /etc/hosts
	I0930 20:45:09.141477   48795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.127	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
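
The grep check plus the bash one-liner above make the /etc/hosts update idempotent: strip any previous line that ends in a tab followed by control-plane.minikube.internal, then append the current mapping. The same rewrite sketched in Go is shown below; the hostsPath parameter is deliberately pointed at a scratch file rather than the real /etc/hosts, and the function name is illustrative.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so that exactly one line maps ip to
// host, dropping blank lines and any previous mapping for that host first.
func ensureHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Writing to a scratch copy; point this at /etc/hosts only with care.
	if err := ensureHostsEntry("hosts.test", "192.168.39.127",
		"control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
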
	I0930 20:45:09.152962   48795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:45:09.275258   48795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:45:09.290737   48795 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/test-preload-409125 for IP: 192.168.39.127
	I0930 20:45:09.290761   48795 certs.go:194] generating shared ca certs ...
	I0930 20:45:09.290781   48795 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:45:09.290974   48795 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 20:45:09.291038   48795 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 20:45:09.291052   48795 certs.go:256] generating profile certs ...
	I0930 20:45:09.291151   48795 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/test-preload-409125/client.key
	I0930 20:45:09.291228   48795 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/test-preload-409125/apiserver.key.20131fb1
	I0930 20:45:09.291282   48795 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/test-preload-409125/proxy-client.key
	I0930 20:45:09.291469   48795 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 20:45:09.291516   48795 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 20:45:09.291547   48795 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 20:45:09.291583   48795 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 20:45:09.291616   48795 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 20:45:09.291647   48795 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 20:45:09.291715   48795 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:45:09.292471   48795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 20:45:09.326258   48795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 20:45:09.364306   48795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 20:45:09.392435   48795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 20:45:09.428745   48795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/test-preload-409125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0930 20:45:09.459536   48795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/test-preload-409125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 20:45:09.500059   48795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/test-preload-409125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 20:45:09.524393   48795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/test-preload-409125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 20:45:09.549028   48795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 20:45:09.575291   48795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 20:45:09.600182   48795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 20:45:09.623171   48795 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 20:45:09.639317   48795 ssh_runner.go:195] Run: openssl version
	I0930 20:45:09.647150   48795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 20:45:09.659124   48795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 20:45:09.663584   48795 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 20:45:09.663705   48795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 20:45:09.669365   48795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 20:45:09.679864   48795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 20:45:09.690352   48795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:45:09.694830   48795 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:45:09.694878   48795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:45:09.700503   48795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 20:45:09.711557   48795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 20:45:09.722109   48795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 20:45:09.726249   48795 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 20:45:09.726301   48795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 20:45:09.732100   48795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 20:45:09.744015   48795 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 20:45:09.748456   48795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 20:45:09.754573   48795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 20:45:09.760566   48795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 20:45:09.766701   48795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 20:45:09.772772   48795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 20:45:09.778559   48795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
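
The openssl calls above do two separate jobs: install each CA under its subject-hash name in /etc/ssl/certs ("x509 -hash -noout" followed by "ln -fs <cert> <hash>.0") and confirm each serving certificate is still valid for at least 24 hours ("-checkend 86400"). A hedged Go sketch of both checks, shelling out to openssl exactly as the log does; the helper names and the certificate path in main are illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHashLink returns the symlink name an OpenSSL-style trust store
// expects, e.g. /etc/ssl/certs/b5213941.0 for minikubeCA.pem.
func subjectHashLink(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0", nil
}

// validFor24h mirrors "openssl x509 -checkend 86400": a zero exit status
// means the certificate does not expire within the next day.
func validFor24h(certPath string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", certPath,
		"-checkend", "86400").Run() == nil
}

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
	if link, err := subjectHashLink(cert); err == nil {
		fmt.Println("would link", cert, "->", link)
	}
	fmt.Println("valid for 24h:", validFor24h(cert))
}
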
	I0930 20:45:09.784752   48795 kubeadm.go:392] StartCluster: {Name:test-preload-409125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-409125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:45:09.784831   48795 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 20:45:09.784876   48795 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 20:45:09.831840   48795 cri.go:89] found id: ""
	I0930 20:45:09.831906   48795 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 20:45:09.841845   48795 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 20:45:09.841868   48795 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 20:45:09.841911   48795 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 20:45:09.851772   48795 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 20:45:09.852194   48795 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-409125" does not appear in /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:45:09.852339   48795 kubeconfig.go:62] /home/jenkins/minikube-integration/19736-7672/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-409125" cluster setting kubeconfig missing "test-preload-409125" context setting]
	I0930 20:45:09.852665   48795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:45:09.853244   48795 kapi.go:59] client config for test-preload-409125: &rest.Config{Host:"https://192.168.39.127:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/test-preload-409125/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/test-preload-409125/client.key", CAFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0930 20:45:09.853831   48795 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 20:45:09.863278   48795 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.127
	I0930 20:45:09.863309   48795 kubeadm.go:1160] stopping kube-system containers ...
	I0930 20:45:09.863320   48795 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 20:45:09.863368   48795 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 20:45:09.898071   48795 cri.go:89] found id: ""
	I0930 20:45:09.898129   48795 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 20:45:09.913770   48795 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 20:45:09.924237   48795 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 20:45:09.924266   48795 kubeadm.go:157] found existing configuration files:
	
	I0930 20:45:09.924325   48795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 20:45:09.933669   48795 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 20:45:09.933755   48795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 20:45:09.943260   48795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 20:45:09.952568   48795 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 20:45:09.952624   48795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 20:45:09.962451   48795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 20:45:09.971479   48795 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 20:45:09.971544   48795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 20:45:09.981028   48795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 20:45:09.990061   48795 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 20:45:09.990128   48795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
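
Each of the four grep/rm pairs above asks the same question: does the kubeconfig on disk already point at https://control-plane.minikube.internal:8443, and if not (or if the file is missing entirely, as here), remove it so the kubeadm init phases below can regenerate it. A compact sketch of that pruning loop, assuming the grep-over-SSH is replaced by a local file read; the function name is illustrative.

package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// pruneStaleConf removes path unless it already references endpoint,
// mirroring the "grep <endpoint> <conf>" followed by "rm -f <conf>" pattern.
func pruneStaleConf(path string) {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return // config already points at the expected endpoint
	}
	// Missing file or wrong endpoint: delete so kubeadm regenerates it.
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		fmt.Println("remove:", err)
	}
}

func main() {
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		pruneStaleConf(conf)
	}
}
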
	I0930 20:45:09.999287   48795 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 20:45:10.008980   48795 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 20:45:10.097837   48795 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 20:45:10.743284   48795 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 20:45:10.991808   48795 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 20:45:11.051607   48795 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 20:45:11.121547   48795 api_server.go:52] waiting for apiserver process to appear ...
	I0930 20:45:11.121634   48795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 20:45:11.622047   48795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 20:45:12.122517   48795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 20:45:12.154534   48795 api_server.go:72] duration metric: took 1.032986741s to wait for apiserver process to appear ...
	I0930 20:45:12.154567   48795 api_server.go:88] waiting for apiserver healthz status ...
	I0930 20:45:12.154602   48795 api_server.go:253] Checking apiserver healthz at https://192.168.39.127:8443/healthz ...
	I0930 20:45:12.155238   48795 api_server.go:269] stopped: https://192.168.39.127:8443/healthz: Get "https://192.168.39.127:8443/healthz": dial tcp 192.168.39.127:8443: connect: connection refused
	I0930 20:45:12.654804   48795 api_server.go:253] Checking apiserver healthz at https://192.168.39.127:8443/healthz ...
	I0930 20:45:16.080833   48795 api_server.go:279] https://192.168.39.127:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 20:45:16.080877   48795 api_server.go:103] status: https://192.168.39.127:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 20:45:16.080892   48795 api_server.go:253] Checking apiserver healthz at https://192.168.39.127:8443/healthz ...
	I0930 20:45:16.133726   48795 api_server.go:279] https://192.168.39.127:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 20:45:16.133757   48795 api_server.go:103] status: https://192.168.39.127:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 20:45:16.155033   48795 api_server.go:253] Checking apiserver healthz at https://192.168.39.127:8443/healthz ...
	I0930 20:45:16.165384   48795 api_server.go:279] https://192.168.39.127:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 20:45:16.165451   48795 api_server.go:103] status: https://192.168.39.127:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 20:45:16.654751   48795 api_server.go:253] Checking apiserver healthz at https://192.168.39.127:8443/healthz ...
	I0930 20:45:16.660998   48795 api_server.go:279] https://192.168.39.127:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 20:45:16.661037   48795 api_server.go:103] status: https://192.168.39.127:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 20:45:17.155580   48795 api_server.go:253] Checking apiserver healthz at https://192.168.39.127:8443/healthz ...
	I0930 20:45:17.170090   48795 api_server.go:279] https://192.168.39.127:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 20:45:17.170137   48795 api_server.go:103] status: https://192.168.39.127:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 20:45:17.654684   48795 api_server.go:253] Checking apiserver healthz at https://192.168.39.127:8443/healthz ...
	I0930 20:45:17.661050   48795 api_server.go:279] https://192.168.39.127:8443/healthz returned 200:
	ok
	I0930 20:45:17.668690   48795 api_server.go:141] control plane version: v1.24.4
	I0930 20:45:17.668720   48795 api_server.go:131] duration metric: took 5.5141447s to wait for apiserver health ...
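
The healthz polling above starts at connection-refused, passes through 403 (the anonymous probe before RBAC bootstrap) and 500 (poststarthooks still failing), and settles at 200 after roughly 5.5 seconds. A minimal polling loop in the same spirit is sketched below; it is not api_server.go itself, and it skips TLS verification only because the probe runs before the cluster CA is trusted locally.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it answers 200 OK
// or the deadline passes. 403 and 500 responses are expected while bootstrap
// poststarthooks are still running, so they are simply retried.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 3 * time.Second,
		// The probe runs before the cluster CA is installed locally.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d: %.40s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitHealthz("https://192.168.39.127:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
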
	I0930 20:45:17.668731   48795 cni.go:84] Creating CNI manager for ""
	I0930 20:45:17.668745   48795 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 20:45:17.670720   48795 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 20:45:17.672071   48795 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 20:45:17.682479   48795 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 20:45:17.702277   48795 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 20:45:17.702384   48795 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0930 20:45:17.702410   48795 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0930 20:45:17.720872   48795 system_pods.go:59] 8 kube-system pods found
	I0930 20:45:17.720908   48795 system_pods.go:61] "coredns-6d4b75cb6d-48ghg" [8510b89b-b628-42c0-873b-02b30380b8a9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 20:45:17.720916   48795 system_pods.go:61] "coredns-6d4b75cb6d-m4hsb" [ee0c16a5-903b-46f3-a4eb-28d9f9a0c6c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 20:45:17.720924   48795 system_pods.go:61] "etcd-test-preload-409125" [9b6e7ee6-bb3c-42d2-ad5f-8cbb784d6a38] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0930 20:45:17.720929   48795 system_pods.go:61] "kube-apiserver-test-preload-409125" [3b9e5866-b41c-40ba-9e74-4cfbd03932ec] Running
	I0930 20:45:17.720933   48795 system_pods.go:61] "kube-controller-manager-test-preload-409125" [7bd15f25-7020-4713-b558-0d2f3a717bff] Running
	I0930 20:45:17.720938   48795 system_pods.go:61] "kube-proxy-2j7wm" [8fbbe95d-5df0-45af-99b7-cecdba184b51] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0930 20:45:17.720941   48795 system_pods.go:61] "kube-scheduler-test-preload-409125" [301463f3-dbca-413b-b7a0-4721b31abb10] Running
	I0930 20:45:17.720946   48795 system_pods.go:61] "storage-provisioner" [ced6326a-cf84-46de-bd8d-1f856a2d4863] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0930 20:45:17.720954   48795 system_pods.go:74] duration metric: took 18.635006ms to wait for pod list to return data ...
	I0930 20:45:17.720960   48795 node_conditions.go:102] verifying NodePressure condition ...
	I0930 20:45:17.727290   48795 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:45:17.727325   48795 node_conditions.go:123] node cpu capacity is 2
	I0930 20:45:17.727339   48795 node_conditions.go:105] duration metric: took 6.374311ms to run NodePressure ...
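
system_pods.go and node_conditions.go above use a live clientset to enumerate kube-system pods and to read node ephemeral-storage and CPU capacity. Roughly the same two reads with client-go are sketched here; the kubeconfig path is the profile kubeconfig named later in this log, and none of this is minikube's own verification code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as written by this test profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19736-7672/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Mirror system_pods.go: list everything in kube-system.
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	// Mirror node_conditions.go: report ephemeral storage and CPU capacity.
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
}
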
	I0930 20:45:17.727359   48795 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 20:45:17.916490   48795 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0930 20:45:17.921959   48795 kubeadm.go:739] kubelet initialised
	I0930 20:45:17.921991   48795 kubeadm.go:740] duration metric: took 5.459177ms waiting for restarted kubelet to initialise ...
	I0930 20:45:17.922001   48795 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 20:45:17.932070   48795 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-48ghg" in "kube-system" namespace to be "Ready" ...
	I0930 20:45:17.939143   48795 pod_ready.go:98] node "test-preload-409125" hosting pod "coredns-6d4b75cb6d-48ghg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-409125" has status "Ready":"False"
	I0930 20:45:17.939177   48795 pod_ready.go:82] duration metric: took 7.071912ms for pod "coredns-6d4b75cb6d-48ghg" in "kube-system" namespace to be "Ready" ...
	E0930 20:45:17.939190   48795 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-409125" hosting pod "coredns-6d4b75cb6d-48ghg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-409125" has status "Ready":"False"
	I0930 20:45:17.939197   48795 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-m4hsb" in "kube-system" namespace to be "Ready" ...
	I0930 20:45:17.940939   48795 pod_ready.go:98] error getting pod "coredns-6d4b75cb6d-m4hsb" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-m4hsb" not found
	I0930 20:45:17.940962   48795 pod_ready.go:82] duration metric: took 1.742666ms for pod "coredns-6d4b75cb6d-m4hsb" in "kube-system" namespace to be "Ready" ...
	E0930 20:45:17.940974   48795 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6d4b75cb6d-m4hsb" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-m4hsb" not found
	I0930 20:45:17.940982   48795 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-409125" in "kube-system" namespace to be "Ready" ...
	I0930 20:45:17.945334   48795 pod_ready.go:98] node "test-preload-409125" hosting pod "etcd-test-preload-409125" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-409125" has status "Ready":"False"
	I0930 20:45:17.945356   48795 pod_ready.go:82] duration metric: took 4.365984ms for pod "etcd-test-preload-409125" in "kube-system" namespace to be "Ready" ...
	E0930 20:45:17.945375   48795 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-409125" hosting pod "etcd-test-preload-409125" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-409125" has status "Ready":"False"
	I0930 20:45:17.945383   48795 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-409125" in "kube-system" namespace to be "Ready" ...
	I0930 20:45:17.950006   48795 pod_ready.go:98] node "test-preload-409125" hosting pod "kube-apiserver-test-preload-409125" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-409125" has status "Ready":"False"
	I0930 20:45:17.950036   48795 pod_ready.go:82] duration metric: took 4.642819ms for pod "kube-apiserver-test-preload-409125" in "kube-system" namespace to be "Ready" ...
	E0930 20:45:17.950044   48795 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-409125" hosting pod "kube-apiserver-test-preload-409125" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-409125" has status "Ready":"False"
	I0930 20:45:17.950052   48795 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-409125" in "kube-system" namespace to be "Ready" ...
	I0930 20:45:18.307432   48795 pod_ready.go:98] node "test-preload-409125" hosting pod "kube-controller-manager-test-preload-409125" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-409125" has status "Ready":"False"
	I0930 20:45:18.307466   48795 pod_ready.go:82] duration metric: took 357.402537ms for pod "kube-controller-manager-test-preload-409125" in "kube-system" namespace to be "Ready" ...
	E0930 20:45:18.307479   48795 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-409125" hosting pod "kube-controller-manager-test-preload-409125" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-409125" has status "Ready":"False"
	I0930 20:45:18.307489   48795 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-2j7wm" in "kube-system" namespace to be "Ready" ...
	I0930 20:45:18.706500   48795 pod_ready.go:98] node "test-preload-409125" hosting pod "kube-proxy-2j7wm" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-409125" has status "Ready":"False"
	I0930 20:45:18.706527   48795 pod_ready.go:82] duration metric: took 399.028286ms for pod "kube-proxy-2j7wm" in "kube-system" namespace to be "Ready" ...
	E0930 20:45:18.706535   48795 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-409125" hosting pod "kube-proxy-2j7wm" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-409125" has status "Ready":"False"
	I0930 20:45:18.706548   48795 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-409125" in "kube-system" namespace to be "Ready" ...
	I0930 20:45:19.105909   48795 pod_ready.go:98] node "test-preload-409125" hosting pod "kube-scheduler-test-preload-409125" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-409125" has status "Ready":"False"
	I0930 20:45:19.105938   48795 pod_ready.go:82] duration metric: took 399.383404ms for pod "kube-scheduler-test-preload-409125" in "kube-system" namespace to be "Ready" ...
	E0930 20:45:19.105948   48795 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-409125" hosting pod "kube-scheduler-test-preload-409125" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-409125" has status "Ready":"False"
	I0930 20:45:19.105958   48795 pod_ready.go:39] duration metric: took 1.183944653s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
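
The repeated "skipping!" messages above are not pod failures: pod_ready.go refuses to count a pod as Ready while its hosting node still reports Ready=False after the kubelet restart. A small helper in the same vein, checking a node's Ready condition with client-go; it is a library-style sketch meant to compose with a clientset like the one built in the previous example, and the function name is illustrative.

package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeReady reports whether the named node currently has a Ready=True
// condition, which is the gate pod_ready.go applies before it will treat a
// hosted pod's own Ready condition as meaningful.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
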
	I0930 20:45:19.105980   48795 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 20:45:19.117900   48795 ops.go:34] apiserver oom_adj: -16
	I0930 20:45:19.117926   48795 kubeadm.go:597] duration metric: took 9.276051001s to restartPrimaryControlPlane
	I0930 20:45:19.117937   48795 kubeadm.go:394] duration metric: took 9.333190077s to StartCluster
	I0930 20:45:19.117957   48795 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:45:19.118039   48795 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:45:19.118676   48795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:45:19.118888   48795 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:45:19.119000   48795 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 20:45:19.119111   48795 addons.go:69] Setting storage-provisioner=true in profile "test-preload-409125"
	I0930 20:45:19.119127   48795 addons.go:69] Setting default-storageclass=true in profile "test-preload-409125"
	I0930 20:45:19.119145   48795 addons.go:234] Setting addon storage-provisioner=true in "test-preload-409125"
	W0930 20:45:19.119206   48795 addons.go:243] addon storage-provisioner should already be in state true
	I0930 20:45:19.119235   48795 host.go:66] Checking if "test-preload-409125" exists ...
	I0930 20:45:19.119158   48795 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-409125"
	I0930 20:45:19.119124   48795 config.go:182] Loaded profile config "test-preload-409125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0930 20:45:19.119806   48795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:45:19.119830   48795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:45:19.119858   48795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:45:19.119937   48795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:45:19.120748   48795 out.go:177] * Verifying Kubernetes components...
	I0930 20:45:19.122091   48795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:45:19.135147   48795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44365
	I0930 20:45:19.135599   48795 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:45:19.136052   48795 main.go:141] libmachine: Using API Version  1
	I0930 20:45:19.136072   48795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:45:19.136409   48795 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:45:19.136605   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetState
	I0930 20:45:19.139054   48795 kapi.go:59] client config for test-preload-409125: &rest.Config{Host:"https://192.168.39.127:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/test-preload-409125/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/profiles/test-preload-409125/client.key", CAFile:"/home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
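The rest.Config dump above is built from the profile's client certificate, key, and CA. A minimal sketch, assuming client-go and the kubeconfig path written earlier in this run, of constructing an equivalent clientset (not minikube's own kapi helper, just the standard path):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path as updated by this run; adjust for your environment.
	kubeconfig := "/home/jenkins/minikube-integration/19736-7672/kubeconfig"

	// BuildConfigFromFlags reads the kubeconfig and returns a *rest.Config
	// carrying the same host, client cert/key, and CA seen in the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("clientset ready:", cs != nil)
}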
	I0930 20:45:19.139382   48795 addons.go:234] Setting addon default-storageclass=true in "test-preload-409125"
	W0930 20:45:19.139398   48795 addons.go:243] addon default-storageclass should already be in state true
	I0930 20:45:19.139424   48795 host.go:66] Checking if "test-preload-409125" exists ...
	I0930 20:45:19.139747   48795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:45:19.139793   48795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:45:19.139880   48795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35381
	I0930 20:45:19.140392   48795 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:45:19.140856   48795 main.go:141] libmachine: Using API Version  1
	I0930 20:45:19.140881   48795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:45:19.141197   48795 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:45:19.141785   48795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:45:19.141827   48795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:45:19.154731   48795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I0930 20:45:19.155185   48795 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:45:19.155818   48795 main.go:141] libmachine: Using API Version  1
	I0930 20:45:19.155848   48795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:45:19.156194   48795 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:45:19.156432   48795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39133
	I0930 20:45:19.156854   48795 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:45:19.156857   48795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:45:19.156961   48795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:45:19.157300   48795 main.go:141] libmachine: Using API Version  1
	I0930 20:45:19.157322   48795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:45:19.157624   48795 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:45:19.157767   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetState
	I0930 20:45:19.159169   48795 main.go:141] libmachine: (test-preload-409125) Calling .DriverName
	I0930 20:45:19.161355   48795 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 20:45:19.162844   48795 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 20:45:19.162860   48795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 20:45:19.162873   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHHostname
	I0930 20:45:19.166202   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:45:19.166651   48795 main.go:141] libmachine: (test-preload-409125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:ed:69", ip: ""} in network mk-test-preload-409125: {Iface:virbr1 ExpiryTime:2024-09-30 21:44:46 +0000 UTC Type:0 Mac:52:54:00:3f:ed:69 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-409125 Clientid:01:52:54:00:3f:ed:69}
	I0930 20:45:19.166679   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined IP address 192.168.39.127 and MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:45:19.166843   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHPort
	I0930 20:45:19.167115   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHKeyPath
	I0930 20:45:19.167254   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHUsername
	I0930 20:45:19.167402   48795 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/test-preload-409125/id_rsa Username:docker}
	I0930 20:45:19.198885   48795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41149
	I0930 20:45:19.199373   48795 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:45:19.199907   48795 main.go:141] libmachine: Using API Version  1
	I0930 20:45:19.199924   48795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:45:19.200228   48795 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:45:19.200408   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetState
	I0930 20:45:19.201927   48795 main.go:141] libmachine: (test-preload-409125) Calling .DriverName
	I0930 20:45:19.202127   48795 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 20:45:19.202143   48795 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 20:45:19.202163   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHHostname
	I0930 20:45:19.205256   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:45:19.205764   48795 main.go:141] libmachine: (test-preload-409125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:ed:69", ip: ""} in network mk-test-preload-409125: {Iface:virbr1 ExpiryTime:2024-09-30 21:44:46 +0000 UTC Type:0 Mac:52:54:00:3f:ed:69 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-409125 Clientid:01:52:54:00:3f:ed:69}
	I0930 20:45:19.205795   48795 main.go:141] libmachine: (test-preload-409125) DBG | domain test-preload-409125 has defined IP address 192.168.39.127 and MAC address 52:54:00:3f:ed:69 in network mk-test-preload-409125
	I0930 20:45:19.205952   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHPort
	I0930 20:45:19.206150   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHKeyPath
	I0930 20:45:19.206333   48795 main.go:141] libmachine: (test-preload-409125) Calling .GetSSHUsername
	I0930 20:45:19.206466   48795 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/test-preload-409125/id_rsa Username:docker}
	I0930 20:45:19.306120   48795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:45:19.324391   48795 node_ready.go:35] waiting up to 6m0s for node "test-preload-409125" to be "Ready" ...
	I0930 20:45:19.380483   48795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 20:45:19.418245   48795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
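The two Run lines above apply the addon manifests that were just scp'd to the node, executed over the SSH connection opened a few lines earlier (192.168.39.127:22, key under .minikube/machines). A rough sketch of that remote-exec step using golang.org/x/crypto/ssh, under the assumption that key-based auth is enough and that skipping host-key checks is acceptable for a throwaway test VM (this is not minikube's ssh_runner, just an illustration):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote runs a single command on the VM over SSH, roughly what the
// ssh_runner Run calls above do for the kubectl apply of the addon manifests.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only; fine for an ephemeral test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()

	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.39.127:22", "docker",
		"/home/jenkins/minikube-integration/19736-7672/.minikube/machines/test-preload-409125/id_rsa",
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
	fmt.Println(out, err)
}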
	I0930 20:45:20.308918   48795 main.go:141] libmachine: Making call to close driver server
	I0930 20:45:20.308944   48795 main.go:141] libmachine: (test-preload-409125) Calling .Close
	I0930 20:45:20.308965   48795 main.go:141] libmachine: Making call to close driver server
	I0930 20:45:20.308982   48795 main.go:141] libmachine: (test-preload-409125) Calling .Close
	I0930 20:45:20.309244   48795 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:45:20.309265   48795 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:45:20.309269   48795 main.go:141] libmachine: (test-preload-409125) DBG | Closing plugin on server side
	I0930 20:45:20.309274   48795 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:45:20.309283   48795 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:45:20.309285   48795 main.go:141] libmachine: Making call to close driver server
	I0930 20:45:20.309294   48795 main.go:141] libmachine: Making call to close driver server
	I0930 20:45:20.309325   48795 main.go:141] libmachine: (test-preload-409125) Calling .Close
	I0930 20:45:20.309347   48795 main.go:141] libmachine: (test-preload-409125) Calling .Close
	I0930 20:45:20.309574   48795 main.go:141] libmachine: (test-preload-409125) DBG | Closing plugin on server side
	I0930 20:45:20.309615   48795 main.go:141] libmachine: (test-preload-409125) DBG | Closing plugin on server side
	I0930 20:45:20.309629   48795 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:45:20.309654   48795 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:45:20.309635   48795 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:45:20.309691   48795 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:45:20.315858   48795 main.go:141] libmachine: Making call to close driver server
	I0930 20:45:20.315878   48795 main.go:141] libmachine: (test-preload-409125) Calling .Close
	I0930 20:45:20.316191   48795 main.go:141] libmachine: Successfully made call to close driver server
	I0930 20:45:20.316206   48795 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 20:45:20.316228   48795 main.go:141] libmachine: (test-preload-409125) DBG | Closing plugin on server side
	I0930 20:45:20.318252   48795 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0930 20:45:20.319441   48795 addons.go:510] duration metric: took 1.200451113s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0930 20:45:21.328018   48795 node_ready.go:53] node "test-preload-409125" has status "Ready":"False"
	I0930 20:45:23.328585   48795 node_ready.go:53] node "test-preload-409125" has status "Ready":"False"
	I0930 20:45:25.828978   48795 node_ready.go:53] node "test-preload-409125" has status "Ready":"False"
	I0930 20:45:26.828090   48795 node_ready.go:49] node "test-preload-409125" has status "Ready":"True"
	I0930 20:45:26.828113   48795 node_ready.go:38] duration metric: took 7.503685661s for node "test-preload-409125" to be "Ready" ...
	I0930 20:45:26.828121   48795 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 20:45:26.834188   48795 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-48ghg" in "kube-system" namespace to be "Ready" ...
	I0930 20:45:26.839444   48795 pod_ready.go:93] pod "coredns-6d4b75cb6d-48ghg" in "kube-system" namespace has status "Ready":"True"
	I0930 20:45:26.839467   48795 pod_ready.go:82] duration metric: took 5.251719ms for pod "coredns-6d4b75cb6d-48ghg" in "kube-system" namespace to be "Ready" ...
	I0930 20:45:26.839476   48795 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-409125" in "kube-system" namespace to be "Ready" ...
	I0930 20:45:28.346898   48795 pod_ready.go:93] pod "etcd-test-preload-409125" in "kube-system" namespace has status "Ready":"True"
	I0930 20:45:28.346924   48795 pod_ready.go:82] duration metric: took 1.507440729s for pod "etcd-test-preload-409125" in "kube-system" namespace to be "Ready" ...
	I0930 20:45:28.346937   48795 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-409125" in "kube-system" namespace to be "Ready" ...
	I0930 20:45:30.354250   48795 pod_ready.go:103] pod "kube-apiserver-test-preload-409125" in "kube-system" namespace has status "Ready":"False"
	I0930 20:45:31.854701   48795 pod_ready.go:93] pod "kube-apiserver-test-preload-409125" in "kube-system" namespace has status "Ready":"True"
	I0930 20:45:31.854725   48795 pod_ready.go:82] duration metric: took 3.507781044s for pod "kube-apiserver-test-preload-409125" in "kube-system" namespace to be "Ready" ...
	I0930 20:45:31.854734   48795 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-409125" in "kube-system" namespace to be "Ready" ...
	I0930 20:45:31.859895   48795 pod_ready.go:93] pod "kube-controller-manager-test-preload-409125" in "kube-system" namespace has status "Ready":"True"
	I0930 20:45:31.859921   48795 pod_ready.go:82] duration metric: took 5.179634ms for pod "kube-controller-manager-test-preload-409125" in "kube-system" namespace to be "Ready" ...
	I0930 20:45:31.859933   48795 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2j7wm" in "kube-system" namespace to be "Ready" ...
	I0930 20:45:31.865706   48795 pod_ready.go:93] pod "kube-proxy-2j7wm" in "kube-system" namespace has status "Ready":"True"
	I0930 20:45:31.865726   48795 pod_ready.go:82] duration metric: took 5.786237ms for pod "kube-proxy-2j7wm" in "kube-system" namespace to be "Ready" ...
	I0930 20:45:31.865734   48795 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-409125" in "kube-system" namespace to be "Ready" ...
	I0930 20:45:31.871456   48795 pod_ready.go:93] pod "kube-scheduler-test-preload-409125" in "kube-system" namespace has status "Ready":"True"
	I0930 20:45:31.871475   48795 pod_ready.go:82] duration metric: took 5.73586ms for pod "kube-scheduler-test-preload-409125" in "kube-system" namespace to be "Ready" ...
	I0930 20:45:31.871483   48795 pod_ready.go:39] duration metric: took 5.043353911s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
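The pod_ready waits above poll each system pod until its PodReady condition reports True (or, while the node itself is NotReady, record the "skipping!" seen earlier in the log). A minimal client-go sketch of that condition check, assuming a kubeconfig at the path shown above; the function name is illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the pod's PodReady condition is True or the timeout expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // pod not found yet or transient API error: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19736-7672/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = waitPodReady(context.Background(), cs, "kube-system", "kube-scheduler-test-preload-409125", 6*time.Minute)
	fmt.Println("ready:", err == nil, err)
}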
	I0930 20:45:31.871496   48795 api_server.go:52] waiting for apiserver process to appear ...
	I0930 20:45:31.871558   48795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 20:45:31.888772   48795 api_server.go:72] duration metric: took 12.769854722s to wait for apiserver process to appear ...
	I0930 20:45:31.888805   48795 api_server.go:88] waiting for apiserver healthz status ...
	I0930 20:45:31.888826   48795 api_server.go:253] Checking apiserver healthz at https://192.168.39.127:8443/healthz ...
	I0930 20:45:31.895202   48795 api_server.go:279] https://192.168.39.127:8443/healthz returned 200:
	ok
	I0930 20:45:31.896774   48795 api_server.go:141] control plane version: v1.24.4
	I0930 20:45:31.896805   48795 api_server.go:131] duration metric: took 7.993301ms to wait for apiserver health ...
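The healthz probe above is a plain HTTPS GET against the API server, authenticated with the profile's client certificate and validated against the cluster CA (the same files listed in the rest.Config dump). A short standard-library sketch, assuming those paths:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	base := "/home/jenkins/minikube-integration/19736-7672/.minikube"
	cert, err := tls.LoadX509KeyPair(
		base+"/profiles/test-preload-409125/client.crt",
		base+"/profiles/test-preload-409125/client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile(base + "/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	}}

	resp, err := client.Get("https://192.168.39.127:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect 200 and "ok", as in the log
}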
	I0930 20:45:31.896881   48795 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 20:45:31.904793   48795 system_pods.go:59] 7 kube-system pods found
	I0930 20:45:31.904832   48795 system_pods.go:61] "coredns-6d4b75cb6d-48ghg" [8510b89b-b628-42c0-873b-02b30380b8a9] Running
	I0930 20:45:31.904839   48795 system_pods.go:61] "etcd-test-preload-409125" [9b6e7ee6-bb3c-42d2-ad5f-8cbb784d6a38] Running
	I0930 20:45:31.904845   48795 system_pods.go:61] "kube-apiserver-test-preload-409125" [3b9e5866-b41c-40ba-9e74-4cfbd03932ec] Running
	I0930 20:45:31.904860   48795 system_pods.go:61] "kube-controller-manager-test-preload-409125" [7bd15f25-7020-4713-b558-0d2f3a717bff] Running
	I0930 20:45:31.904865   48795 system_pods.go:61] "kube-proxy-2j7wm" [8fbbe95d-5df0-45af-99b7-cecdba184b51] Running
	I0930 20:45:31.904870   48795 system_pods.go:61] "kube-scheduler-test-preload-409125" [301463f3-dbca-413b-b7a0-4721b31abb10] Running
	I0930 20:45:31.904874   48795 system_pods.go:61] "storage-provisioner" [ced6326a-cf84-46de-bd8d-1f856a2d4863] Running
	I0930 20:45:31.904882   48795 system_pods.go:74] duration metric: took 7.985205ms to wait for pod list to return data ...
	I0930 20:45:31.904891   48795 default_sa.go:34] waiting for default service account to be created ...
	I0930 20:45:32.028277   48795 default_sa.go:45] found service account: "default"
	I0930 20:45:32.028312   48795 default_sa.go:55] duration metric: took 123.412508ms for default service account to be created ...
	I0930 20:45:32.028323   48795 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 20:45:32.231269   48795 system_pods.go:86] 7 kube-system pods found
	I0930 20:45:32.231303   48795 system_pods.go:89] "coredns-6d4b75cb6d-48ghg" [8510b89b-b628-42c0-873b-02b30380b8a9] Running
	I0930 20:45:32.231308   48795 system_pods.go:89] "etcd-test-preload-409125" [9b6e7ee6-bb3c-42d2-ad5f-8cbb784d6a38] Running
	I0930 20:45:32.231313   48795 system_pods.go:89] "kube-apiserver-test-preload-409125" [3b9e5866-b41c-40ba-9e74-4cfbd03932ec] Running
	I0930 20:45:32.231316   48795 system_pods.go:89] "kube-controller-manager-test-preload-409125" [7bd15f25-7020-4713-b558-0d2f3a717bff] Running
	I0930 20:45:32.231319   48795 system_pods.go:89] "kube-proxy-2j7wm" [8fbbe95d-5df0-45af-99b7-cecdba184b51] Running
	I0930 20:45:32.231322   48795 system_pods.go:89] "kube-scheduler-test-preload-409125" [301463f3-dbca-413b-b7a0-4721b31abb10] Running
	I0930 20:45:32.231325   48795 system_pods.go:89] "storage-provisioner" [ced6326a-cf84-46de-bd8d-1f856a2d4863] Running
	I0930 20:45:32.231331   48795 system_pods.go:126] duration metric: took 203.002673ms to wait for k8s-apps to be running ...
	I0930 20:45:32.231338   48795 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 20:45:32.231388   48795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 20:45:32.248658   48795 system_svc.go:56] duration metric: took 17.308444ms WaitForService to wait for kubelet
	I0930 20:45:32.248704   48795 kubeadm.go:582] duration metric: took 13.129782951s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 20:45:32.248724   48795 node_conditions.go:102] verifying NodePressure condition ...
	I0930 20:45:32.428555   48795 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:45:32.428586   48795 node_conditions.go:123] node cpu capacity is 2
	I0930 20:45:32.428596   48795 node_conditions.go:105] duration metric: took 179.867512ms to run NodePressure ...
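The NodePressure verification above reads node capacity (ephemeral storage, CPU) and the pressure conditions from the node status. A compact client-go sketch of reading the same fields, reusing the kubeconfig path from earlier (illustrative, not the test's own node_conditions helper):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19736-7672/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure / DiskPressure / PIDPressure should all be False on a healthy node.
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}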
	I0930 20:45:32.428608   48795 start.go:241] waiting for startup goroutines ...
	I0930 20:45:32.428615   48795 start.go:246] waiting for cluster config update ...
	I0930 20:45:32.428629   48795 start.go:255] writing updated cluster config ...
	I0930 20:45:32.428906   48795 ssh_runner.go:195] Run: rm -f paused
	I0930 20:45:32.476966   48795 start.go:600] kubectl: 1.31.1, cluster: 1.24.4 (minor skew: 7)
	I0930 20:45:32.478979   48795 out.go:201] 
	W0930 20:45:32.480418   48795 out.go:270] ! /usr/local/bin/kubectl is version 1.31.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0930 20:45:32.481766   48795 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0930 20:45:32.483161   48795 out.go:177] * Done! kubectl is now configured to use "test-preload-409125" cluster and "default" namespace by default
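The "==> CRI-O <==" block below is CRI-O's journal at debug level; the Version, ImageFsInfo, and ListContainers entries are the CRI RuntimeService/ImageService calls that the kubelet (and crictl) issue continuously. As a hedged sketch, the same Version and ListContainers queries can be made against the CRI-O socket with the cri-api gRPC client; the socket path and dial options here are assumptions, not taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default CRI-O socket; unix:// targets are handled by gRPC's unix resolver.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println(ver.RuntimeName, ver.RuntimeVersion) // e.g. cri-o 1.29.1, as in the VersionResponse below

	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Println(c.Metadata.Name, c.State) // coredns, kube-proxy, etcd, ... CONTAINER_RUNNING
	}
}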
	
	
	==> CRI-O <==
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.373427614Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727729133373404499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7c7eb22-8a44-4890-9ee9-355e24586701 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.374131906Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=118ac971-54e1-459c-b28e-82c4480a2429 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.374183469Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=118ac971-54e1-459c-b28e-82c4480a2429 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.374359853Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8b0e2bec186a52edd3254ca1059d4b2c392d340e40c430473539ee10f550c66,PodSandboxId:be85aef766baf6607bf46d2da986866ddeacebc7aeda2d348d569223de06397b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727729124430829190,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-48ghg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8510b89b-b628-42c0-873b-02b30380b8a9,},Annotations:map[string]string{io.kubernetes.container.hash: ff65e054,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb2990fdcb77a2447549e74e378f18b10a2a39cabc5f172b347115ebe23ca659,PodSandboxId:a926497de4be035028d14586709d948c5494f483f0ced1ec4659092cb1319907,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727729117445979970,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2j7wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8fbbe95d-5df0-45af-99b7-cecdba184b51,},Annotations:map[string]string{io.kubernetes.container.hash: 7c5100f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd297909dc982acaa4354b441d1c39c492980e0b8efea892d604446c0e63fe29,PodSandboxId:cfdf78be1c7d9fa311e05aea0f505035a9ca290061f9a70f689db69e2e1565be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727729117186440519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce
d6326a-cf84-46de-bd8d-1f856a2d4863,},Annotations:map[string]string{io.kubernetes.container.hash: 70f4351d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c74d26144849a5157e62423dcf8798032a32829116cfecb37a1df26b48785c8,PodSandboxId:bb8b1c88dec15f293e69f1f0d9e15b043a846a01e99da39d5b47a28a8ab7441d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727729111884827799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-409125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76cf858b3
66423ac33c576f8040754d7,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c209d5e1f92bc3c4d85c71f203d7952310297527bb8a83f21a08e1b285d6df81,PodSandboxId:b29e0e213ea67ee032001f520938a7c542b0c080c7f6e484742d78c19b7b646e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727729111893236047,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-409125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c15e60516c02425847374b90f92aa88,},Annotations:map
[string]string{io.kubernetes.container.hash: 63e664d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4c30ba1f05cef40e915fad0686851775f4c97de38431fdb82320a6f1d79d845,PodSandboxId:64f803f005b18d3d91ce10a602003529e5c64d2023bf72ce0ebc15b16163ef03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727729111815066951,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-409125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0d3e1846f751a4e6483bb82ec65855,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 406692b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66eabaf959798ca40a7cbd628279c0f15fb250c6415790f5eaecf91a50963439,PodSandboxId:ace90cd934bdb0c494bf71e341cf35e87cf4c20e9d115c72326171099f65997d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727729111831484220,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-409125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ad85da3d17cb5cb9621e8fb3506a001,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=118ac971-54e1-459c-b28e-82c4480a2429 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.412011743Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=38e1b253-8f97-4b0a-92c5-abc0068d5abc name=/runtime.v1.RuntimeService/Version
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.412089006Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=38e1b253-8f97-4b0a-92c5-abc0068d5abc name=/runtime.v1.RuntimeService/Version
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.413407959Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=65c48c9c-9307-4774-ad95-c874ec418622 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.414601371Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727729133414459719,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=65c48c9c-9307-4774-ad95-c874ec418622 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.417694220Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=12ff5ad1-7aa3-45f1-bb3b-0c1b2f3a7860 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.417768179Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=12ff5ad1-7aa3-45f1-bb3b-0c1b2f3a7860 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.417997266Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8b0e2bec186a52edd3254ca1059d4b2c392d340e40c430473539ee10f550c66,PodSandboxId:be85aef766baf6607bf46d2da986866ddeacebc7aeda2d348d569223de06397b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727729124430829190,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-48ghg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8510b89b-b628-42c0-873b-02b30380b8a9,},Annotations:map[string]string{io.kubernetes.container.hash: ff65e054,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb2990fdcb77a2447549e74e378f18b10a2a39cabc5f172b347115ebe23ca659,PodSandboxId:a926497de4be035028d14586709d948c5494f483f0ced1ec4659092cb1319907,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727729117445979970,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2j7wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8fbbe95d-5df0-45af-99b7-cecdba184b51,},Annotations:map[string]string{io.kubernetes.container.hash: 7c5100f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd297909dc982acaa4354b441d1c39c492980e0b8efea892d604446c0e63fe29,PodSandboxId:cfdf78be1c7d9fa311e05aea0f505035a9ca290061f9a70f689db69e2e1565be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727729117186440519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce
d6326a-cf84-46de-bd8d-1f856a2d4863,},Annotations:map[string]string{io.kubernetes.container.hash: 70f4351d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c74d26144849a5157e62423dcf8798032a32829116cfecb37a1df26b48785c8,PodSandboxId:bb8b1c88dec15f293e69f1f0d9e15b043a846a01e99da39d5b47a28a8ab7441d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727729111884827799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-409125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76cf858b3
66423ac33c576f8040754d7,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c209d5e1f92bc3c4d85c71f203d7952310297527bb8a83f21a08e1b285d6df81,PodSandboxId:b29e0e213ea67ee032001f520938a7c542b0c080c7f6e484742d78c19b7b646e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727729111893236047,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-409125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c15e60516c02425847374b90f92aa88,},Annotations:map
[string]string{io.kubernetes.container.hash: 63e664d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4c30ba1f05cef40e915fad0686851775f4c97de38431fdb82320a6f1d79d845,PodSandboxId:64f803f005b18d3d91ce10a602003529e5c64d2023bf72ce0ebc15b16163ef03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727729111815066951,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-409125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0d3e1846f751a4e6483bb82ec65855,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 406692b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66eabaf959798ca40a7cbd628279c0f15fb250c6415790f5eaecf91a50963439,PodSandboxId:ace90cd934bdb0c494bf71e341cf35e87cf4c20e9d115c72326171099f65997d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727729111831484220,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-409125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ad85da3d17cb5cb9621e8fb3506a001,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=12ff5ad1-7aa3-45f1-bb3b-0c1b2f3a7860 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.454569346Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=785323b1-9b61-44d5-b0da-bbac4e162975 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.454756151Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=785323b1-9b61-44d5-b0da-bbac4e162975 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.456463444Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=73397d74-8bac-4134-b05c-6674026a948f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.457300353Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727729133457269539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=73397d74-8bac-4134-b05c-6674026a948f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.458137543Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=72bc4bff-7f87-47a5-ac11-b1f9da23d84f name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.458203090Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=72bc4bff-7f87-47a5-ac11-b1f9da23d84f name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.458365533Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8b0e2bec186a52edd3254ca1059d4b2c392d340e40c430473539ee10f550c66,PodSandboxId:be85aef766baf6607bf46d2da986866ddeacebc7aeda2d348d569223de06397b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727729124430829190,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-48ghg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8510b89b-b628-42c0-873b-02b30380b8a9,},Annotations:map[string]string{io.kubernetes.container.hash: ff65e054,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb2990fdcb77a2447549e74e378f18b10a2a39cabc5f172b347115ebe23ca659,PodSandboxId:a926497de4be035028d14586709d948c5494f483f0ced1ec4659092cb1319907,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727729117445979970,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2j7wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8fbbe95d-5df0-45af-99b7-cecdba184b51,},Annotations:map[string]string{io.kubernetes.container.hash: 7c5100f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd297909dc982acaa4354b441d1c39c492980e0b8efea892d604446c0e63fe29,PodSandboxId:cfdf78be1c7d9fa311e05aea0f505035a9ca290061f9a70f689db69e2e1565be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727729117186440519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce
d6326a-cf84-46de-bd8d-1f856a2d4863,},Annotations:map[string]string{io.kubernetes.container.hash: 70f4351d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c74d26144849a5157e62423dcf8798032a32829116cfecb37a1df26b48785c8,PodSandboxId:bb8b1c88dec15f293e69f1f0d9e15b043a846a01e99da39d5b47a28a8ab7441d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727729111884827799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-409125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76cf858b3
66423ac33c576f8040754d7,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c209d5e1f92bc3c4d85c71f203d7952310297527bb8a83f21a08e1b285d6df81,PodSandboxId:b29e0e213ea67ee032001f520938a7c542b0c080c7f6e484742d78c19b7b646e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727729111893236047,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-409125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c15e60516c02425847374b90f92aa88,},Annotations:map
[string]string{io.kubernetes.container.hash: 63e664d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4c30ba1f05cef40e915fad0686851775f4c97de38431fdb82320a6f1d79d845,PodSandboxId:64f803f005b18d3d91ce10a602003529e5c64d2023bf72ce0ebc15b16163ef03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727729111815066951,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-409125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0d3e1846f751a4e6483bb82ec65855,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 406692b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66eabaf959798ca40a7cbd628279c0f15fb250c6415790f5eaecf91a50963439,PodSandboxId:ace90cd934bdb0c494bf71e341cf35e87cf4c20e9d115c72326171099f65997d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727729111831484220,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-409125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ad85da3d17cb5cb9621e8fb3506a001,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=72bc4bff-7f87-47a5-ac11-b1f9da23d84f name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.493133206Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=00a9bca8-a722-4bc8-9bbf-83c39f2c72d1 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.493214455Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=00a9bca8-a722-4bc8-9bbf-83c39f2c72d1 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.494250862Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f11b9ad6-a83a-4617-9c35-d529d138129e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.494732905Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727729133494706806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f11b9ad6-a83a-4617-9c35-d529d138129e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.495374904Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b1be66fa-1289-46af-870b-c490a020503b name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.495426196Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b1be66fa-1289-46af-870b-c490a020503b name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:45:33 test-preload-409125 crio[671]: time="2024-09-30 20:45:33.495570740Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8b0e2bec186a52edd3254ca1059d4b2c392d340e40c430473539ee10f550c66,PodSandboxId:be85aef766baf6607bf46d2da986866ddeacebc7aeda2d348d569223de06397b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727729124430829190,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-48ghg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8510b89b-b628-42c0-873b-02b30380b8a9,},Annotations:map[string]string{io.kubernetes.container.hash: ff65e054,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb2990fdcb77a2447549e74e378f18b10a2a39cabc5f172b347115ebe23ca659,PodSandboxId:a926497de4be035028d14586709d948c5494f483f0ced1ec4659092cb1319907,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727729117445979970,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2j7wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8fbbe95d-5df0-45af-99b7-cecdba184b51,},Annotations:map[string]string{io.kubernetes.container.hash: 7c5100f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd297909dc982acaa4354b441d1c39c492980e0b8efea892d604446c0e63fe29,PodSandboxId:cfdf78be1c7d9fa311e05aea0f505035a9ca290061f9a70f689db69e2e1565be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727729117186440519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce
d6326a-cf84-46de-bd8d-1f856a2d4863,},Annotations:map[string]string{io.kubernetes.container.hash: 70f4351d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c74d26144849a5157e62423dcf8798032a32829116cfecb37a1df26b48785c8,PodSandboxId:bb8b1c88dec15f293e69f1f0d9e15b043a846a01e99da39d5b47a28a8ab7441d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727729111884827799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-409125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76cf858b3
66423ac33c576f8040754d7,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c209d5e1f92bc3c4d85c71f203d7952310297527bb8a83f21a08e1b285d6df81,PodSandboxId:b29e0e213ea67ee032001f520938a7c542b0c080c7f6e484742d78c19b7b646e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727729111893236047,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-409125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c15e60516c02425847374b90f92aa88,},Annotations:map
[string]string{io.kubernetes.container.hash: 63e664d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4c30ba1f05cef40e915fad0686851775f4c97de38431fdb82320a6f1d79d845,PodSandboxId:64f803f005b18d3d91ce10a602003529e5c64d2023bf72ce0ebc15b16163ef03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727729111815066951,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-409125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0d3e1846f751a4e6483bb82ec65855,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 406692b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66eabaf959798ca40a7cbd628279c0f15fb250c6415790f5eaecf91a50963439,PodSandboxId:ace90cd934bdb0c494bf71e341cf35e87cf4c20e9d115c72326171099f65997d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727729111831484220,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-409125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ad85da3d17cb5cb9621e8fb3506a001,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b1be66fa-1289-46af-870b-c490a020503b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d8b0e2bec186a       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   9 seconds ago       Running             coredns                   1                   be85aef766baf       coredns-6d4b75cb6d-48ghg
	bb2990fdcb77a       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   16 seconds ago      Running             kube-proxy                1                   a926497de4be0       kube-proxy-2j7wm
	fd297909dc982       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   cfdf78be1c7d9       storage-provisioner
	c209d5e1f92bc       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   b29e0e213ea67       etcd-test-preload-409125
	3c74d26144849       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   bb8b1c88dec15       kube-scheduler-test-preload-409125
	66eabaf959798       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   ace90cd934bdb       kube-controller-manager-test-preload-409125
	a4c30ba1f05ce       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   64f803f005b18       kube-apiserver-test-preload-409125
	
	
	==> coredns [d8b0e2bec186a52edd3254ca1059d4b2c392d340e40c430473539ee10f550c66] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:55842 - 26173 "HINFO IN 8753264683285649102.1137791551592939658. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013161136s
	
	
	==> describe nodes <==
	Name:               test-preload-409125
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-409125
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=test-preload-409125
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T20_43_55_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:43:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-409125
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:45:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:45:26 +0000   Mon, 30 Sep 2024 20:43:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:45:26 +0000   Mon, 30 Sep 2024 20:43:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:45:26 +0000   Mon, 30 Sep 2024 20:43:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:45:26 +0000   Mon, 30 Sep 2024 20:45:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    test-preload-409125
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 929300706d11467eadb39e61f1cc1064
	  System UUID:                92930070-6d11-467e-adb3-9e61f1cc1064
	  Boot ID:                    3334f223-ed39-4c84-a24c-114b4a762f68
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-48ghg                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     85s
	  kube-system                 etcd-test-preload-409125                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         98s
	  kube-system                 kube-apiserver-test-preload-409125             250m (12%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-test-preload-409125    200m (10%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-2j7wm                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-test-preload-409125             100m (5%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16s                kube-proxy       
	  Normal  Starting                 84s                kube-proxy       
	  Normal  Starting                 98s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  98s                kubelet          Node test-preload-409125 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s                kubelet          Node test-preload-409125 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s                kubelet          Node test-preload-409125 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  98s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                88s                kubelet          Node test-preload-409125 status is now: NodeReady
	  Normal  RegisteredNode           85s                node-controller  Node test-preload-409125 event: Registered Node test-preload-409125 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-409125 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-409125 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-409125 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                 node-controller  Node test-preload-409125 event: Registered Node test-preload-409125 in Controller
	
	
	==> dmesg <==
	[Sep30 20:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051671] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037566] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.785443] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.940626] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.568279] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.747344] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.062491] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063166] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.206318] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.117440] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.287571] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[Sep30 20:45] systemd-fstab-generator[993]: Ignoring "noauto" option for root device
	[  +0.060564] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.648811] systemd-fstab-generator[1122]: Ignoring "noauto" option for root device
	[  +5.974269] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.299348] systemd-fstab-generator[1771]: Ignoring "noauto" option for root device
	[  +5.054937] kauditd_printk_skb: 55 callbacks suppressed
	
	
	==> etcd [c209d5e1f92bc3c4d85c71f203d7952310297527bb8a83f21a08e1b285d6df81] <==
	{"level":"info","ts":"2024-09-30T20:45:12.192Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"9dc5e8b969e9632c","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-30T20:45:12.195Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-30T20:45:12.195Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-30T20:45:12.197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c switched to configuration voters=(11368748717410181932)"}
	{"level":"info","ts":"2024-09-30T20:45:12.201Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"367c7cb0db09c3ab","local-member-id":"9dc5e8b969e9632c","added-peer-id":"9dc5e8b969e9632c","added-peer-peer-urls":["https://192.168.39.127:2380"]}
	{"level":"info","ts":"2024-09-30T20:45:12.201Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"367c7cb0db09c3ab","local-member-id":"9dc5e8b969e9632c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T20:45:12.202Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T20:45:12.202Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9dc5e8b969e9632c","initial-advertise-peer-urls":["https://192.168.39.127:2380"],"listen-peer-urls":["https://192.168.39.127:2380"],"advertise-client-urls":["https://192.168.39.127:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.127:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-30T20:45:12.202Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-30T20:45:12.197Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.127:2380"}
	{"level":"info","ts":"2024-09-30T20:45:12.206Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.127:2380"}
	{"level":"info","ts":"2024-09-30T20:45:13.670Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-30T20:45:13.670Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-30T20:45:13.670Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c received MsgPreVoteResp from 9dc5e8b969e9632c at term 2"}
	{"level":"info","ts":"2024-09-30T20:45:13.670Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c became candidate at term 3"}
	{"level":"info","ts":"2024-09-30T20:45:13.670Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c received MsgVoteResp from 9dc5e8b969e9632c at term 3"}
	{"level":"info","ts":"2024-09-30T20:45:13.670Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c became leader at term 3"}
	{"level":"info","ts":"2024-09-30T20:45:13.670Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9dc5e8b969e9632c elected leader 9dc5e8b969e9632c at term 3"}
	{"level":"info","ts":"2024-09-30T20:45:13.670Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"9dc5e8b969e9632c","local-member-attributes":"{Name:test-preload-409125 ClientURLs:[https://192.168.39.127:2379]}","request-path":"/0/members/9dc5e8b969e9632c/attributes","cluster-id":"367c7cb0db09c3ab","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T20:45:13.670Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T20:45:13.672Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-30T20:45:13.672Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T20:45:13.673Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.127:2379"}
	{"level":"info","ts":"2024-09-30T20:45:13.679Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T20:45:13.679Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:45:33 up 0 min,  0 users,  load average: 1.07, 0.29, 0.10
	Linux test-preload-409125 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a4c30ba1f05cef40e915fad0686851775f4c97de38431fdb82320a6f1d79d845] <==
	I0930 20:45:16.017109       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0930 20:45:16.017257       1 apf_controller.go:317] Starting API Priority and Fairness config controller
	I0930 20:45:16.017785       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0930 20:45:16.017814       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0930 20:45:16.074580       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0930 20:45:16.074609       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0930 20:45:16.150786       1 shared_informer.go:262] Caches are synced for node_authorizer
	E0930 20:45:16.151190       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0930 20:45:16.155976       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0930 20:45:16.156410       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0930 20:45:16.175836       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0930 20:45:16.223734       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0930 20:45:16.223775       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0930 20:45:16.223806       1 cache.go:39] Caches are synced for autoregister controller
	I0930 20:45:16.235512       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0930 20:45:16.717022       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0930 20:45:17.028943       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0930 20:45:17.691070       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0930 20:45:17.824302       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0930 20:45:17.839341       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0930 20:45:17.880166       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0930 20:45:17.895598       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0930 20:45:17.902422       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0930 20:45:28.497439       1 controller.go:611] quota admission added evaluator for: endpoints
	I0930 20:45:28.511811       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [66eabaf959798ca40a7cbd628279c0f15fb250c6415790f5eaecf91a50963439] <==
	I0930 20:45:28.510081       1 shared_informer.go:262] Caches are synced for job
	I0930 20:45:28.521892       1 shared_informer.go:262] Caches are synced for node
	I0930 20:45:28.521939       1 range_allocator.go:173] Starting range CIDR allocator
	I0930 20:45:28.521944       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0930 20:45:28.521957       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0930 20:45:28.532929       1 shared_informer.go:262] Caches are synced for namespace
	I0930 20:45:28.534178       1 shared_informer.go:262] Caches are synced for attach detach
	I0930 20:45:28.534334       1 shared_informer.go:262] Caches are synced for TTL
	I0930 20:45:28.534411       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0930 20:45:28.536948       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0930 20:45:28.540519       1 shared_informer.go:262] Caches are synced for crt configmap
	I0930 20:45:28.543734       1 shared_informer.go:262] Caches are synced for service account
	I0930 20:45:28.604919       1 shared_informer.go:262] Caches are synced for taint
	I0930 20:45:28.605152       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0930 20:45:28.605226       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0930 20:45:28.605400       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-409125. Assuming now as a timestamp.
	I0930 20:45:28.605464       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0930 20:45:28.605631       1 event.go:294] "Event occurred" object="test-preload-409125" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-409125 event: Registered Node test-preload-409125 in Controller"
	I0930 20:45:28.666367       1 shared_informer.go:262] Caches are synced for resource quota
	I0930 20:45:28.677345       1 shared_informer.go:262] Caches are synced for disruption
	I0930 20:45:28.677422       1 disruption.go:371] Sending events to api server.
	I0930 20:45:28.723773       1 shared_informer.go:262] Caches are synced for resource quota
	I0930 20:45:29.170020       1 shared_informer.go:262] Caches are synced for garbage collector
	I0930 20:45:29.198875       1 shared_informer.go:262] Caches are synced for garbage collector
	I0930 20:45:29.198924       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [bb2990fdcb77a2447549e74e378f18b10a2a39cabc5f172b347115ebe23ca659] <==
	I0930 20:45:17.631035       1 node.go:163] Successfully retrieved node IP: 192.168.39.127
	I0930 20:45:17.631204       1 server_others.go:138] "Detected node IP" address="192.168.39.127"
	I0930 20:45:17.631274       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0930 20:45:17.672272       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0930 20:45:17.672288       1 server_others.go:206] "Using iptables Proxier"
	I0930 20:45:17.673020       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0930 20:45:17.673296       1 server.go:661] "Version info" version="v1.24.4"
	I0930 20:45:17.673305       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:45:17.675479       1 config.go:317] "Starting service config controller"
	I0930 20:45:17.675791       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0930 20:45:17.676169       1 config.go:226] "Starting endpoint slice config controller"
	I0930 20:45:17.676208       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0930 20:45:17.679822       1 config.go:444] "Starting node config controller"
	I0930 20:45:17.679832       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0930 20:45:17.776797       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0930 20:45:17.776810       1 shared_informer.go:262] Caches are synced for service config
	I0930 20:45:17.780306       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [3c74d26144849a5157e62423dcf8798032a32829116cfecb37a1df26b48785c8] <==
	I0930 20:45:12.787466       1 serving.go:348] Generated self-signed cert in-memory
	W0930 20:45:16.088041       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0930 20:45:16.088306       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0930 20:45:16.088346       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0930 20:45:16.088372       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0930 20:45:16.152432       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0930 20:45:16.152503       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:45:16.159139       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0930 20:45:16.159389       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0930 20:45:16.159435       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0930 20:45:16.159490       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0930 20:45:16.259511       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 20:45:16 test-preload-409125 kubelet[1129]: E0930 20:45:16.178429    1129 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Sep 30 20:45:16 test-preload-409125 kubelet[1129]: E0930 20:45:16.182707    1129 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Sep 30 20:45:16 test-preload-409125 kubelet[1129]: I0930 20:45:16.192563    1129 kubelet_node_status.go:108] "Node was previously registered" node="test-preload-409125"
	Sep 30 20:45:16 test-preload-409125 kubelet[1129]: I0930 20:45:16.192872    1129 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-409125"
	Sep 30 20:45:16 test-preload-409125 kubelet[1129]: I0930 20:45:16.196271    1129 setters.go:532] "Node became not ready" node="test-preload-409125" condition={Type:Ready Status:False LastHeartbeatTime:2024-09-30 20:45:16.196163711 +0000 UTC m=+5.211160165 LastTransitionTime:2024-09-30 20:45:16.196163711 +0000 UTC m=+5.211160165 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Sep 30 20:45:16 test-preload-409125 kubelet[1129]: I0930 20:45:16.283350    1129 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8fbbe95d-5df0-45af-99b7-cecdba184b51-kube-proxy\") pod \"kube-proxy-2j7wm\" (UID: \"8fbbe95d-5df0-45af-99b7-cecdba184b51\") " pod="kube-system/kube-proxy-2j7wm"
	Sep 30 20:45:16 test-preload-409125 kubelet[1129]: I0930 20:45:16.283526    1129 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8fbbe95d-5df0-45af-99b7-cecdba184b51-xtables-lock\") pod \"kube-proxy-2j7wm\" (UID: \"8fbbe95d-5df0-45af-99b7-cecdba184b51\") " pod="kube-system/kube-proxy-2j7wm"
	Sep 30 20:45:16 test-preload-409125 kubelet[1129]: I0930 20:45:16.283586    1129 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjjn7\" (UniqueName: \"kubernetes.io/projected/8510b89b-b628-42c0-873b-02b30380b8a9-kube-api-access-jjjn7\") pod \"coredns-6d4b75cb6d-48ghg\" (UID: \"8510b89b-b628-42c0-873b-02b30380b8a9\") " pod="kube-system/coredns-6d4b75cb6d-48ghg"
	Sep 30 20:45:16 test-preload-409125 kubelet[1129]: I0930 20:45:16.283623    1129 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzqz2\" (UniqueName: \"kubernetes.io/projected/8fbbe95d-5df0-45af-99b7-cecdba184b51-kube-api-access-xzqz2\") pod \"kube-proxy-2j7wm\" (UID: \"8fbbe95d-5df0-45af-99b7-cecdba184b51\") " pod="kube-system/kube-proxy-2j7wm"
	Sep 30 20:45:16 test-preload-409125 kubelet[1129]: I0930 20:45:16.283693    1129 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8fbbe95d-5df0-45af-99b7-cecdba184b51-lib-modules\") pod \"kube-proxy-2j7wm\" (UID: \"8fbbe95d-5df0-45af-99b7-cecdba184b51\") " pod="kube-system/kube-proxy-2j7wm"
	Sep 30 20:45:16 test-preload-409125 kubelet[1129]: I0930 20:45:16.283715    1129 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ced6326a-cf84-46de-bd8d-1f856a2d4863-tmp\") pod \"storage-provisioner\" (UID: \"ced6326a-cf84-46de-bd8d-1f856a2d4863\") " pod="kube-system/storage-provisioner"
	Sep 30 20:45:16 test-preload-409125 kubelet[1129]: I0930 20:45:16.283735    1129 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgdql\" (UniqueName: \"kubernetes.io/projected/ced6326a-cf84-46de-bd8d-1f856a2d4863-kube-api-access-rgdql\") pod \"storage-provisioner\" (UID: \"ced6326a-cf84-46de-bd8d-1f856a2d4863\") " pod="kube-system/storage-provisioner"
	Sep 30 20:45:16 test-preload-409125 kubelet[1129]: I0930 20:45:16.283795    1129 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8510b89b-b628-42c0-873b-02b30380b8a9-config-volume\") pod \"coredns-6d4b75cb6d-48ghg\" (UID: \"8510b89b-b628-42c0-873b-02b30380b8a9\") " pod="kube-system/coredns-6d4b75cb6d-48ghg"
	Sep 30 20:45:16 test-preload-409125 kubelet[1129]: I0930 20:45:16.283953    1129 reconciler.go:159] "Reconciler: start to sync state"
	Sep 30 20:45:16 test-preload-409125 kubelet[1129]: E0930 20:45:16.388906    1129 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 30 20:45:16 test-preload-409125 kubelet[1129]: E0930 20:45:16.389046    1129 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/8510b89b-b628-42c0-873b-02b30380b8a9-config-volume podName:8510b89b-b628-42c0-873b-02b30380b8a9 nodeName:}" failed. No retries permitted until 2024-09-30 20:45:16.889005023 +0000 UTC m=+5.904001489 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8510b89b-b628-42c0-873b-02b30380b8a9-config-volume") pod "coredns-6d4b75cb6d-48ghg" (UID: "8510b89b-b628-42c0-873b-02b30380b8a9") : object "kube-system"/"coredns" not registered
	Sep 30 20:45:16 test-preload-409125 kubelet[1129]: E0930 20:45:16.892204    1129 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 30 20:45:16 test-preload-409125 kubelet[1129]: E0930 20:45:16.892303    1129 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/8510b89b-b628-42c0-873b-02b30380b8a9-config-volume podName:8510b89b-b628-42c0-873b-02b30380b8a9 nodeName:}" failed. No retries permitted until 2024-09-30 20:45:17.892288316 +0000 UTC m=+6.907284770 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8510b89b-b628-42c0-873b-02b30380b8a9-config-volume") pod "coredns-6d4b75cb6d-48ghg" (UID: "8510b89b-b628-42c0-873b-02b30380b8a9") : object "kube-system"/"coredns" not registered
	Sep 30 20:45:17 test-preload-409125 kubelet[1129]: E0930 20:45:17.218858    1129 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-48ghg" podUID=8510b89b-b628-42c0-873b-02b30380b8a9
	Sep 30 20:45:17 test-preload-409125 kubelet[1129]: E0930 20:45:17.900916    1129 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 30 20:45:17 test-preload-409125 kubelet[1129]: E0930 20:45:17.900996    1129 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/8510b89b-b628-42c0-873b-02b30380b8a9-config-volume podName:8510b89b-b628-42c0-873b-02b30380b8a9 nodeName:}" failed. No retries permitted until 2024-09-30 20:45:19.900981767 +0000 UTC m=+8.915978224 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8510b89b-b628-42c0-873b-02b30380b8a9-config-volume") pod "coredns-6d4b75cb6d-48ghg" (UID: "8510b89b-b628-42c0-873b-02b30380b8a9") : object "kube-system"/"coredns" not registered
	Sep 30 20:45:19 test-preload-409125 kubelet[1129]: E0930 20:45:19.217868    1129 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-48ghg" podUID=8510b89b-b628-42c0-873b-02b30380b8a9
	Sep 30 20:45:19 test-preload-409125 kubelet[1129]: I0930 20:45:19.230410    1129 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=ee0c16a5-903b-46f3-a4eb-28d9f9a0c6c6 path="/var/lib/kubelet/pods/ee0c16a5-903b-46f3-a4eb-28d9f9a0c6c6/volumes"
	Sep 30 20:45:19 test-preload-409125 kubelet[1129]: E0930 20:45:19.915630    1129 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 30 20:45:19 test-preload-409125 kubelet[1129]: E0930 20:45:19.915751    1129 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/8510b89b-b628-42c0-873b-02b30380b8a9-config-volume podName:8510b89b-b628-42c0-873b-02b30380b8a9 nodeName:}" failed. No retries permitted until 2024-09-30 20:45:23.91573138 +0000 UTC m=+12.930727835 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8510b89b-b628-42c0-873b-02b30380b8a9-config-volume") pod "coredns-6d4b75cb6d-48ghg" (UID: "8510b89b-b628-42c0-873b-02b30380b8a9") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [fd297909dc982acaa4354b441d1c39c492980e0b8efea892d604446c0e63fe29] <==
	I0930 20:45:17.292619       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-409125 -n test-preload-409125
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-409125 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-409125" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-409125
--- FAIL: TestPreload (172.95s)

                                                
                                    
TestKubernetesUpgrade (438.01s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-810093 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-810093 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m36.951834033s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-810093] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-810093" primary control-plane node in "kubernetes-upgrade-810093" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 20:47:36.556719   52674 out.go:345] Setting OutFile to fd 1 ...
	I0930 20:47:36.556928   52674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:47:36.556943   52674 out.go:358] Setting ErrFile to fd 2...
	I0930 20:47:36.556949   52674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:47:36.557193   52674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 20:47:36.557879   52674 out.go:352] Setting JSON to false
	I0930 20:47:36.558880   52674 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5400,"bootTime":1727723857,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 20:47:36.558985   52674 start.go:139] virtualization: kvm guest
	I0930 20:47:36.561307   52674 out.go:177] * [kubernetes-upgrade-810093] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 20:47:36.562489   52674 notify.go:220] Checking for updates...
	I0930 20:47:36.562497   52674 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 20:47:36.563697   52674 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 20:47:36.564741   52674 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:47:36.565765   52674 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:47:36.566766   52674 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 20:47:36.567950   52674 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 20:47:36.569713   52674 config.go:182] Loaded profile config "NoKubernetes-592556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:47:36.569884   52674 config.go:182] Loaded profile config "force-systemd-env-618322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:47:36.570032   52674 config.go:182] Loaded profile config "offline-crio-579164": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:47:36.570129   52674 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 20:47:36.604620   52674 out.go:177] * Using the kvm2 driver based on user configuration
	I0930 20:47:36.606073   52674 start.go:297] selected driver: kvm2
	I0930 20:47:36.606097   52674 start.go:901] validating driver "kvm2" against <nil>
	I0930 20:47:36.606111   52674 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 20:47:36.606930   52674 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 20:47:36.607011   52674 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 20:47:36.622852   52674 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 20:47:36.622943   52674 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 20:47:36.623313   52674 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0930 20:47:36.623350   52674 cni.go:84] Creating CNI manager for ""
	I0930 20:47:36.623405   52674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 20:47:36.623417   52674 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 20:47:36.623491   52674 start.go:340] cluster config:
	{Name:kubernetes-upgrade-810093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-810093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:47:36.623704   52674 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 20:47:36.625630   52674 out.go:177] * Starting "kubernetes-upgrade-810093" primary control-plane node in "kubernetes-upgrade-810093" cluster
	I0930 20:47:36.626905   52674 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 20:47:36.626960   52674 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0930 20:47:36.626974   52674 cache.go:56] Caching tarball of preloaded images
	I0930 20:47:36.627121   52674 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 20:47:36.627139   52674 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0930 20:47:36.627293   52674 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/config.json ...
	I0930 20:47:36.627321   52674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/config.json: {Name:mk72aa65d167c63879ad3d19c5387f5f22fc88f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:47:36.627544   52674 start.go:360] acquireMachinesLock for kubernetes-upgrade-810093: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 20:48:41.868412   52674 start.go:364] duration metric: took 1m5.240819649s to acquireMachinesLock for "kubernetes-upgrade-810093"
	I0930 20:48:41.868487   52674 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-810093 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-810093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:48:41.868713   52674 start.go:125] createHost starting for "" (driver="kvm2")
	I0930 20:48:41.871279   52674 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 20:48:41.871753   52674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:48:41.871817   52674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:48:41.889608   52674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35649
	I0930 20:48:41.890123   52674 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:48:41.890800   52674 main.go:141] libmachine: Using API Version  1
	I0930 20:48:41.890829   52674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:48:41.891177   52674 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:48:41.891362   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetMachineName
	I0930 20:48:41.891474   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:48:41.891614   52674 start.go:159] libmachine.API.Create for "kubernetes-upgrade-810093" (driver="kvm2")
	I0930 20:48:41.891645   52674 client.go:168] LocalClient.Create starting
	I0930 20:48:41.891677   52674 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem
	I0930 20:48:41.891713   52674 main.go:141] libmachine: Decoding PEM data...
	I0930 20:48:41.891738   52674 main.go:141] libmachine: Parsing certificate...
	I0930 20:48:41.891808   52674 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem
	I0930 20:48:41.891840   52674 main.go:141] libmachine: Decoding PEM data...
	I0930 20:48:41.891859   52674 main.go:141] libmachine: Parsing certificate...
	I0930 20:48:41.891886   52674 main.go:141] libmachine: Running pre-create checks...
	I0930 20:48:41.891898   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .PreCreateCheck
	I0930 20:48:41.892333   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetConfigRaw
	I0930 20:48:41.892828   52674 main.go:141] libmachine: Creating machine...
	I0930 20:48:41.892845   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .Create
	I0930 20:48:41.892979   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Creating KVM machine...
	I0930 20:48:41.894429   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found existing default KVM network
	I0930 20:48:41.895648   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | I0930 20:48:41.895471   53628 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00018b3e0}
	I0930 20:48:41.895711   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | created network xml: 
	I0930 20:48:41.895734   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | <network>
	I0930 20:48:41.895745   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG |   <name>mk-kubernetes-upgrade-810093</name>
	I0930 20:48:41.895752   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG |   <dns enable='no'/>
	I0930 20:48:41.895760   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG |   
	I0930 20:48:41.895769   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0930 20:48:41.895777   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG |     <dhcp>
	I0930 20:48:41.895790   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0930 20:48:41.895799   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG |     </dhcp>
	I0930 20:48:41.895805   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG |   </ip>
	I0930 20:48:41.895810   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG |   
	I0930 20:48:41.895821   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | </network>
	I0930 20:48:41.895848   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | 
	I0930 20:48:41.902216   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | trying to create private KVM network mk-kubernetes-upgrade-810093 192.168.39.0/24...
	I0930 20:48:41.978007   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Setting up store path in /home/jenkins/minikube-integration/19736-7672/.minikube/machines/kubernetes-upgrade-810093 ...
	I0930 20:48:41.978042   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Building disk image from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 20:48:41.978052   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | private KVM network mk-kubernetes-upgrade-810093 192.168.39.0/24 created
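The private network is brought up from the XML dumped above. As a rough sketch only (not minikube's driver code), the same step could be expressed with the libvirt Go bindings; the libvirt.org/go/libvirt import path, the qemu:///system URI and the createNetwork name are assumptions.

// Sketch: define and start a libvirt network from an XML document like the
// <network> block logged above. Assumes a local system libvirt daemon.
package sketch

import (
	libvirt "libvirt.org/go/libvirt"
)

func createNetwork(networkXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system") // assumed URI
	if err != nil {
		return err
	}
	defer conn.Close()

	net, err := conn.NetworkDefineXML(networkXML) // persist the definition
	if err != nil {
		return err
	}
	defer net.Free()

	return net.Create() // start it, like `virsh net-start`
}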
	I0930 20:48:41.978069   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | I0930 20:48:41.977938   53628 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:48:41.978104   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Downloading /home/jenkins/minikube-integration/19736-7672/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 20:48:42.242044   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | I0930 20:48:42.241879   53628 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/kubernetes-upgrade-810093/id_rsa...
	I0930 20:48:42.472727   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | I0930 20:48:42.472540   53628 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/kubernetes-upgrade-810093/kubernetes-upgrade-810093.rawdisk...
	I0930 20:48:42.472767   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | Writing magic tar header
	I0930 20:48:42.472786   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | Writing SSH key tar header
	I0930 20:48:42.472801   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | I0930 20:48:42.472717   53628 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/kubernetes-upgrade-810093 ...
	I0930 20:48:42.472826   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/kubernetes-upgrade-810093
	I0930 20:48:42.472869   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/kubernetes-upgrade-810093 (perms=drwx------)
	I0930 20:48:42.472885   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines
	I0930 20:48:42.472899   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines (perms=drwxr-xr-x)
	I0930 20:48:42.472919   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube (perms=drwxr-xr-x)
	I0930 20:48:42.472932   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672 (perms=drwxrwxr-x)
	I0930 20:48:42.472947   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 20:48:42.472958   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 20:48:42.472994   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:48:42.473030   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672
	I0930 20:48:42.473043   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Creating domain...
	I0930 20:48:42.473061   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 20:48:42.473074   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | Checking permissions on dir: /home/jenkins
	I0930 20:48:42.473084   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | Checking permissions on dir: /home
	I0930 20:48:42.473090   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | Skipping /home - not owner
	I0930 20:48:42.474268   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) define libvirt domain using xml: 
	I0930 20:48:42.474292   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) <domain type='kvm'>
	I0930 20:48:42.474304   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)   <name>kubernetes-upgrade-810093</name>
	I0930 20:48:42.474313   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)   <memory unit='MiB'>2200</memory>
	I0930 20:48:42.474321   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)   <vcpu>2</vcpu>
	I0930 20:48:42.474327   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)   <features>
	I0930 20:48:42.474335   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)     <acpi/>
	I0930 20:48:42.474344   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)     <apic/>
	I0930 20:48:42.474362   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)     <pae/>
	I0930 20:48:42.474372   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)     
	I0930 20:48:42.474379   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)   </features>
	I0930 20:48:42.474391   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)   <cpu mode='host-passthrough'>
	I0930 20:48:42.474402   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)   
	I0930 20:48:42.474411   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)   </cpu>
	I0930 20:48:42.474420   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)   <os>
	I0930 20:48:42.474430   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)     <type>hvm</type>
	I0930 20:48:42.474438   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)     <boot dev='cdrom'/>
	I0930 20:48:42.474449   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)     <boot dev='hd'/>
	I0930 20:48:42.474459   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)     <bootmenu enable='no'/>
	I0930 20:48:42.474472   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)   </os>
	I0930 20:48:42.474483   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)   <devices>
	I0930 20:48:42.474493   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)     <disk type='file' device='cdrom'>
	I0930 20:48:42.474504   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/kubernetes-upgrade-810093/boot2docker.iso'/>
	I0930 20:48:42.474515   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)       <target dev='hdc' bus='scsi'/>
	I0930 20:48:42.474525   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)       <readonly/>
	I0930 20:48:42.474545   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)     </disk>
	I0930 20:48:42.474579   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)     <disk type='file' device='disk'>
	I0930 20:48:42.474609   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 20:48:42.474628   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/kubernetes-upgrade-810093/kubernetes-upgrade-810093.rawdisk'/>
	I0930 20:48:42.474649   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)       <target dev='hda' bus='virtio'/>
	I0930 20:48:42.474659   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)     </disk>
	I0930 20:48:42.474671   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)     <interface type='network'>
	I0930 20:48:42.474685   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)       <source network='mk-kubernetes-upgrade-810093'/>
	I0930 20:48:42.474696   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)       <model type='virtio'/>
	I0930 20:48:42.474704   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)     </interface>
	I0930 20:48:42.474714   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)     <interface type='network'>
	I0930 20:48:42.474723   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)       <source network='default'/>
	I0930 20:48:42.474733   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)       <model type='virtio'/>
	I0930 20:48:42.474738   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)     </interface>
	I0930 20:48:42.474746   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)     <serial type='pty'>
	I0930 20:48:42.474760   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)       <target port='0'/>
	I0930 20:48:42.474778   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)     </serial>
	I0930 20:48:42.474790   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)     <console type='pty'>
	I0930 20:48:42.474801   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)       <target type='serial' port='0'/>
	I0930 20:48:42.474826   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)     </console>
	I0930 20:48:42.474850   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)     <rng model='virtio'>
	I0930 20:48:42.474865   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)       <backend model='random'>/dev/random</backend>
	I0930 20:48:42.474875   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)     </rng>
	I0930 20:48:42.474883   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)     
	I0930 20:48:42.474894   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)     
	I0930 20:48:42.474902   52674 main.go:141] libmachine: (kubernetes-upgrade-810093)   </devices>
	I0930 20:48:42.474909   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) </domain>
	I0930 20:48:42.474917   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) 
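The domain XML printed above is then handed to libvirt ("define libvirt domain using xml" followed by "Creating domain..."). A minimal sketch of that pair of calls with the Go bindings, under the same assumptions as the network sketch:

// Sketch: define a persistent domain from XML and boot it.
package sketch

import (
	libvirt "libvirt.org/go/libvirt"
)

func defineAndStartDomain(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system") // assumed URI
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create() // "Creating domain..." / virsh start
}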
	I0930 20:48:42.479261   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:03:7a:ad in network default
	I0930 20:48:42.479861   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Ensuring networks are active...
	I0930 20:48:42.479881   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:48:42.480593   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Ensuring network default is active
	I0930 20:48:42.480917   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Ensuring network mk-kubernetes-upgrade-810093 is active
	I0930 20:48:42.481472   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Getting domain xml...
	I0930 20:48:42.482335   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Creating domain...
	I0930 20:48:43.927151   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Waiting to get IP...
	I0930 20:48:43.928268   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:48:43.929048   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | unable to find current IP address of domain kubernetes-upgrade-810093 in network mk-kubernetes-upgrade-810093
	I0930 20:48:43.929075   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | I0930 20:48:43.928964   53628 retry.go:31] will retry after 258.923956ms: waiting for machine to come up
	I0930 20:48:44.189585   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:48:44.190135   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | unable to find current IP address of domain kubernetes-upgrade-810093 in network mk-kubernetes-upgrade-810093
	I0930 20:48:44.190165   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | I0930 20:48:44.190099   53628 retry.go:31] will retry after 340.539623ms: waiting for machine to come up
	I0930 20:48:44.532799   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:48:44.533362   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | unable to find current IP address of domain kubernetes-upgrade-810093 in network mk-kubernetes-upgrade-810093
	I0930 20:48:44.533392   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | I0930 20:48:44.533304   53628 retry.go:31] will retry after 414.041544ms: waiting for machine to come up
	I0930 20:48:44.948938   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:48:44.949505   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | unable to find current IP address of domain kubernetes-upgrade-810093 in network mk-kubernetes-upgrade-810093
	I0930 20:48:44.949536   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | I0930 20:48:44.949463   53628 retry.go:31] will retry after 411.086518ms: waiting for machine to come up
	I0930 20:48:45.362136   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:48:45.362644   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | unable to find current IP address of domain kubernetes-upgrade-810093 in network mk-kubernetes-upgrade-810093
	I0930 20:48:45.362672   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | I0930 20:48:45.362563   53628 retry.go:31] will retry after 503.442618ms: waiting for machine to come up
	I0930 20:48:45.867326   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:48:45.867831   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | unable to find current IP address of domain kubernetes-upgrade-810093 in network mk-kubernetes-upgrade-810093
	I0930 20:48:45.867862   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | I0930 20:48:45.867789   53628 retry.go:31] will retry after 609.222673ms: waiting for machine to come up
	I0930 20:48:46.478396   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:48:46.478911   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | unable to find current IP address of domain kubernetes-upgrade-810093 in network mk-kubernetes-upgrade-810093
	I0930 20:48:46.478941   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | I0930 20:48:46.478859   53628 retry.go:31] will retry after 772.128772ms: waiting for machine to come up
	I0930 20:48:47.252600   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:48:47.252977   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | unable to find current IP address of domain kubernetes-upgrade-810093 in network mk-kubernetes-upgrade-810093
	I0930 20:48:47.253010   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | I0930 20:48:47.252955   53628 retry.go:31] will retry after 953.800362ms: waiting for machine to come up
	I0930 20:48:48.208841   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:48:48.209239   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | unable to find current IP address of domain kubernetes-upgrade-810093 in network mk-kubernetes-upgrade-810093
	I0930 20:48:48.209260   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | I0930 20:48:48.209208   53628 retry.go:31] will retry after 1.124895983s: waiting for machine to come up
	I0930 20:48:49.335369   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:48:49.335915   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | unable to find current IP address of domain kubernetes-upgrade-810093 in network mk-kubernetes-upgrade-810093
	I0930 20:48:49.335943   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | I0930 20:48:49.335862   53628 retry.go:31] will retry after 1.897911197s: waiting for machine to come up
	I0930 20:48:51.235892   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:48:51.236423   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | unable to find current IP address of domain kubernetes-upgrade-810093 in network mk-kubernetes-upgrade-810093
	I0930 20:48:51.236452   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | I0930 20:48:51.236373   53628 retry.go:31] will retry after 2.563747536s: waiting for machine to come up
	I0930 20:48:53.802760   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:48:53.803270   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | unable to find current IP address of domain kubernetes-upgrade-810093 in network mk-kubernetes-upgrade-810093
	I0930 20:48:53.803298   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | I0930 20:48:53.803229   53628 retry.go:31] will retry after 2.831462241s: waiting for machine to come up
	I0930 20:48:56.638337   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:48:56.638636   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | unable to find current IP address of domain kubernetes-upgrade-810093 in network mk-kubernetes-upgrade-810093
	I0930 20:48:56.638653   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | I0930 20:48:56.638594   53628 retry.go:31] will retry after 4.126088242s: waiting for machine to come up
	I0930 20:49:00.768476   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:00.768905   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | unable to find current IP address of domain kubernetes-upgrade-810093 in network mk-kubernetes-upgrade-810093
	I0930 20:49:00.768931   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | I0930 20:49:00.768845   53628 retry.go:31] will retry after 3.494922377s: waiting for machine to come up
	I0930 20:49:04.266991   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:04.267598   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Found IP for machine: 192.168.39.233
	I0930 20:49:04.267624   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has current primary IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
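The "Waiting to get IP" loop above retries with steadily longer, slightly randomized delays (259ms, 340ms, ... 4.1s) until the DHCP lease appears. A stdlib-only sketch of that pattern, not minikube's actual retry.go; the waitFor name and the initial/maximum delays are assumptions:

// Sketch: poll a probe with growing, jittered delays until it reports
// success or the overall timeout expires.
package sketch

import (
	"errors"
	"math/rand"
	"time"
)

func waitFor(probe func() (bool, error), timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond // assumed starting delay
	for time.Now().Before(deadline) {
		ok, err := probe()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		// sleep with a little jitter, then roughly double the delay
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
		if delay < 4*time.Second { // assumed cap
			delay *= 2
		}
	}
	return errors.New("timed out waiting for condition")
}

Here the probe would check the libvirt DHCP leases for the machine's MAC address, as the DBG lines above do.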
	I0930 20:49:04.267641   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Reserving static IP address...
	I0930 20:49:04.268002   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-810093", mac: "52:54:00:dc:41:fe", ip: "192.168.39.233"} in network mk-kubernetes-upgrade-810093
	I0930 20:49:04.348146   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | Getting to WaitForSSH function...
	I0930 20:49:04.348179   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Reserved static IP address: 192.168.39.233
	I0930 20:49:04.348192   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Waiting for SSH to be available...
	I0930 20:49:04.351244   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:04.351968   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093
	I0930 20:49:04.351996   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | unable to find defined IP address of network mk-kubernetes-upgrade-810093 interface with MAC address 52:54:00:dc:41:fe
	I0930 20:49:04.352186   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | Using SSH client type: external
	I0930 20:49:04.352229   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/kubernetes-upgrade-810093/id_rsa (-rw-------)
	I0930 20:49:04.352292   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/kubernetes-upgrade-810093/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 20:49:04.352320   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | About to run SSH command:
	I0930 20:49:04.352346   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | exit 0
	I0930 20:49:04.355970   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | SSH cmd err, output: exit status 255: 
	I0930 20:49:04.355991   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0930 20:49:04.355998   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | command : exit 0
	I0930 20:49:04.356005   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | err     : exit status 255
	I0930 20:49:04.356015   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | output  : 
	I0930 20:49:07.356880   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | Getting to WaitForSSH function...
	I0930 20:49:07.359398   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:07.359956   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:48:56 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:49:07.359998   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:07.360186   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | Using SSH client type: external
	I0930 20:49:07.360216   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/kubernetes-upgrade-810093/id_rsa (-rw-------)
	I0930 20:49:07.360247   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.233 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/kubernetes-upgrade-810093/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 20:49:07.360261   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | About to run SSH command:
	I0930 20:49:07.360270   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | exit 0
	I0930 20:49:07.487430   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | SSH cmd err, output: <nil>: 
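Once an address is leased, "Waiting for SSH to be available" amounts to running the external ssh client with `exit 0` against the guest until it returns status 0 (the first attempt above fails with 255, the second succeeds). A rough sketch of that probe; the docker user, key-path handling and the 3s poll interval mirror the log but are assumptions, not the provisioner's real code:

// Sketch: consider SSH ready when `ssh ... <ip> exit 0` exits cleanly.
package sketch

import (
	"os/exec"
	"time"
)

func sshReady(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+ip, "exit 0")
	return cmd.Run() == nil // nil means the remote command exited 0
}

func waitForSSH(ip, keyPath string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if sshReady(ip, keyPath) {
			return true
		}
		time.Sleep(3 * time.Second) // the log retries roughly every 3s
	}
	return false
}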
	I0930 20:49:07.487733   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) KVM machine creation complete!
	I0930 20:49:07.488066   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetConfigRaw
	I0930 20:49:07.488682   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:49:07.488879   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:49:07.489062   52674 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 20:49:07.489081   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetState
	I0930 20:49:07.490370   52674 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 20:49:07.490384   52674 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 20:49:07.490389   52674 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 20:49:07.490408   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:49:07.492920   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:07.493283   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:48:56 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:49:07.493312   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:07.493521   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:49:07.493704   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:49:07.493850   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:49:07.493968   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:49:07.494144   52674 main.go:141] libmachine: Using SSH client type: native
	I0930 20:49:07.494335   52674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0930 20:49:07.494346   52674 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 20:49:07.602813   52674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 20:49:07.602838   52674 main.go:141] libmachine: Detecting the provisioner...
	I0930 20:49:07.602845   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:49:07.606572   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:07.607096   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:48:56 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:49:07.607129   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:07.607359   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:49:07.607568   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:49:07.607762   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:49:07.607948   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:49:07.608106   52674 main.go:141] libmachine: Using SSH client type: native
	I0930 20:49:07.608271   52674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0930 20:49:07.608281   52674 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 20:49:07.720452   52674 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 20:49:07.720533   52674 main.go:141] libmachine: found compatible host: buildroot
	I0930 20:49:07.720546   52674 main.go:141] libmachine: Provisioning with buildroot...
	I0930 20:49:07.720558   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetMachineName
	I0930 20:49:07.720848   52674 buildroot.go:166] provisioning hostname "kubernetes-upgrade-810093"
	I0930 20:49:07.720879   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetMachineName
	I0930 20:49:07.721101   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:49:07.724415   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:07.724827   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:48:56 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:49:07.724857   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:07.725103   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:49:07.725344   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:49:07.725500   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:49:07.725645   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:49:07.725901   52674 main.go:141] libmachine: Using SSH client type: native
	I0930 20:49:07.726077   52674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0930 20:49:07.726088   52674 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-810093 && echo "kubernetes-upgrade-810093" | sudo tee /etc/hostname
	I0930 20:49:07.850712   52674 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-810093
	
	I0930 20:49:07.850745   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:49:07.853491   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:07.853820   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:48:56 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:49:07.853841   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:07.853976   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:49:07.854173   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:49:07.854334   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:49:07.854441   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:49:07.854607   52674 main.go:141] libmachine: Using SSH client type: native
	I0930 20:49:07.854821   52674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0930 20:49:07.854839   52674 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-810093' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-810093/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-810093' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 20:49:07.976174   52674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 20:49:07.976206   52674 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 20:49:07.976241   52674 buildroot.go:174] setting up certificates
	I0930 20:49:07.976249   52674 provision.go:84] configureAuth start
	I0930 20:49:07.976257   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetMachineName
	I0930 20:49:07.976537   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetIP
	I0930 20:49:07.979475   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:07.979847   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:48:56 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:49:07.979879   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:07.980012   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:49:07.982512   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:07.982785   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:48:56 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:49:07.982817   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:07.983045   52674 provision.go:143] copyHostCerts
	I0930 20:49:07.983105   52674 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 20:49:07.983119   52674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:49:07.983183   52674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 20:49:07.983336   52674 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 20:49:07.983348   52674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:49:07.983463   52674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 20:49:07.983646   52674 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 20:49:07.983659   52674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:49:07.983695   52674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 20:49:07.983785   52674 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-810093 san=[127.0.0.1 192.168.39.233 kubernetes-upgrade-810093 localhost minikube]
	I0930 20:49:08.054526   52674 provision.go:177] copyRemoteCerts
	I0930 20:49:08.054595   52674 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 20:49:08.054619   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:49:08.057312   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:08.057604   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:48:56 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:49:08.057629   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:08.057817   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:49:08.058025   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:49:08.058195   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:49:08.058300   52674 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/kubernetes-upgrade-810093/id_rsa Username:docker}
	I0930 20:49:08.141108   52674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 20:49:08.164029   52674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0930 20:49:08.188290   52674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 20:49:08.211776   52674 provision.go:87] duration metric: took 235.514872ms to configureAuth
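configureAuth above copies the host CA material and generates a server certificate whose SANs cover 127.0.0.1, the guest IP, the machine name, localhost and minikube. A compact sketch of issuing such a certificate with crypto/x509; the newServerCert helper, key size, validity window and Organization are assumptions, and the real provisioner may differ:

// Sketch: issue a server certificate with the SAN list seen in the log,
// signed by an already-loaded CA certificate and key.
package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
	dnsNames []string, ips []net.IP) (derCert []byte, key *rsa.PrivateKey, err error) {

	key, err = rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"minikube"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames, // e.g. kubernetes-upgrade-810093, localhost, minikube
		IPAddresses:  ips,      // e.g. 127.0.0.1, 192.168.39.233
	}
	derCert, err = x509.CreateCertificate(rand.Reader, &tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return derCert, key, nil
}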
	I0930 20:49:08.211806   52674 buildroot.go:189] setting minikube options for container-runtime
	I0930 20:49:08.211967   52674 config.go:182] Loaded profile config "kubernetes-upgrade-810093": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0930 20:49:08.212032   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:49:08.214808   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:08.215102   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:48:56 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:49:08.215136   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:08.215356   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:49:08.215587   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:49:08.215738   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:49:08.215862   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:49:08.215980   52674 main.go:141] libmachine: Using SSH client type: native
	I0930 20:49:08.216154   52674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0930 20:49:08.216169   52674 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 20:49:08.441511   52674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 20:49:08.441560   52674 main.go:141] libmachine: Checking connection to Docker...
	I0930 20:49:08.441572   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetURL
	I0930 20:49:08.442734   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | Using libvirt version 6000000
	I0930 20:49:08.444699   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:08.444996   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:48:56 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:49:08.445024   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:08.445184   52674 main.go:141] libmachine: Docker is up and running!
	I0930 20:49:08.445200   52674 main.go:141] libmachine: Reticulating splines...
	I0930 20:49:08.445209   52674 client.go:171] duration metric: took 26.553556125s to LocalClient.Create
	I0930 20:49:08.445238   52674 start.go:167] duration metric: took 26.553624111s to libmachine.API.Create "kubernetes-upgrade-810093"
	I0930 20:49:08.445251   52674 start.go:293] postStartSetup for "kubernetes-upgrade-810093" (driver="kvm2")
	I0930 20:49:08.445291   52674 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 20:49:08.445315   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:49:08.445531   52674 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 20:49:08.445551   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:49:08.447909   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:08.448197   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:48:56 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:49:08.448224   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:08.448368   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:49:08.448551   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:49:08.448745   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:49:08.448951   52674 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/kubernetes-upgrade-810093/id_rsa Username:docker}
	I0930 20:49:08.533410   52674 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 20:49:08.538619   52674 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 20:49:08.538653   52674 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 20:49:08.538723   52674 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 20:49:08.538798   52674 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 20:49:08.538886   52674 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 20:49:08.551771   52674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:49:08.578517   52674 start.go:296] duration metric: took 133.22325ms for postStartSetup
	I0930 20:49:08.578585   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetConfigRaw
	I0930 20:49:08.579390   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetIP
	I0930 20:49:08.582360   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:08.582838   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:48:56 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:49:08.582866   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:08.583112   52674 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/config.json ...
	I0930 20:49:08.583351   52674 start.go:128] duration metric: took 26.714612025s to createHost
	I0930 20:49:08.583377   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:49:08.586082   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:08.586399   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:48:56 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:49:08.586436   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:08.586610   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:49:08.586800   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:49:08.586946   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:49:08.587087   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:49:08.587231   52674 main.go:141] libmachine: Using SSH client type: native
	I0930 20:49:08.587405   52674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0930 20:49:08.587415   52674 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 20:49:08.700064   52674 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727729348.679660367
	
	I0930 20:49:08.700090   52674 fix.go:216] guest clock: 1727729348.679660367
	I0930 20:49:08.700100   52674 fix.go:229] Guest: 2024-09-30 20:49:08.679660367 +0000 UTC Remote: 2024-09-30 20:49:08.58336658 +0000 UTC m=+92.066264569 (delta=96.293787ms)
	I0930 20:49:08.700129   52674 fix.go:200] guest clock delta is within tolerance: 96.293787ms
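The guest-clock check above runs `date +%s.%N` on the guest and compares the result with the host clock, accepting the ~96ms delta as within tolerance. A small sketch of that comparison; clockDelta and the parsing details are assumptions:

// Sketch: parse `date +%s.%N` output (e.g. "1727729348.679660367") and
// report the absolute offset from the host clock.
package sketch

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func clockDelta(guestOut string) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, fmt.Errorf("bad seconds: %w", err)
	}
	var nsec int64
	if len(parts) == 2 {
		// normalize the fractional part to exactly 9 digits of nanoseconds
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return 0, fmt.Errorf("bad nanoseconds: %w", err)
		}
	}
	delta := time.Since(time.Unix(sec, nsec))
	if delta < 0 {
		delta = -delta
	}
	return delta, nil
}

A caller would then compare the result against a tolerance (the run above treats 96.293787ms as acceptable).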
	I0930 20:49:08.700137   52674 start.go:83] releasing machines lock for "kubernetes-upgrade-810093", held for 26.831686675s
	I0930 20:49:08.700162   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:49:08.700440   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetIP
	I0930 20:49:08.703474   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:08.703876   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:48:56 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:49:08.703904   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:08.704085   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:49:08.704663   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:49:08.704835   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:49:08.704934   52674 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 20:49:08.705021   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:49:08.705033   52674 ssh_runner.go:195] Run: cat /version.json
	I0930 20:49:08.705058   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:49:08.707852   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:08.708180   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:48:56 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:49:08.708203   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:08.708224   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:08.708509   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:49:08.708710   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:49:08.708770   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:48:56 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:49:08.708799   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:08.708923   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:49:08.708930   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:49:08.709121   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:49:08.709102   52674 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/kubernetes-upgrade-810093/id_rsa Username:docker}
	I0930 20:49:08.709268   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:49:08.709404   52674 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/kubernetes-upgrade-810093/id_rsa Username:docker}
	I0930 20:49:08.834552   52674 ssh_runner.go:195] Run: systemctl --version
	I0930 20:49:08.840485   52674 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 20:49:09.002906   52674 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 20:49:09.009596   52674 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 20:49:09.009670   52674 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 20:49:09.032801   52674 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 20:49:09.032828   52674 start.go:495] detecting cgroup driver to use...
	I0930 20:49:09.032900   52674 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 20:49:09.049363   52674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 20:49:09.062819   52674 docker.go:217] disabling cri-docker service (if available) ...
	I0930 20:49:09.062875   52674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 20:49:09.078315   52674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 20:49:09.092435   52674 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 20:49:09.212902   52674 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 20:49:09.364715   52674 docker.go:233] disabling docker service ...
	I0930 20:49:09.364787   52674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 20:49:09.382174   52674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 20:49:09.395327   52674 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 20:49:09.562984   52674 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 20:49:09.695982   52674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 20:49:09.710299   52674 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 20:49:09.729497   52674 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0930 20:49:09.729582   52674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:49:09.740303   52674 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 20:49:09.740407   52674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:49:09.751523   52674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:49:09.763234   52674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:49:09.775095   52674 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 20:49:09.786608   52674 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 20:49:09.797150   52674 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 20:49:09.797221   52674 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 20:49:09.810887   52674 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 20:49:09.821704   52674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:49:09.941956   52674 ssh_runner.go:195] Run: sudo systemctl restart crio
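
Note: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image to registry.k8s.io/pause:3.2 and switch CRI-O to the cgroupfs cgroup manager before the daemon-reload/restart. A minimal Go sketch of an equivalent in-place rewrite is below; the file path and key names come from the log, everything else (program structure, error handling) is illustrative and not minikube's actual implementation.

// rewrite_crio_conf.go - illustrative only; mirrors the sed edits shown in the log above.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}

	// Pin the pause image and the cgroup manager, as the logged sed commands do.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
	log.Printf("updated %s; restart crio to apply", path)
}
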
	I0930 20:49:10.037671   52674 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 20:49:10.037740   52674 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 20:49:10.042652   52674 start.go:563] Will wait 60s for crictl version
	I0930 20:49:10.042730   52674 ssh_runner.go:195] Run: which crictl
	I0930 20:49:10.046477   52674 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 20:49:10.091999   52674 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 20:49:10.092095   52674 ssh_runner.go:195] Run: crio --version
	I0930 20:49:10.126715   52674 ssh_runner.go:195] Run: crio --version
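
Note: after restarting CRI-O, the log waits up to 60s for the socket and for `sudo /usr/bin/crictl version` to answer. The sketch below is an illustrative, stand-alone version of that wait loop, assuming crictl at the path shown in the log; it is not minikube's start.go logic.

// wait_crictl.go - illustrative sketch of the 60s wait for crictl version seen above.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(60 * time.Second) // "Will wait 60s for crictl version"
	for {
		out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
		if err == nil {
			log.Printf("crictl is up:\n%s", out)
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("crictl did not become ready within 60s: %v\n%s", err, out)
		}
		time.Sleep(time.Second)
	}
}
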
	I0930 20:49:10.159082   52674 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0930 20:49:10.160057   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetIP
	I0930 20:49:10.163479   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:10.163822   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:48:56 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:49:10.163847   52674 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:49:10.164094   52674 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 20:49:10.168109   52674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 20:49:10.179798   52674 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-810093 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-810093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 20:49:10.179902   52674 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 20:49:10.179945   52674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 20:49:10.222926   52674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0930 20:49:10.223028   52674 ssh_runner.go:195] Run: which lz4
	I0930 20:49:10.228616   52674 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 20:49:10.233058   52674 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 20:49:10.233092   52674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0930 20:49:11.784683   52674 crio.go:462] duration metric: took 1.556106425s to copy over tarball
	I0930 20:49:11.784747   52674 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 20:49:14.483091   52674 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.698316805s)
	I0930 20:49:14.483132   52674 crio.go:469] duration metric: took 2.698420619s to extract the tarball
	I0930 20:49:14.483142   52674 ssh_runner.go:146] rm: /preloaded.tar.lz4
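
Note: the preload flow above checks whether /preloaded.tar.lz4 already exists on the node, copies the cached tarball over when it does not, extracts it under /var with `tar -I lz4`, and removes it afterwards. A minimal local Go sketch of the check/extract/cleanup steps follows; the copy in the log happens over SSH, which is omitted here, and the code is not minikube's preload implementation.

// preload_extract.go - illustrative sketch of the preload check/extract flow above.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	if _, err := os.Stat(tarball); err != nil {
		// In the log this is the point where the tarball is copied from the local cache.
		log.Fatalf("preload tarball missing, would copy it here first: %v", err)
	}

	// Same extraction command the log runs on the node.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}

	// The tarball is removed once extracted, as in the log.
	if err := os.Remove(tarball); err != nil {
		log.Printf("cleanup failed: %v", err)
	}
}
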
	I0930 20:49:14.526295   52674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 20:49:14.572031   52674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0930 20:49:14.572063   52674 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0930 20:49:14.572536   52674 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 20:49:14.572593   52674 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0930 20:49:14.572624   52674 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 20:49:14.572547   52674 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0930 20:49:14.572659   52674 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 20:49:14.572545   52674 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 20:49:14.572599   52674 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0930 20:49:14.572605   52674 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 20:49:14.574149   52674 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 20:49:14.574166   52674 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 20:49:14.574216   52674 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0930 20:49:14.574221   52674 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 20:49:14.574239   52674 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 20:49:14.574156   52674 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0930 20:49:14.574155   52674 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0930 20:49:14.574390   52674 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 20:49:14.813956   52674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0930 20:49:14.855143   52674 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0930 20:49:14.855195   52674 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0930 20:49:14.855235   52674 ssh_runner.go:195] Run: which crictl
	I0930 20:49:14.859541   52674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 20:49:14.868691   52674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0930 20:49:14.891497   52674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 20:49:14.892357   52674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0930 20:49:14.892863   52674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0930 20:49:14.896798   52674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0930 20:49:14.915650   52674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0930 20:49:14.920333   52674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 20:49:14.946250   52674 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0930 20:49:14.946305   52674 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0930 20:49:14.946356   52674 ssh_runner.go:195] Run: which crictl
	I0930 20:49:15.021978   52674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 20:49:15.068492   52674 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0930 20:49:15.068597   52674 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0930 20:49:15.068659   52674 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 20:49:15.068712   52674 ssh_runner.go:195] Run: which crictl
	I0930 20:49:15.068607   52674 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 20:49:15.068808   52674 ssh_runner.go:195] Run: which crictl
	I0930 20:49:15.068520   52674 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0930 20:49:15.068875   52674 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 20:49:15.068905   52674 ssh_runner.go:195] Run: which crictl
	I0930 20:49:15.068620   52674 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0930 20:49:15.068972   52674 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0930 20:49:15.069011   52674 ssh_runner.go:195] Run: which crictl
	I0930 20:49:15.074500   52674 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0930 20:49:15.074534   52674 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 20:49:15.074572   52674 ssh_runner.go:195] Run: which crictl
	I0930 20:49:15.074579   52674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 20:49:15.109609   52674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0930 20:49:15.109688   52674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 20:49:15.109709   52674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 20:49:15.109782   52674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 20:49:15.109796   52674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 20:49:15.133436   52674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 20:49:15.133730   52674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 20:49:15.221529   52674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 20:49:15.225929   52674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 20:49:15.225991   52674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 20:49:15.226030   52674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 20:49:15.263130   52674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 20:49:15.263189   52674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 20:49:15.348466   52674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 20:49:15.376219   52674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 20:49:15.376335   52674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 20:49:15.376336   52674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 20:49:15.420699   52674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0930 20:49:15.420799   52674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 20:49:15.420854   52674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0930 20:49:15.489670   52674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0930 20:49:15.489720   52674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0930 20:49:15.489808   52674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0930 20:49:15.502702   52674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0930 20:49:15.879373   52674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 20:49:16.021471   52674 cache_images.go:92] duration metric: took 1.448995165s to LoadCachedImages
	W0930 20:49:16.021553   52674 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
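
Note: the "needs transfer" messages above come from inspecting each image in the runtime (`sudo podman image inspect --format {{.Id}}`), comparing the result to the expected hash, removing the stale reference with `crictl rmi`, and then trying to load the image from the local cache, which fails here because the cached coredns_1.7.0 file does not exist. The sketch below only illustrates that decision for a single image, using the image name, hash, and cache path quoted in the log; it is not minikube's cache_images.go.

// image_check.go - illustrative sketch of the "needs transfer" check above.
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	image := "registry.k8s.io/coredns:1.7.0"
	wantID := "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" // from the log
	cacheFile := "/home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0"

	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err == nil && strings.Contains(strings.TrimSpace(string(out)), wantID) {
		log.Printf("%s already present at the expected hash, nothing to do", image)
		return
	}

	log.Printf("%q needs transfer", image)
	if _, err := os.Stat(cacheFile); err != nil {
		// This is exactly the failure mode logged above: the cached tarball is absent.
		log.Fatalf("cannot load from cache: %v", err)
	}
	// Loading the image from the cache file would happen here.
}
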
	I0930 20:49:16.021570   52674 kubeadm.go:934] updating node { 192.168.39.233 8443 v1.20.0 crio true true} ...
	I0930 20:49:16.021700   52674 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-810093 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-810093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 20:49:16.021791   52674 ssh_runner.go:195] Run: crio config
	I0930 20:49:16.071386   52674 cni.go:84] Creating CNI manager for ""
	I0930 20:49:16.071413   52674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 20:49:16.071425   52674 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 20:49:16.071443   52674 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.233 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-810093 NodeName:kubernetes-upgrade-810093 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0930 20:49:16.071621   52674 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.233
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-810093"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.233
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 20:49:16.071688   52674 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0930 20:49:16.085252   52674 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 20:49:16.085337   52674 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 20:49:16.096229   52674 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0930 20:49:16.113403   52674 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 20:49:16.131226   52674 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0930 20:49:16.152165   52674 ssh_runner.go:195] Run: grep 192.168.39.233	control-plane.minikube.internal$ /etc/hosts
	I0930 20:49:16.156295   52674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.233	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
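
Note: the bash one-liner above makes the control-plane.minikube.internal mapping idempotent: it drops any existing line for that hostname and appends the current IP. Below is an equivalent Go sketch under the same tab-separated /etc/hosts convention used in the log; it is an illustration, not minikube's code.

// hosts_entry.go - illustrative Go equivalent of the /etc/hosts one-liner above.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const (
		hostsPath = "/etc/hosts"
		hostname  = "control-plane.minikube.internal"
		entry     = "192.168.39.233\t" + hostname
	)

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // stale entry, same as the `grep -v` in the log
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
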
	I0930 20:49:16.169119   52674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:49:16.296425   52674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:49:16.313189   52674 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093 for IP: 192.168.39.233
	I0930 20:49:16.313223   52674 certs.go:194] generating shared ca certs ...
	I0930 20:49:16.313243   52674 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:49:16.313445   52674 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 20:49:16.313503   52674 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 20:49:16.313515   52674 certs.go:256] generating profile certs ...
	I0930 20:49:16.313592   52674 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/client.key
	I0930 20:49:16.313620   52674 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/client.crt with IP's: []
	I0930 20:49:16.497403   52674 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/client.crt ...
	I0930 20:49:16.497445   52674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/client.crt: {Name:mkb2d04a5a4fee4e63a9d3038c7d0e0d639c1cf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:49:16.497722   52674 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/client.key ...
	I0930 20:49:16.497761   52674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/client.key: {Name:mk48001bb719768c52fd43b70b434997b45f6a34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:49:16.497899   52674 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/apiserver.key.372be7b4
	I0930 20:49:16.497925   52674 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/apiserver.crt.372be7b4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.233]
	I0930 20:49:16.750393   52674 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/apiserver.crt.372be7b4 ...
	I0930 20:49:16.750424   52674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/apiserver.crt.372be7b4: {Name:mke17ab8d6ea4ccb242006bdd29faf418e8737d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:49:16.750594   52674 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/apiserver.key.372be7b4 ...
	I0930 20:49:16.750617   52674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/apiserver.key.372be7b4: {Name:mkae35d78832826340aa8ee45e08a756ddf75ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:49:16.750722   52674 certs.go:381] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/apiserver.crt.372be7b4 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/apiserver.crt
	I0930 20:49:16.750817   52674 certs.go:385] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/apiserver.key.372be7b4 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/apiserver.key
	I0930 20:49:16.750897   52674 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/proxy-client.key
	I0930 20:49:16.750918   52674 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/proxy-client.crt with IP's: []
	I0930 20:49:16.917441   52674 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/proxy-client.crt ...
	I0930 20:49:16.917476   52674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/proxy-client.crt: {Name:mkfc0aef0026f6d3b7eac6b11585809ed47f2765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:49:16.917680   52674 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/proxy-client.key ...
	I0930 20:49:16.917698   52674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/proxy-client.key: {Name:mkb9734932af3f3e3c5db21be886341f3c4d72bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
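
Note: the profile certificates generated above include an apiserver serving certificate whose SANs are the IPs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.233). The Go sketch below shows how a certificate with those IP SANs can be produced with crypto/x509; it is self-signed for brevity and is not minikube's crypto.go, which signs with the minikubeCA key.

// cert_sans.go - illustrative sketch of generating a serving cert with the IP SANs from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration value from the log
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.233"),
		},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}
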
	I0930 20:49:16.917895   52674 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 20:49:16.917935   52674 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 20:49:16.917944   52674 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 20:49:16.917965   52674 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 20:49:16.917989   52674 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 20:49:16.918011   52674 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 20:49:16.918047   52674 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:49:16.918787   52674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 20:49:16.950803   52674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 20:49:16.977433   52674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 20:49:17.014302   52674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 20:49:17.043105   52674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0930 20:49:17.072730   52674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 20:49:17.108329   52674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 20:49:17.144117   52674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 20:49:17.194811   52674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 20:49:17.241579   52674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 20:49:17.268785   52674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 20:49:17.293830   52674 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 20:49:17.312671   52674 ssh_runner.go:195] Run: openssl version
	I0930 20:49:17.318800   52674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 20:49:17.330327   52674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:49:17.336495   52674 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:49:17.336574   52674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:49:17.343156   52674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 20:49:17.355365   52674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 20:49:17.368199   52674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 20:49:17.373062   52674 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 20:49:17.373135   52674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 20:49:17.381337   52674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 20:49:17.393018   52674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 20:49:17.405546   52674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 20:49:17.410862   52674 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 20:49:17.410939   52674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 20:49:17.417242   52674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
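
Note: the trust-store setup above installs each PEM under /usr/share/ca-certificates and symlinks it into /etc/ssl/certs under its OpenSSL subject hash (the b5213941.0-style names). The sketch below reproduces those two steps for one certificate using the same `openssl x509 -hash -noout` command the log runs; it is illustrative only.

// ca_trust_link.go - illustrative sketch of the subject-hash symlink setup above.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"

	// Same command the log runs to compute the subject hash.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))

	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // replace a stale link if present
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	log.Printf("linked %s -> %s", link, cert)
}
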
	I0930 20:49:17.429747   52674 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 20:49:17.434215   52674 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 20:49:17.434281   52674 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-810093 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-810093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:49:17.434376   52674 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 20:49:17.434437   52674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 20:49:17.476908   52674 cri.go:89] found id: ""
	I0930 20:49:17.476995   52674 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 20:49:17.487745   52674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 20:49:17.501978   52674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 20:49:17.515872   52674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 20:49:17.515896   52674 kubeadm.go:157] found existing configuration files:
	
	I0930 20:49:17.515949   52674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 20:49:17.526978   52674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 20:49:17.527046   52674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 20:49:17.537168   52674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 20:49:17.547058   52674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 20:49:17.547134   52674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 20:49:17.559102   52674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 20:49:17.569904   52674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 20:49:17.569979   52674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 20:49:17.580735   52674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 20:49:17.591342   52674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 20:49:17.591443   52674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 20:49:17.603350   52674 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 20:49:17.722060   52674 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0930 20:49:17.722141   52674 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 20:49:17.865691   52674 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 20:49:17.865830   52674 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 20:49:17.866000   52674 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0930 20:49:18.096489   52674 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 20:49:18.098551   52674 out.go:235]   - Generating certificates and keys ...
	I0930 20:49:18.098657   52674 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 20:49:18.098774   52674 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 20:49:18.273526   52674 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0930 20:49:18.339090   52674 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0930 20:49:18.457521   52674 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0930 20:49:18.765159   52674 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0930 20:49:19.000299   52674 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0930 20:49:19.000507   52674 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-810093 localhost] and IPs [192.168.39.233 127.0.0.1 ::1]
	I0930 20:49:19.239384   52674 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0930 20:49:19.239603   52674 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-810093 localhost] and IPs [192.168.39.233 127.0.0.1 ::1]
	I0930 20:49:19.482825   52674 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0930 20:49:19.647790   52674 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0930 20:49:19.823627   52674 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0930 20:49:19.823994   52674 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 20:49:20.162701   52674 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 20:49:20.383126   52674 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 20:49:20.644749   52674 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 20:49:20.748651   52674 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 20:49:20.766716   52674 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 20:49:20.768136   52674 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 20:49:20.768201   52674 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 20:49:20.927561   52674 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 20:49:20.930205   52674 out.go:235]   - Booting up control plane ...
	I0930 20:49:20.930342   52674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 20:49:20.934537   52674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 20:49:20.935576   52674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 20:49:20.940776   52674 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 20:49:20.952638   52674 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0930 20:50:00.948577   52674 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0930 20:50:00.949364   52674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 20:50:00.949603   52674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 20:50:05.949303   52674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 20:50:05.949623   52674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 20:50:15.948880   52674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 20:50:15.949164   52674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 20:50:35.948824   52674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 20:50:35.949053   52674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 20:51:15.950619   52674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 20:51:15.950807   52674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 20:51:15.950817   52674 kubeadm.go:310] 
	I0930 20:51:15.950852   52674 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0930 20:51:15.950885   52674 kubeadm.go:310] 		timed out waiting for the condition
	I0930 20:51:15.950892   52674 kubeadm.go:310] 
	I0930 20:51:15.950994   52674 kubeadm.go:310] 	This error is likely caused by:
	I0930 20:51:15.951060   52674 kubeadm.go:310] 		- The kubelet is not running
	I0930 20:51:15.951189   52674 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0930 20:51:15.951199   52674 kubeadm.go:310] 
	I0930 20:51:15.951285   52674 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0930 20:51:15.951319   52674 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0930 20:51:15.951348   52674 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0930 20:51:15.951354   52674 kubeadm.go:310] 
	I0930 20:51:15.951449   52674 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0930 20:51:15.951520   52674 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0930 20:51:15.951540   52674 kubeadm.go:310] 
	I0930 20:51:15.951631   52674 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0930 20:51:15.951764   52674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0930 20:51:15.951866   52674 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0930 20:51:15.951981   52674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0930 20:51:15.951991   52674 kubeadm.go:310] 
	I0930 20:51:15.952759   52674 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 20:51:15.952888   52674 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0930 20:51:15.952957   52674 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
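
Note: the failure above is kubeadm's kubelet-check: it polls http://localhost:10248/healthz for up to 4m0s and gives up when the connection keeps being refused, which is what ends this init attempt. The Go sketch below reproduces that polling loop in isolation so the failure mode is easy to follow; the 4m budget comes from the wait-control-plane message, the rest is illustrative and not kubeadm's code.

// kubelet_healthz.go - illustrative version of the kubelet-check loop above.
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	client := &http.Client{Timeout: 5 * time.Second}

	for {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			log.Println("kubelet is healthy")
			return
		}
		if resp != nil {
			resp.Body.Close()
		}
		if time.Now().After(deadline) {
			// This is the state the log ends in: connection refused until the timeout.
			log.Fatalf("kubelet never became healthy (last error: %v)", err)
		}
		time.Sleep(5 * time.Second)
	}
}
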
	W0930 20:51:15.953092   52674 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-810093 localhost] and IPs [192.168.39.233 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-810093 localhost] and IPs [192.168.39.233 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-810093 localhost] and IPs [192.168.39.233 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-810093 localhost] and IPs [192.168.39.233 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0930 20:51:15.953140   52674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0930 20:51:16.406116   52674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 20:51:16.420981   52674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 20:51:16.432004   52674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 20:51:16.432028   52674 kubeadm.go:157] found existing configuration files:
	
	I0930 20:51:16.432069   52674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 20:51:16.442254   52674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 20:51:16.442328   52674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 20:51:16.452512   52674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 20:51:16.461908   52674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 20:51:16.461973   52674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 20:51:16.472000   52674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 20:51:16.481433   52674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 20:51:16.481508   52674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 20:51:16.491148   52674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 20:51:16.500388   52674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 20:51:16.500469   52674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 20:51:16.510111   52674 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 20:51:16.720122   52674 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 20:53:12.769769   52674 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0930 20:53:12.769898   52674 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0930 20:53:12.771412   52674 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0930 20:53:12.771511   52674 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 20:53:12.771644   52674 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 20:53:12.771789   52674 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 20:53:12.771932   52674 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0930 20:53:12.772024   52674 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 20:53:12.774895   52674 out.go:235]   - Generating certificates and keys ...
	I0930 20:53:12.775005   52674 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 20:53:12.775101   52674 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 20:53:12.775219   52674 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 20:53:12.775304   52674 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 20:53:12.775420   52674 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 20:53:12.775518   52674 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 20:53:12.775638   52674 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 20:53:12.775736   52674 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 20:53:12.775850   52674 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 20:53:12.775965   52674 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 20:53:12.776028   52674 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 20:53:12.776131   52674 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 20:53:12.776214   52674 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 20:53:12.776290   52674 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 20:53:12.776387   52674 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 20:53:12.776478   52674 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 20:53:12.776630   52674 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 20:53:12.776766   52674 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 20:53:12.776826   52674 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 20:53:12.776919   52674 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 20:53:12.778519   52674 out.go:235]   - Booting up control plane ...
	I0930 20:53:12.778640   52674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 20:53:12.778741   52674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 20:53:12.778840   52674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 20:53:12.778916   52674 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 20:53:12.779054   52674 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0930 20:53:12.779131   52674 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0930 20:53:12.779205   52674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 20:53:12.779445   52674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 20:53:12.779505   52674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 20:53:12.779742   52674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 20:53:12.779860   52674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 20:53:12.780120   52674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 20:53:12.780214   52674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 20:53:12.780395   52674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 20:53:12.780466   52674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 20:53:12.780626   52674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 20:53:12.780634   52674 kubeadm.go:310] 
	I0930 20:53:12.780668   52674 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0930 20:53:12.780746   52674 kubeadm.go:310] 		timed out waiting for the condition
	I0930 20:53:12.780766   52674 kubeadm.go:310] 
	I0930 20:53:12.780821   52674 kubeadm.go:310] 	This error is likely caused by:
	I0930 20:53:12.780876   52674 kubeadm.go:310] 		- The kubelet is not running
	I0930 20:53:12.781025   52674 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0930 20:53:12.781035   52674 kubeadm.go:310] 
	I0930 20:53:12.781200   52674 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0930 20:53:12.781275   52674 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0930 20:53:12.781322   52674 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0930 20:53:12.781334   52674 kubeadm.go:310] 
	I0930 20:53:12.781467   52674 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0930 20:53:12.781599   52674 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0930 20:53:12.781611   52674 kubeadm.go:310] 
	I0930 20:53:12.781755   52674 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0930 20:53:12.781898   52674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0930 20:53:12.782014   52674 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0930 20:53:12.782112   52674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0930 20:53:12.782192   52674 kubeadm.go:310] 
	I0930 20:53:12.782196   52674 kubeadm.go:394] duration metric: took 3m55.347919017s to StartCluster
	I0930 20:53:12.782276   52674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 20:53:12.782344   52674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 20:53:12.829148   52674 cri.go:89] found id: ""
	I0930 20:53:12.829176   52674 logs.go:276] 0 containers: []
	W0930 20:53:12.829186   52674 logs.go:278] No container was found matching "kube-apiserver"
	I0930 20:53:12.829193   52674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 20:53:12.829258   52674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 20:53:12.863625   52674 cri.go:89] found id: ""
	I0930 20:53:12.863657   52674 logs.go:276] 0 containers: []
	W0930 20:53:12.863668   52674 logs.go:278] No container was found matching "etcd"
	I0930 20:53:12.863675   52674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 20:53:12.863752   52674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 20:53:12.909344   52674 cri.go:89] found id: ""
	I0930 20:53:12.909374   52674 logs.go:276] 0 containers: []
	W0930 20:53:12.909385   52674 logs.go:278] No container was found matching "coredns"
	I0930 20:53:12.909393   52674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 20:53:12.909452   52674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 20:53:12.944103   52674 cri.go:89] found id: ""
	I0930 20:53:12.944139   52674 logs.go:276] 0 containers: []
	W0930 20:53:12.944158   52674 logs.go:278] No container was found matching "kube-scheduler"
	I0930 20:53:12.944165   52674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 20:53:12.944229   52674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 20:53:12.984910   52674 cri.go:89] found id: ""
	I0930 20:53:12.984943   52674 logs.go:276] 0 containers: []
	W0930 20:53:12.984952   52674 logs.go:278] No container was found matching "kube-proxy"
	I0930 20:53:12.984958   52674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 20:53:12.985023   52674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 20:53:13.021081   52674 cri.go:89] found id: ""
	I0930 20:53:13.021116   52674 logs.go:276] 0 containers: []
	W0930 20:53:13.021127   52674 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 20:53:13.021133   52674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 20:53:13.021194   52674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 20:53:13.058928   52674 cri.go:89] found id: ""
	I0930 20:53:13.058972   52674 logs.go:276] 0 containers: []
	W0930 20:53:13.058983   52674 logs.go:278] No container was found matching "kindnet"
	I0930 20:53:13.059002   52674 logs.go:123] Gathering logs for CRI-O ...
	I0930 20:53:13.059017   52674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 20:53:13.196380   52674 logs.go:123] Gathering logs for container status ...
	I0930 20:53:13.196422   52674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 20:53:13.245980   52674 logs.go:123] Gathering logs for kubelet ...
	I0930 20:53:13.246018   52674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 20:53:13.296992   52674 logs.go:123] Gathering logs for dmesg ...
	I0930 20:53:13.297030   52674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 20:53:13.314081   52674 logs.go:123] Gathering logs for describe nodes ...
	I0930 20:53:13.314119   52674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 20:53:13.453171   52674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0930 20:53:13.453202   52674 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0930 20:53:13.453248   52674 out.go:270] * 
	* 
	W0930 20:53:13.453313   52674 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0930 20:53:13.453332   52674 out.go:270] * 
	* 
	W0930 20:53:13.454231   52674 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 20:53:13.457652   52674 out.go:201] 
	W0930 20:53:13.459021   52674 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0930 20:53:13.459080   52674 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0930 20:53:13.459105   52674 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0930 20:53:13.460605   52674 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-810093 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
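Editor's note: the captured kubeadm output above repeatedly shows the kubelet health check on 127.0.0.1:10248 being refused, and the log's own suggestion is to inspect the kubelet and retry with an explicit cgroup driver. As a rough sketch only, assembled from the commands and flags already present in this log (profile name, driver, runtime and the suggested --extra-config are taken from the lines above and were not re-verified against this job), the manual follow-up would look like:

	# inspect why the kubelet never came up on the node
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# list any control-plane containers cri-o did manage to start
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# retry the same start, pinning the kubelet cgroup driver to systemd as the log suggests
	out/minikube-linux-amd64 start -p kubernetes-upgrade-810093 --memory=2200 \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd \
	  --driver=kvm2 --container-runtime=crio

The test itself does not run this retry; it proceeds to stop the profile and continue the upgrade path below.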
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-810093
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-810093: (1.489289331s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-810093 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-810093 status --format={{.Host}}: exit status 7 (74.668677ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-810093 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0930 20:53:28.936573   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-810093 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m1.135563071s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-810093 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-810093 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-810093 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (85.254331ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-810093] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-810093
	    minikube start -p kubernetes-upgrade-810093 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8100932 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-810093 --kubernetes-version=v1.31.1
	    

** /stderr **
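	The exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) above is the expected refusal to downgrade an existing v1.31.1 cluster in place. As a minimal sketch, not part of the test, a caller could detect that exit code and fall back to suggestion 1) from the output above, deleting and recreating the profile at the older version:

	    # Hedged sketch: attempt the downgrade, then follow suggestion 1) above
	    # when minikube refuses with exit status 106.
	    out/minikube-linux-amd64 start -p kubernetes-upgrade-810093 --memory=2200 \
	      --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio
	    status=$?
	    if [ "$status" -eq 106 ]; then
	      # K8S_DOWNGRADE_UNSUPPORTED: recreate the profile instead of downgrading in place.
	      out/minikube-linux-amd64 delete -p kubernetes-upgrade-810093
	      out/minikube-linux-amd64 start -p kubernetes-upgrade-810093 --kubernetes-version=v1.20.0
	    fi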
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-810093 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-810093 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (34.382086767s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-09-30 20:54:50.749493691 +0000 UTC m=+4611.454249815
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-810093 -n kubernetes-upgrade-810093
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-810093 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-810093 logs -n 25: (1.756656612s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-options-280515             | cert-options-280515       | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC | 30 Sep 24 20:51 UTC |
	| start   | -p force-systemd-flag-188130       | force-systemd-flag-188130 | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC | 30 Sep 24 20:52 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-592556 sudo        | NoKubernetes-592556       | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-592556             | NoKubernetes-592556       | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC | 30 Sep 24 20:51 UTC |
	| start   | -p cert-expiration-988243          | cert-expiration-988243    | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC | 30 Sep 24 20:52 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-456540          | running-upgrade-456540    | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC | 30 Sep 24 20:51 UTC |
	| start   | -p pause-617008 --memory=2048      | pause-617008              | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC | 30 Sep 24 20:53 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-188130 ssh cat  | force-systemd-flag-188130 | jenkins | v1.34.0 | 30 Sep 24 20:52 UTC | 30 Sep 24 20:52 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-188130       | force-systemd-flag-188130 | jenkins | v1.34.0 | 30 Sep 24 20:52 UTC | 30 Sep 24 20:52 UTC |
	| start   | -p auto-207733 --memory=3072       | auto-207733               | jenkins | v1.34.0 | 30 Sep 24 20:52 UTC | 30 Sep 24 20:54 UTC |
	|         | --alsologtostderr --wait=true      |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-810093       | kubernetes-upgrade-810093 | jenkins | v1.34.0 | 30 Sep 24 20:53 UTC | 30 Sep 24 20:53 UTC |
	| start   | -p kubernetes-upgrade-810093       | kubernetes-upgrade-810093 | jenkins | v1.34.0 | 30 Sep 24 20:53 UTC | 30 Sep 24 20:54 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-617008                    | pause-617008              | jenkins | v1.34.0 | 30 Sep 24 20:53 UTC | 30 Sep 24 20:54 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-810093       | kubernetes-upgrade-810093 | jenkins | v1.34.0 | 30 Sep 24 20:54 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-810093       | kubernetes-upgrade-810093 | jenkins | v1.34.0 | 30 Sep 24 20:54 UTC | 30 Sep 24 20:54 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p auto-207733 pgrep -a            | auto-207733               | jenkins | v1.34.0 | 30 Sep 24 20:54 UTC | 30 Sep 24 20:54 UTC |
	|         | kubelet                            |                           |         |         |                     |                     |
	| ssh     | -p auto-207733 sudo cat            | auto-207733               | jenkins | v1.34.0 | 30 Sep 24 20:54 UTC | 30 Sep 24 20:54 UTC |
	|         | /etc/nsswitch.conf                 |                           |         |         |                     |                     |
	| ssh     | -p auto-207733 sudo cat            | auto-207733               | jenkins | v1.34.0 | 30 Sep 24 20:54 UTC | 30 Sep 24 20:54 UTC |
	|         | /etc/hosts                         |                           |         |         |                     |                     |
	| ssh     | -p auto-207733 sudo cat            | auto-207733               | jenkins | v1.34.0 | 30 Sep 24 20:54 UTC | 30 Sep 24 20:54 UTC |
	|         | /etc/resolv.conf                   |                           |         |         |                     |                     |
	| ssh     | -p auto-207733 sudo crictl         | auto-207733               | jenkins | v1.34.0 | 30 Sep 24 20:54 UTC | 30 Sep 24 20:54 UTC |
	|         | pods                               |                           |         |         |                     |                     |
	| ssh     | -p auto-207733 sudo crictl ps      | auto-207733               | jenkins | v1.34.0 | 30 Sep 24 20:54 UTC | 30 Sep 24 20:54 UTC |
	|         | --all                              |                           |         |         |                     |                     |
	| delete  | -p pause-617008                    | pause-617008              | jenkins | v1.34.0 | 30 Sep 24 20:54 UTC | 30 Sep 24 20:54 UTC |
	| ssh     | -p auto-207733 sudo find           | auto-207733               | jenkins | v1.34.0 | 30 Sep 24 20:54 UTC | 30 Sep 24 20:54 UTC |
	|         | /etc/cni -type f -exec sh -c       |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;               |                           |         |         |                     |                     |
	| ssh     | -p auto-207733 sudo ip a s         | auto-207733               | jenkins | v1.34.0 | 30 Sep 24 20:54 UTC |                     |
	| start   | -p kindnet-207733                  | kindnet-207733            | jenkins | v1.34.0 | 30 Sep 24 20:54 UTC |                     |
	|         | --memory=3072                      |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true      |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                 |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2        |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 20:54:51
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 20:54:51.191788   59421 out.go:345] Setting OutFile to fd 1 ...
	I0930 20:54:51.191976   59421 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:54:51.192011   59421 out.go:358] Setting ErrFile to fd 2...
	I0930 20:54:51.192028   59421 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:54:51.192221   59421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 20:54:51.192783   59421 out.go:352] Setting JSON to false
	I0930 20:54:51.193876   59421 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5834,"bootTime":1727723857,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 20:54:51.193963   59421 start.go:139] virtualization: kvm guest
	I0930 20:54:51.196083   59421 out.go:177] * [kindnet-207733] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 20:54:51.197499   59421 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 20:54:51.197512   59421 notify.go:220] Checking for updates...
	I0930 20:54:51.200050   59421 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 20:54:51.201361   59421 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:54:51.202483   59421 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:54:51.203687   59421 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 20:54:51.204805   59421 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 20:54:51.206706   59421 config.go:182] Loaded profile config "auto-207733": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:54:51.206835   59421 config.go:182] Loaded profile config "cert-expiration-988243": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:54:51.206960   59421 config.go:182] Loaded profile config "kubernetes-upgrade-810093": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:54:51.207068   59421 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 20:54:51.254828   59421 out.go:177] * Using the kvm2 driver based on user configuration
	
	
	==> CRI-O <==
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.751840465Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d92b2d50-ead4-440d-9be7-a963ba098372 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.754662546Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68729499-fe2c-4179-8164-e824a49aae68 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.755189761Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727729691755161436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68729499-fe2c-4179-8164-e824a49aae68 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.756235554Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4bb3c381-7673-4f31-95c7-cf0b04d89d42 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.756334854Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4bb3c381-7673-4f31-95c7-cf0b04d89d42 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.756652202Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1227b08909e6c16d26b96aa632e46329c54c0e7632450797896d303b6c2fbfe5,PodSandboxId:d11e13369eb27e1666a2a9de7093b718f71e5266f046655bdd493e13d223f318,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727729688192550523,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pqlrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af352ae5-8860-4b35-9088-6f99aeb0db9d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acfbb51c55d06761c975af0cdc9354c8f2117fa22de609480d89b0e1fdff2659,PodSandboxId:590926c7c6c82d72cfec692432f313ec8641ec2d5db800f580b3a13a28feeeaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727729688147577994,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a33c33cb-48f8-4fa9-a8dd-95a6451bcd3c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c67e2828d2cef6c6b9197c0c244f7f9275034c6d9b30c7479dd785b16630272,PodSandboxId:35fd41de47728e4f562764b1812c5f2b28f2f508b44c217de15188a8af26f4ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727729688137449603,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cvg77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d76f3e61-2af8-4423-b25d-49eff5be6eca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d71c6d7a729d8912afb07b060eabbf855c5550fb3b7a33a56130972e44bfb22,PodSandboxId:153d3f5051ee4e5adedac7201ac8868aa55e290f3486e7908290a4e09644c362,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727729688127068274,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zbds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2361fec9-d03c-4e9f-9d14-36
dbf0570f9c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ab40e16aaa5b7de76fc8fbedd303f0e7a71cf4fd17742ee8e5e5dca9d3b340,PodSandboxId:fc7f35083e8c2eb9eb159288bc0f3870f4e429f9aa7e717fc27d5395e96af5fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727729684311654470,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f1bc0dbecf7c792ed8c8ec3c18fc70,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f88b10b033fc76252895b64cb10334e3458f59b362f4b51c26311b5d3f38496,PodSandboxId:7a6e549d59e8d73b7115363ff4c00d39a064230ea11d525d776441d8424726e4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727729684322768
678,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af905448cbe4d30a92601b16cf788ce1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd82d2044f507e83e7eacccce870673b3432ce2d96e92bc161cc4f1ccb0c8aaa,PodSandboxId:a51d8d87b9c2d8a47c4a4eb2fd86b49ca2ade90047ee091cc78e4c77142910c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727729684292
385780,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b504e2dca6457ee727177271c36cfed5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:337d11ebb704a8c584300a10c8fd8cb87ddf6e4dbd1208706e6a4c3beceb7507,PodSandboxId:0b7ad6ec595c3efc6d90b5c45675d594d01a1e46056d2430807afb400f694682,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727729684280890954,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b752add53ea2542f088c77bdf747b8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c50a6855e5a06b82abda38e7eee5fddd157ea694fed5ff2e3ae6a698d2e775aa,PodSandboxId:35fd41de47728e4f562764b1812c5f2b28f2f508b44c217de15188a8af26f4ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727729671933395017,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cvg77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d76f3e61-2af8-4423-b25d-49eff5be6eca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:001a67edfd9f5a38b0b3e7f99a5d666ab33b2bdeaa3c2dea2d16622669526006,PodSandboxId:67c65fe44c51433d8551df7b3ebf68b66500e853fc8d2acd41a06a9150216ba0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727729669617282563,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zbds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2361fec9-d03c-4e9f-9d14-36dbf0570f9c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fbaf5340219dd96a03e65f0db4a8f2425f3922473b49a407d6261c8b3081d6c,PodSandboxId:83857159f6c07b66649e5fc024327d5ac8e4f7649046ea
dd2975cc4154678c25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727729668756348002,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pqlrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af352ae5-8860-4b35-9088-6f99aeb0db9d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67efcceedcc1938ff6f66b0ea6cc17e34bda9faafb53db30f477c06b6e67ce7e,PodSandboxId:a7c3af13eedf01b0b4bdb9308d81add4d59c422e39fb70d4d2a28d46741b1a3f,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727729668591910423,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b504e2dca6457ee727177271c36cfed5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a55eb55be7941855076f7edfd053f259d99b7260e868cee65a9ba01a4c35171,PodSandboxId:7553f301e918e68ad94df8ff530a4831865d63f62142240f824a48245f6b6df2,Metadata:&ContainerMetadata{Name:kube-controller-ma
nager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727729668583978174,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af905448cbe4d30a92601b16cf788ce1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5f2f3caa6596bdeec851c98c42549151d6a8fa20b96b21a5a850b7c0e5424c,PodSandboxId:92e1fdee43b11ee58fa055370f8fbeb25a5f5fb70d0328b55ae1895fb137af96,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727729668431487495,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f1bc0dbecf7c792ed8c8ec3c18fc70,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b10de2fc402b0e70a6a180c0350151ea8f75757a4a692274cbf1cc58d95e9b9,PodSandboxId:3445607a77cce915e1f9b2a695235204fe485ac3359ac534d3ca76d2bf3a0062,Metadata:&ContainerMetadata{Name:kub
e-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727729668313931477,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b752add53ea2542f088c77bdf747b8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebecc3e71f4007d0365b4c87543b72b9d2c5f5ba7d29835a0ce6761e88986df4,PodSandboxId:e2e2e66615452535801d2fcf104f00f7703b867015b5e6ebdf59264594270148,Metadata:&ContainerMetadata{Name:storage-p
rovisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727729668110972746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a33c33cb-48f8-4fa9-a8dd-95a6451bcd3c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4bb3c381-7673-4f31-95c7-cf0b04d89d42 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.799595951Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d30e23c-a089-4a2c-8c54-8f03403f4e80 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.799690694Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d30e23c-a089-4a2c-8c54-8f03403f4e80 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.801722224Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80f339c5-2ea8-4ca8-ba2b-065236542381 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.802167437Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727729691802137418,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80f339c5-2ea8-4ca8-ba2b-065236542381 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.802716473Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cddc0520-f3ae-42c2-be65-80836e62a7a2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.802771213Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cddc0520-f3ae-42c2-be65-80836e62a7a2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.803222069Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1227b08909e6c16d26b96aa632e46329c54c0e7632450797896d303b6c2fbfe5,PodSandboxId:d11e13369eb27e1666a2a9de7093b718f71e5266f046655bdd493e13d223f318,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727729688192550523,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pqlrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af352ae5-8860-4b35-9088-6f99aeb0db9d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acfbb51c55d06761c975af0cdc9354c8f2117fa22de609480d89b0e1fdff2659,PodSandboxId:590926c7c6c82d72cfec692432f313ec8641ec2d5db800f580b3a13a28feeeaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727729688147577994,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a33c33cb-48f8-4fa9-a8dd-95a6451bcd3c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c67e2828d2cef6c6b9197c0c244f7f9275034c6d9b30c7479dd785b16630272,PodSandboxId:35fd41de47728e4f562764b1812c5f2b28f2f508b44c217de15188a8af26f4ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727729688137449603,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cvg77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d76f3e61-2af8-4423-b25d-49eff5be6eca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d71c6d7a729d8912afb07b060eabbf855c5550fb3b7a33a56130972e44bfb22,PodSandboxId:153d3f5051ee4e5adedac7201ac8868aa55e290f3486e7908290a4e09644c362,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727729688127068274,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zbds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2361fec9-d03c-4e9f-9d14-36
dbf0570f9c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ab40e16aaa5b7de76fc8fbedd303f0e7a71cf4fd17742ee8e5e5dca9d3b340,PodSandboxId:fc7f35083e8c2eb9eb159288bc0f3870f4e429f9aa7e717fc27d5395e96af5fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727729684311654470,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f1bc0dbecf7c792ed8c8ec3c18fc70,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f88b10b033fc76252895b64cb10334e3458f59b362f4b51c26311b5d3f38496,PodSandboxId:7a6e549d59e8d73b7115363ff4c00d39a064230ea11d525d776441d8424726e4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727729684322768
678,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af905448cbe4d30a92601b16cf788ce1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd82d2044f507e83e7eacccce870673b3432ce2d96e92bc161cc4f1ccb0c8aaa,PodSandboxId:a51d8d87b9c2d8a47c4a4eb2fd86b49ca2ade90047ee091cc78e4c77142910c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727729684292
385780,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b504e2dca6457ee727177271c36cfed5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:337d11ebb704a8c584300a10c8fd8cb87ddf6e4dbd1208706e6a4c3beceb7507,PodSandboxId:0b7ad6ec595c3efc6d90b5c45675d594d01a1e46056d2430807afb400f694682,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727729684280890954,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b752add53ea2542f088c77bdf747b8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c50a6855e5a06b82abda38e7eee5fddd157ea694fed5ff2e3ae6a698d2e775aa,PodSandboxId:35fd41de47728e4f562764b1812c5f2b28f2f508b44c217de15188a8af26f4ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727729671933395017,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cvg77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d76f3e61-2af8-4423-b25d-49eff5be6eca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:001a67edfd9f5a38b0b3e7f99a5d666ab33b2bdeaa3c2dea2d16622669526006,PodSandboxId:67c65fe44c51433d8551df7b3ebf68b66500e853fc8d2acd41a06a9150216ba0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727729669617282563,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zbds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2361fec9-d03c-4e9f-9d14-36dbf0570f9c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fbaf5340219dd96a03e65f0db4a8f2425f3922473b49a407d6261c8b3081d6c,PodSandboxId:83857159f6c07b66649e5fc024327d5ac8e4f7649046ea
dd2975cc4154678c25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727729668756348002,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pqlrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af352ae5-8860-4b35-9088-6f99aeb0db9d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67efcceedcc1938ff6f66b0ea6cc17e34bda9faafb53db30f477c06b6e67ce7e,PodSandboxId:a7c3af13eedf01b0b4bdb9308d81add4d59c422e39fb70d4d2a28d46741b1a3f,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727729668591910423,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b504e2dca6457ee727177271c36cfed5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a55eb55be7941855076f7edfd053f259d99b7260e868cee65a9ba01a4c35171,PodSandboxId:7553f301e918e68ad94df8ff530a4831865d63f62142240f824a48245f6b6df2,Metadata:&ContainerMetadata{Name:kube-controller-ma
nager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727729668583978174,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af905448cbe4d30a92601b16cf788ce1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5f2f3caa6596bdeec851c98c42549151d6a8fa20b96b21a5a850b7c0e5424c,PodSandboxId:92e1fdee43b11ee58fa055370f8fbeb25a5f5fb70d0328b55ae1895fb137af96,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727729668431487495,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f1bc0dbecf7c792ed8c8ec3c18fc70,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b10de2fc402b0e70a6a180c0350151ea8f75757a4a692274cbf1cc58d95e9b9,PodSandboxId:3445607a77cce915e1f9b2a695235204fe485ac3359ac534d3ca76d2bf3a0062,Metadata:&ContainerMetadata{Name:kub
e-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727729668313931477,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b752add53ea2542f088c77bdf747b8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebecc3e71f4007d0365b4c87543b72b9d2c5f5ba7d29835a0ce6761e88986df4,PodSandboxId:e2e2e66615452535801d2fcf104f00f7703b867015b5e6ebdf59264594270148,Metadata:&ContainerMetadata{Name:storage-p
rovisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727729668110972746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a33c33cb-48f8-4fa9-a8dd-95a6451bcd3c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cddc0520-f3ae-42c2-be65-80836e62a7a2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.844693402Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1beca205-9161-4e8f-bb31-e708c6a541bd name=/runtime.v1.RuntimeService/Version
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.844768640Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1beca205-9161-4e8f-bb31-e708c6a541bd name=/runtime.v1.RuntimeService/Version
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.846357066Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=341b04d8-2804-4aa8-9472-9f8377c47a42 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.846871175Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727729691846828934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=341b04d8-2804-4aa8-9472-9f8377c47a42 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.847576614Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=38069505-79ce-4266-8f40-a74d7275c0fa name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.847660642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=38069505-79ce-4266-8f40-a74d7275c0fa name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.848217640Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1227b08909e6c16d26b96aa632e46329c54c0e7632450797896d303b6c2fbfe5,PodSandboxId:d11e13369eb27e1666a2a9de7093b718f71e5266f046655bdd493e13d223f318,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727729688192550523,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pqlrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af352ae5-8860-4b35-9088-6f99aeb0db9d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acfbb51c55d06761c975af0cdc9354c8f2117fa22de609480d89b0e1fdff2659,PodSandboxId:590926c7c6c82d72cfec692432f313ec8641ec2d5db800f580b3a13a28feeeaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727729688147577994,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a33c33cb-48f8-4fa9-a8dd-95a6451bcd3c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c67e2828d2cef6c6b9197c0c244f7f9275034c6d9b30c7479dd785b16630272,PodSandboxId:35fd41de47728e4f562764b1812c5f2b28f2f508b44c217de15188a8af26f4ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727729688137449603,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cvg77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d76f3e61-2af8-4423-b25d-49eff5be6eca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d71c6d7a729d8912afb07b060eabbf855c5550fb3b7a33a56130972e44bfb22,PodSandboxId:153d3f5051ee4e5adedac7201ac8868aa55e290f3486e7908290a4e09644c362,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727729688127068274,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zbds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2361fec9-d03c-4e9f-9d14-36
dbf0570f9c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ab40e16aaa5b7de76fc8fbedd303f0e7a71cf4fd17742ee8e5e5dca9d3b340,PodSandboxId:fc7f35083e8c2eb9eb159288bc0f3870f4e429f9aa7e717fc27d5395e96af5fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727729684311654470,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f1bc0dbecf7c792ed8c8ec3c18fc70,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f88b10b033fc76252895b64cb10334e3458f59b362f4b51c26311b5d3f38496,PodSandboxId:7a6e549d59e8d73b7115363ff4c00d39a064230ea11d525d776441d8424726e4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727729684322768
678,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af905448cbe4d30a92601b16cf788ce1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd82d2044f507e83e7eacccce870673b3432ce2d96e92bc161cc4f1ccb0c8aaa,PodSandboxId:a51d8d87b9c2d8a47c4a4eb2fd86b49ca2ade90047ee091cc78e4c77142910c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727729684292
385780,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b504e2dca6457ee727177271c36cfed5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:337d11ebb704a8c584300a10c8fd8cb87ddf6e4dbd1208706e6a4c3beceb7507,PodSandboxId:0b7ad6ec595c3efc6d90b5c45675d594d01a1e46056d2430807afb400f694682,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727729684280890954,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b752add53ea2542f088c77bdf747b8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c50a6855e5a06b82abda38e7eee5fddd157ea694fed5ff2e3ae6a698d2e775aa,PodSandboxId:35fd41de47728e4f562764b1812c5f2b28f2f508b44c217de15188a8af26f4ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727729671933395017,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cvg77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d76f3e61-2af8-4423-b25d-49eff5be6eca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:001a67edfd9f5a38b0b3e7f99a5d666ab33b2bdeaa3c2dea2d16622669526006,PodSandboxId:67c65fe44c51433d8551df7b3ebf68b66500e853fc8d2acd41a06a9150216ba0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727729669617282563,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zbds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2361fec9-d03c-4e9f-9d14-36dbf0570f9c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fbaf5340219dd96a03e65f0db4a8f2425f3922473b49a407d6261c8b3081d6c,PodSandboxId:83857159f6c07b66649e5fc024327d5ac8e4f7649046ea
dd2975cc4154678c25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727729668756348002,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pqlrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af352ae5-8860-4b35-9088-6f99aeb0db9d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67efcceedcc1938ff6f66b0ea6cc17e34bda9faafb53db30f477c06b6e67ce7e,PodSandboxId:a7c3af13eedf01b0b4bdb9308d81add4d59c422e39fb70d4d2a28d46741b1a3f,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727729668591910423,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b504e2dca6457ee727177271c36cfed5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a55eb55be7941855076f7edfd053f259d99b7260e868cee65a9ba01a4c35171,PodSandboxId:7553f301e918e68ad94df8ff530a4831865d63f62142240f824a48245f6b6df2,Metadata:&ContainerMetadata{Name:kube-controller-ma
nager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727729668583978174,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af905448cbe4d30a92601b16cf788ce1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5f2f3caa6596bdeec851c98c42549151d6a8fa20b96b21a5a850b7c0e5424c,PodSandboxId:92e1fdee43b11ee58fa055370f8fbeb25a5f5fb70d0328b55ae1895fb137af96,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727729668431487495,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f1bc0dbecf7c792ed8c8ec3c18fc70,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b10de2fc402b0e70a6a180c0350151ea8f75757a4a692274cbf1cc58d95e9b9,PodSandboxId:3445607a77cce915e1f9b2a695235204fe485ac3359ac534d3ca76d2bf3a0062,Metadata:&ContainerMetadata{Name:kub
e-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727729668313931477,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b752add53ea2542f088c77bdf747b8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebecc3e71f4007d0365b4c87543b72b9d2c5f5ba7d29835a0ce6761e88986df4,PodSandboxId:e2e2e66615452535801d2fcf104f00f7703b867015b5e6ebdf59264594270148,Metadata:&ContainerMetadata{Name:storage-p
rovisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727729668110972746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a33c33cb-48f8-4fa9-a8dd-95a6451bcd3c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=38069505-79ce-4266-8f40-a74d7275c0fa name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.861436936Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58b28b88-689b-4012-adfd-dfdad6aa60d7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.861649347Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:35fd41de47728e4f562764b1812c5f2b28f2f508b44c217de15188a8af26f4ca,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-cvg77,Uid:d76f3e61-2af8-4423-b25d-49eff5be6eca,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727729671561574852,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-cvg77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d76f3e61-2af8-4423-b25d-49eff5be6eca,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T20:54:14.719227593Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:153d3f5051ee4e5adedac7201ac8868aa55e290f3486e7908290a4e09644c362,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-zbds7,Uid:2361fec9-d03c-4e9f-9d14-36dbf0570f9c,Namespac
e:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727729671542548112,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-zbds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2361fec9-d03c-4e9f-9d14-36dbf0570f9c,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T20:54:14.748865956Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0b7ad6ec595c3efc6d90b5c45675d594d01a1e46056d2430807afb400f694682,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-810093,Uid:8b752add53ea2542f088c77bdf747b8d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727729671285685432,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b752add53ea2542f088c77bdf747b8d,tier: control-plane,},Ann
otations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.233:8443,kubernetes.io/config.hash: 8b752add53ea2542f088c77bdf747b8d,kubernetes.io/config.seen: 2024-09-30T20:53:59.525382744Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:590926c7c6c82d72cfec692432f313ec8641ec2d5db800f580b3a13a28feeeaf,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a33c33cb-48f8-4fa9-a8dd-95a6451bcd3c,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727729671202176576,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a33c33cb-48f8-4fa9-a8dd-95a6451bcd3c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/
mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-30T20:54:16.132900544Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d11e13369eb27e1666a2a9de7093b718f71e5266f046655bdd493e13d223f318,Metadata:&PodSandboxMetadata{Name:kube-proxy-pqlrj,Uid:af352ae5-8860-4b35-9088-6f99aeb0db9d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727729671128657631,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes
.pod.name: kube-proxy-pqlrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af352ae5-8860-4b35-9088-6f99aeb0db9d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T20:54:14.808847827Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fc7f35083e8c2eb9eb159288bc0f3870f4e429f9aa7e717fc27d5395e96af5fd,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-810093,Uid:58f1bc0dbecf7c792ed8c8ec3c18fc70,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727729671098115765,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f1bc0dbecf7c792ed8c8ec3c18fc70,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 58f1bc0dbecf7c792ed8c8ec3c18fc70,kubernetes.io/config.seen: 2024-09-30T20:53:59.525378941Z,kubernetes.i
o/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7a6e549d59e8d73b7115363ff4c00d39a064230ea11d525d776441d8424726e4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-810093,Uid:af905448cbe4d30a92601b16cf788ce1,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727729671078845874,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af905448cbe4d30a92601b16cf788ce1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: af905448cbe4d30a92601b16cf788ce1,kubernetes.io/config.seen: 2024-09-30T20:53:59.525384027Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a51d8d87b9c2d8a47c4a4eb2fd86b49ca2ade90047ee091cc78e4c77142910c7,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-810093,Uid:b504e2dca6457ee727177271c36cfed5,Namespace:kube-system,Atte
mpt:2,},State:SANDBOX_READY,CreatedAt:1727729671065403734,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b504e2dca6457ee727177271c36cfed5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.233:2379,kubernetes.io/config.hash: b504e2dca6457ee727177271c36cfed5,kubernetes.io/config.seen: 2024-09-30T20:53:59.568742858Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=58b28b88-689b-4012-adfd-dfdad6aa60d7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.862589345Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=814c6d1b-fc61-4f2d-a370-099f6dad0a71 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.862693570Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=814c6d1b-fc61-4f2d-a370-099f6dad0a71 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:51 kubernetes-upgrade-810093 crio[3026]: time="2024-09-30 20:54:51.862967713Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1227b08909e6c16d26b96aa632e46329c54c0e7632450797896d303b6c2fbfe5,PodSandboxId:d11e13369eb27e1666a2a9de7093b718f71e5266f046655bdd493e13d223f318,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727729688192550523,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pqlrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af352ae5-8860-4b35-9088-6f99aeb0db9d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acfbb51c55d06761c975af0cdc9354c8f2117fa22de609480d89b0e1fdff2659,PodSandboxId:590926c7c6c82d72cfec692432f313ec8641ec2d5db800f580b3a13a28feeeaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727729688147577994,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a33c33cb-48f8-4fa9-a8dd-95a6451bcd3c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c67e2828d2cef6c6b9197c0c244f7f9275034c6d9b30c7479dd785b16630272,PodSandboxId:35fd41de47728e4f562764b1812c5f2b28f2f508b44c217de15188a8af26f4ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727729688137449603,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cvg77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d76f3e61-2af8-4423-b25d-49eff5be6eca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d71c6d7a729d8912afb07b060eabbf855c5550fb3b7a33a56130972e44bfb22,PodSandboxId:153d3f5051ee4e5adedac7201ac8868aa55e290f3486e7908290a4e09644c362,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727729688127068274,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zbds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2361fec9-d03c-4e9f-9d14-36
dbf0570f9c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ab40e16aaa5b7de76fc8fbedd303f0e7a71cf4fd17742ee8e5e5dca9d3b340,PodSandboxId:fc7f35083e8c2eb9eb159288bc0f3870f4e429f9aa7e717fc27d5395e96af5fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727729684311654470,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f1bc0dbecf7c792ed8c8ec3c18fc70,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f88b10b033fc76252895b64cb10334e3458f59b362f4b51c26311b5d3f38496,PodSandboxId:7a6e549d59e8d73b7115363ff4c00d39a064230ea11d525d776441d8424726e4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727729684322768
678,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af905448cbe4d30a92601b16cf788ce1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd82d2044f507e83e7eacccce870673b3432ce2d96e92bc161cc4f1ccb0c8aaa,PodSandboxId:a51d8d87b9c2d8a47c4a4eb2fd86b49ca2ade90047ee091cc78e4c77142910c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727729684292
385780,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b504e2dca6457ee727177271c36cfed5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:337d11ebb704a8c584300a10c8fd8cb87ddf6e4dbd1208706e6a4c3beceb7507,PodSandboxId:0b7ad6ec595c3efc6d90b5c45675d594d01a1e46056d2430807afb400f694682,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727729684280890954,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-810093,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b752add53ea2542f088c77bdf747b8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=814c6d1b-fc61-4f2d-a370-099f6dad0a71 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1227b08909e6c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   3 seconds ago       Running             kube-proxy                2                   d11e13369eb27       kube-proxy-pqlrj
	acfbb51c55d06       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       2                   590926c7c6c82       storage-provisioner
	0c67e2828d2ce       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   35fd41de47728       coredns-7c65d6cfc9-cvg77
	3d71c6d7a729d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   153d3f5051ee4       coredns-7c65d6cfc9-zbds7
	9f88b10b033fc       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   7 seconds ago       Running             kube-controller-manager   2                   7a6e549d59e8d       kube-controller-manager-kubernetes-upgrade-810093
	a7ab40e16aaa5       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   7 seconds ago       Running             kube-scheduler            2                   fc7f35083e8c2       kube-scheduler-kubernetes-upgrade-810093
	dd82d2044f507       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago       Running             etcd                      2                   a51d8d87b9c2d       etcd-kubernetes-upgrade-810093
	337d11ebb704a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   7 seconds ago       Running             kube-apiserver            2                   0b7ad6ec595c3       kube-apiserver-kubernetes-upgrade-810093
	c50a6855e5a06       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   20 seconds ago      Exited              coredns                   1                   35fd41de47728       coredns-7c65d6cfc9-cvg77
	001a67edfd9f5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   22 seconds ago      Exited              coredns                   1                   67c65fe44c514       coredns-7c65d6cfc9-zbds7
	5fbaf5340219d       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   23 seconds ago      Exited              kube-proxy                1                   83857159f6c07       kube-proxy-pqlrj
	67efcceedcc19       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   23 seconds ago      Exited              etcd                      1                   a7c3af13eedf0       etcd-kubernetes-upgrade-810093
	8a55eb55be794       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   23 seconds ago      Exited              kube-controller-manager   1                   7553f301e918e       kube-controller-manager-kubernetes-upgrade-810093
	7a5f2f3caa659       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   23 seconds ago      Exited              kube-scheduler            1                   92e1fdee43b11       kube-scheduler-kubernetes-upgrade-810093
	7b10de2fc402b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   23 seconds ago      Exited              kube-apiserver            1                   3445607a77cce       kube-apiserver-kubernetes-upgrade-810093
	ebecc3e71f400       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   23 seconds ago      Exited              storage-provisioner       1                   e2e2e66615452       storage-provisioner
	
	
	==> coredns [001a67edfd9f5a38b0b3e7f99a5d666ab33b2bdeaa3c2dea2d16622669526006] <==
	
	
	==> coredns [0c67e2828d2cef6c6b9197c0c244f7f9275034c6d9b30c7479dd785b16630272] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [3d71c6d7a729d8912afb07b060eabbf855c5550fb3b7a33a56130972e44bfb22] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [c50a6855e5a06b82abda38e7eee5fddd157ea694fed5ff2e3ae6a698d2e775aa] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-810093
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-810093
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:54:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-810093
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:54:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:54:47 +0000   Mon, 30 Sep 2024 20:54:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:54:47 +0000   Mon, 30 Sep 2024 20:54:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:54:47 +0000   Mon, 30 Sep 2024 20:54:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:54:47 +0000   Mon, 30 Sep 2024 20:54:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.233
	  Hostname:    kubernetes-upgrade-810093
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4b472a0a66c24c798b7307d15feb9971
	  System UUID:                4b472a0a-66c2-4c79-8b73-07d15feb9971
	  Boot ID:                    c757f6ed-3718-4036-b7f1-7d9c9e337b57
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-cvg77                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     38s
	  kube-system                 coredns-7c65d6cfc9-zbds7                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     38s
	  kube-system                 etcd-kubernetes-upgrade-810093                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         39s
	  kube-system                 kube-apiserver-kubernetes-upgrade-810093             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-810093    200m (10%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-pqlrj                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-scheduler-kubernetes-upgrade-810093             100m (5%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 36s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  50s (x8 over 53s)  kubelet          Node kubernetes-upgrade-810093 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    50s (x8 over 53s)  kubelet          Node kubernetes-upgrade-810093 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s (x7 over 53s)  kubelet          Node kubernetes-upgrade-810093 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                node-controller  Node kubernetes-upgrade-810093 event: Registered Node kubernetes-upgrade-810093 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-810093 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-810093 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet          Node kubernetes-upgrade-810093 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                 node-controller  Node kubernetes-upgrade-810093 event: Registered Node kubernetes-upgrade-810093 in Controller
	
	
	==> dmesg <==
	[  +1.552901] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.416666] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.063015] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057454] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.214525] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.145832] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.327146] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +3.854182] systemd-fstab-generator[718]: Ignoring "noauto" option for root device
	[  +1.947561] systemd-fstab-generator[841]: Ignoring "noauto" option for root device
	[  +0.073180] kauditd_printk_skb: 158 callbacks suppressed
	[Sep30 20:54] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.190063] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[ +12.948574] kauditd_printk_skb: 99 callbacks suppressed
	[  +0.068030] systemd-fstab-generator[2198]: Ignoring "noauto" option for root device
	[  +0.244528] systemd-fstab-generator[2297]: Ignoring "noauto" option for root device
	[  +0.340654] systemd-fstab-generator[2462]: Ignoring "noauto" option for root device
	[  +0.322924] systemd-fstab-generator[2583]: Ignoring "noauto" option for root device
	[  +0.656416] systemd-fstab-generator[2870]: Ignoring "noauto" option for root device
	[  +1.272051] systemd-fstab-generator[3222]: Ignoring "noauto" option for root device
	[ +12.865902] systemd-fstab-generator[3876]: Ignoring "noauto" option for root device
	[  +0.094531] kauditd_printk_skb: 301 callbacks suppressed
	[  +5.168899] kauditd_printk_skb: 62 callbacks suppressed
	[  +0.526008] systemd-fstab-generator[4412]: Ignoring "noauto" option for root device
	
	
	==> etcd [67efcceedcc1938ff6f66b0ea6cc17e34bda9faafb53db30f477c06b6e67ce7e] <==
	{"level":"warn","ts":"2024-09-30T20:54:29.551329Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-09-30T20:54:29.566323Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.233:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.39.233:2380","--initial-cluster=kubernetes-upgrade-810093=https://192.168.39.233:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.233:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.233:2380","--name=kubernetes-upgrade-810093","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--sna
pshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-09-30T20:54:29.566475Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-09-30T20:54:29.566506Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-09-30T20:54:29.566517Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.233:2380"]}
	{"level":"info","ts":"2024-09-30T20:54:29.566566Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-30T20:54:29.603083Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.233:2379"]}
	{"level":"info","ts":"2024-09-30T20:54:29.603254Z","caller":"embed/etcd.go:310","msg":"starting an etcd server","etcd-version":"3.5.15","git-sha":"9a5533382","go-version":"go1.21.12","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"kubernetes-upgrade-810093","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.233:2380"],"listen-peer-urls":["https://192.168.39.233:2380"],"advertise-client-urls":["https://192.168.39.233:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.233:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new
","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-09-30T20:54:29.640905Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"37.357535ms"}
	
	
	==> etcd [dd82d2044f507e83e7eacccce870673b3432ce2d96e92bc161cc4f1ccb0c8aaa] <==
	{"level":"info","ts":"2024-09-30T20:54:44.819867Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"30d9b598be045872","local-member-id":"678e262213f11973","added-peer-id":"678e262213f11973","added-peer-peer-urls":["https://192.168.39.233:2380"]}
	{"level":"info","ts":"2024-09-30T20:54:44.823099Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"30d9b598be045872","local-member-id":"678e262213f11973","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T20:54:44.823209Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T20:54:44.829773Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T20:54:44.831625Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-30T20:54:44.837551Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"678e262213f11973","initial-advertise-peer-urls":["https://192.168.39.233:2380"],"listen-peer-urls":["https://192.168.39.233:2380"],"advertise-client-urls":["https://192.168.39.233:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.233:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-30T20:54:44.837627Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-30T20:54:44.837686Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.233:2380"}
	{"level":"info","ts":"2024-09-30T20:54:44.837709Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.233:2380"}
	{"level":"info","ts":"2024-09-30T20:54:46.070207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-30T20:54:46.070325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-30T20:54:46.070374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 received MsgPreVoteResp from 678e262213f11973 at term 2"}
	{"level":"info","ts":"2024-09-30T20:54:46.070441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 became candidate at term 3"}
	{"level":"info","ts":"2024-09-30T20:54:46.070468Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 received MsgVoteResp from 678e262213f11973 at term 3"}
	{"level":"info","ts":"2024-09-30T20:54:46.070496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 became leader at term 3"}
	{"level":"info","ts":"2024-09-30T20:54:46.070521Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 678e262213f11973 elected leader 678e262213f11973 at term 3"}
	{"level":"info","ts":"2024-09-30T20:54:46.076686Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"678e262213f11973","local-member-attributes":"{Name:kubernetes-upgrade-810093 ClientURLs:[https://192.168.39.233:2379]}","request-path":"/0/members/678e262213f11973/attributes","cluster-id":"30d9b598be045872","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T20:54:46.076702Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T20:54:46.076984Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T20:54:46.077074Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-30T20:54:46.076766Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T20:54:46.078558Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T20:54:46.079796Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.233:2379"}
	{"level":"info","ts":"2024-09-30T20:54:46.078555Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T20:54:46.080760Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:54:52 up 1 min,  0 users,  load average: 2.71, 0.74, 0.25
	Linux kubernetes-upgrade-810093 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [337d11ebb704a8c584300a10c8fd8cb87ddf6e4dbd1208706e6a4c3beceb7507] <==
	I0930 20:54:47.559667       1 shared_informer.go:320] Caches are synced for configmaps
	I0930 20:54:47.561279       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0930 20:54:47.564755       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0930 20:54:47.564852       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0930 20:54:47.564872       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0930 20:54:47.565149       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0930 20:54:47.565470       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0930 20:54:47.583490       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0930 20:54:47.583528       1 policy_source.go:224] refreshing policies
	I0930 20:54:47.584922       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0930 20:54:47.585523       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0930 20:54:47.597297       1 aggregator.go:171] initial CRD sync complete...
	I0930 20:54:47.597325       1 autoregister_controller.go:144] Starting autoregister controller
	I0930 20:54:47.597333       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0930 20:54:47.597339       1 cache.go:39] Caches are synced for autoregister controller
	I0930 20:54:47.606780       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0930 20:54:47.659633       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0930 20:54:48.473776       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0930 20:54:49.100936       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0930 20:54:49.136412       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0930 20:54:49.184370       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0930 20:54:49.238980       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0930 20:54:49.249096       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0930 20:54:51.235704       1 controller.go:615] quota admission added evaluator for: endpoints
	I0930 20:54:51.286670       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [7b10de2fc402b0e70a6a180c0350151ea8f75757a4a692274cbf1cc58d95e9b9] <==
	I0930 20:54:29.067341       1 options.go:228] external host was not specified, using 192.168.39.233
	I0930 20:54:29.070912       1 server.go:142] Version: v1.31.1
	I0930 20:54:29.070958       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0930 20:54:29.856759       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:29.856887       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0930 20:54:29.856939       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0930 20:54:29.877515       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0930 20:54:29.887795       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0930 20:54:29.887829       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0930 20:54:29.893176       1 instance.go:232] Using reconciler: lease
	W0930 20:54:29.895839       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [8a55eb55be7941855076f7edfd053f259d99b7260e868cee65a9ba01a4c35171] <==
	
	
	==> kube-controller-manager [9f88b10b033fc76252895b64cb10334e3458f59b362f4b51c26311b5d3f38496] <==
	I0930 20:54:50.832774       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-810093"
	I0930 20:54:50.832809       1 shared_informer.go:320] Caches are synced for ephemeral
	I0930 20:54:50.833125       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0930 20:54:50.834313       1 shared_informer.go:320] Caches are synced for deployment
	I0930 20:54:50.838462       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0930 20:54:50.842177       1 shared_informer.go:320] Caches are synced for cronjob
	I0930 20:54:50.844436       1 shared_informer.go:320] Caches are synced for disruption
	I0930 20:54:50.851114       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0930 20:54:50.858785       1 shared_informer.go:320] Caches are synced for stateful set
	I0930 20:54:50.859024       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0930 20:54:50.861104       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0930 20:54:50.862199       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0930 20:54:50.948339       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="178.755781ms"
	I0930 20:54:50.948607       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="79.348µs"
	I0930 20:54:50.956931       1 shared_informer.go:320] Caches are synced for namespace
	I0930 20:54:51.007113       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0930 20:54:51.008416       1 shared_informer.go:320] Caches are synced for resource quota
	I0930 20:54:51.024233       1 shared_informer.go:320] Caches are synced for endpoint
	I0930 20:54:51.030070       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0930 20:54:51.030187       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-810093"
	I0930 20:54:51.033630       1 shared_informer.go:320] Caches are synced for service account
	I0930 20:54:51.040246       1 shared_informer.go:320] Caches are synced for resource quota
	I0930 20:54:51.479892       1 shared_informer.go:320] Caches are synced for garbage collector
	I0930 20:54:51.479925       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0930 20:54:51.502415       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [1227b08909e6c16d26b96aa632e46329c54c0e7632450797896d303b6c2fbfe5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 20:54:48.589696       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 20:54:48.603497       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.233"]
	E0930 20:54:48.603562       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 20:54:48.657116       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 20:54:48.657280       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 20:54:48.657367       1 server_linux.go:169] "Using iptables Proxier"
	I0930 20:54:48.662792       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 20:54:48.663473       1 server.go:483] "Version info" version="v1.31.1"
	I0930 20:54:48.663513       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:54:48.665531       1 config.go:199] "Starting service config controller"
	I0930 20:54:48.665590       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 20:54:48.665621       1 config.go:105] "Starting endpoint slice config controller"
	I0930 20:54:48.665627       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 20:54:48.666454       1 config.go:328] "Starting node config controller"
	I0930 20:54:48.666487       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 20:54:48.765879       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 20:54:48.765977       1 shared_informer.go:320] Caches are synced for service config
	I0930 20:54:48.767149       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [5fbaf5340219dd96a03e65f0db4a8f2425f3922473b49a407d6261c8b3081d6c] <==
	
	
	==> kube-scheduler [7a5f2f3caa6596bdeec851c98c42549151d6a8fa20b96b21a5a850b7c0e5424c] <==
	
	
	==> kube-scheduler [a7ab40e16aaa5b7de76fc8fbedd303f0e7a71cf4fd17742ee8e5e5dca9d3b340] <==
	I0930 20:54:45.299665       1 serving.go:386] Generated self-signed cert in-memory
	W0930 20:54:47.499689       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0930 20:54:47.499736       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0930 20:54:47.499746       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0930 20:54:47.499756       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0930 20:54:47.590506       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0930 20:54:47.593137       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:54:47.597436       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0930 20:54:47.597536       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0930 20:54:47.605776       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0930 20:54:47.605949       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0930 20:54:47.697741       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 20:54:44 kubernetes-upgrade-810093 kubelet[3883]: I0930 20:54:44.025092    3883 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/b504e2dca6457ee727177271c36cfed5-etcd-data\") pod \"etcd-kubernetes-upgrade-810093\" (UID: \"b504e2dca6457ee727177271c36cfed5\") " pod="kube-system/etcd-kubernetes-upgrade-810093"
	Sep 30 20:54:44 kubernetes-upgrade-810093 kubelet[3883]: I0930 20:54:44.025107    3883 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b752add53ea2542f088c77bdf747b8d-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-810093\" (UID: \"8b752add53ea2542f088c77bdf747b8d\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-810093"
	Sep 30 20:54:44 kubernetes-upgrade-810093 kubelet[3883]: I0930 20:54:44.025127    3883 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af905448cbe4d30a92601b16cf788ce1-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-810093\" (UID: \"af905448cbe4d30a92601b16cf788ce1\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-810093"
	Sep 30 20:54:44 kubernetes-upgrade-810093 kubelet[3883]: I0930 20:54:44.225622    3883 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-810093"
	Sep 30 20:54:44 kubernetes-upgrade-810093 kubelet[3883]: E0930 20:54:44.226444    3883 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.233:8443: connect: connection refused" node="kubernetes-upgrade-810093"
	Sep 30 20:54:44 kubernetes-upgrade-810093 kubelet[3883]: I0930 20:54:44.263943    3883 scope.go:117] "RemoveContainer" containerID="67efcceedcc1938ff6f66b0ea6cc17e34bda9faafb53db30f477c06b6e67ce7e"
	Sep 30 20:54:44 kubernetes-upgrade-810093 kubelet[3883]: I0930 20:54:44.264698    3883 scope.go:117] "RemoveContainer" containerID="7b10de2fc402b0e70a6a180c0350151ea8f75757a4a692274cbf1cc58d95e9b9"
	Sep 30 20:54:44 kubernetes-upgrade-810093 kubelet[3883]: I0930 20:54:44.266032    3883 scope.go:117] "RemoveContainer" containerID="8a55eb55be7941855076f7edfd053f259d99b7260e868cee65a9ba01a4c35171"
	Sep 30 20:54:44 kubernetes-upgrade-810093 kubelet[3883]: I0930 20:54:44.266555    3883 scope.go:117] "RemoveContainer" containerID="7a5f2f3caa6596bdeec851c98c42549151d6a8fa20b96b21a5a850b7c0e5424c"
	Sep 30 20:54:44 kubernetes-upgrade-810093 kubelet[3883]: E0930 20:54:44.424924    3883 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-810093?timeout=10s\": dial tcp 192.168.39.233:8443: connect: connection refused" interval="800ms"
	Sep 30 20:54:44 kubernetes-upgrade-810093 kubelet[3883]: I0930 20:54:44.628223    3883 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-810093"
	Sep 30 20:54:47 kubernetes-upgrade-810093 kubelet[3883]: I0930 20:54:47.660743    3883 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-810093"
	Sep 30 20:54:47 kubernetes-upgrade-810093 kubelet[3883]: I0930 20:54:47.660871    3883 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-810093"
	Sep 30 20:54:47 kubernetes-upgrade-810093 kubelet[3883]: I0930 20:54:47.660919    3883 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 30 20:54:47 kubernetes-upgrade-810093 kubelet[3883]: I0930 20:54:47.662382    3883 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 30 20:54:47 kubernetes-upgrade-810093 kubelet[3883]: I0930 20:54:47.799702    3883 apiserver.go:52] "Watching apiserver"
	Sep 30 20:54:47 kubernetes-upgrade-810093 kubelet[3883]: I0930 20:54:47.818433    3883 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 30 20:54:47 kubernetes-upgrade-810093 kubelet[3883]: I0930 20:54:47.913966    3883 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af352ae5-8860-4b35-9088-6f99aeb0db9d-xtables-lock\") pod \"kube-proxy-pqlrj\" (UID: \"af352ae5-8860-4b35-9088-6f99aeb0db9d\") " pod="kube-system/kube-proxy-pqlrj"
	Sep 30 20:54:47 kubernetes-upgrade-810093 kubelet[3883]: I0930 20:54:47.915125    3883 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af352ae5-8860-4b35-9088-6f99aeb0db9d-lib-modules\") pod \"kube-proxy-pqlrj\" (UID: \"af352ae5-8860-4b35-9088-6f99aeb0db9d\") " pod="kube-system/kube-proxy-pqlrj"
	Sep 30 20:54:47 kubernetes-upgrade-810093 kubelet[3883]: I0930 20:54:47.915252    3883 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a33c33cb-48f8-4fa9-a8dd-95a6451bcd3c-tmp\") pod \"storage-provisioner\" (UID: \"a33c33cb-48f8-4fa9-a8dd-95a6451bcd3c\") " pod="kube-system/storage-provisioner"
	Sep 30 20:54:48 kubernetes-upgrade-810093 kubelet[3883]: I0930 20:54:48.104675    3883 scope.go:117] "RemoveContainer" containerID="ebecc3e71f4007d0365b4c87543b72b9d2c5f5ba7d29835a0ce6761e88986df4"
	Sep 30 20:54:48 kubernetes-upgrade-810093 kubelet[3883]: I0930 20:54:48.105199    3883 scope.go:117] "RemoveContainer" containerID="001a67edfd9f5a38b0b3e7f99a5d666ab33b2bdeaa3c2dea2d16622669526006"
	Sep 30 20:54:48 kubernetes-upgrade-810093 kubelet[3883]: I0930 20:54:48.105585    3883 scope.go:117] "RemoveContainer" containerID="c50a6855e5a06b82abda38e7eee5fddd157ea694fed5ff2e3ae6a698d2e775aa"
	Sep 30 20:54:48 kubernetes-upgrade-810093 kubelet[3883]: I0930 20:54:48.105947    3883 scope.go:117] "RemoveContainer" containerID="5fbaf5340219dd96a03e65f0db4a8f2425f3922473b49a407d6261c8b3081d6c"
	Sep 30 20:54:50 kubernetes-upgrade-810093 kubelet[3883]: I0930 20:54:50.057910    3883 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [acfbb51c55d06761c975af0cdc9354c8f2117fa22de609480d89b0e1fdff2659] <==
	I0930 20:54:48.387884       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0930 20:54:48.415253       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0930 20:54:48.415489       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [ebecc3e71f4007d0365b4c87543b72b9d2c5f5ba7d29835a0ce6761e88986df4] <==
	I0930 20:54:28.661573       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0930 20:54:28.665236       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-810093 -n kubernetes-upgrade-810093
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-810093 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-810093" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-810093
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-810093: (1.191219984s)
--- FAIL: TestKubernetesUpgrade (438.01s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (76.77s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-617008 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-617008 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m12.341310835s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-617008] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-617008" primary control-plane node in "pause-617008" cluster
	* Updating the running kvm2 "pause-617008" VM ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-617008" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 20:53:33.366392   58154 out.go:345] Setting OutFile to fd 1 ...
	I0930 20:53:33.366542   58154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:53:33.366551   58154 out.go:358] Setting ErrFile to fd 2...
	I0930 20:53:33.366555   58154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:53:33.366764   58154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 20:53:33.367310   58154 out.go:352] Setting JSON to false
	I0930 20:53:33.368355   58154 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5756,"bootTime":1727723857,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 20:53:33.368417   58154 start.go:139] virtualization: kvm guest
	I0930 20:53:33.370838   58154 out.go:177] * [pause-617008] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 20:53:33.372703   58154 notify.go:220] Checking for updates...
	I0930 20:53:33.373219   58154 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 20:53:33.374747   58154 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 20:53:33.376346   58154 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:53:33.377962   58154 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:53:33.379644   58154 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 20:53:33.381205   58154 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 20:53:33.383283   58154 config.go:182] Loaded profile config "pause-617008": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:53:33.384019   58154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:53:33.384118   58154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:53:33.402335   58154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40677
	I0930 20:53:33.402858   58154 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:53:33.403615   58154 main.go:141] libmachine: Using API Version  1
	I0930 20:53:33.403635   58154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:53:33.404057   58154 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:53:33.404260   58154 main.go:141] libmachine: (pause-617008) Calling .DriverName
	I0930 20:53:33.404551   58154 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 20:53:33.404897   58154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:53:33.404937   58154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:53:33.424375   58154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37805
	I0930 20:53:33.424779   58154 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:53:33.425462   58154 main.go:141] libmachine: Using API Version  1
	I0930 20:53:33.425494   58154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:53:33.425872   58154 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:53:33.426017   58154 main.go:141] libmachine: (pause-617008) Calling .DriverName
	I0930 20:53:33.469251   58154 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 20:53:33.470624   58154 start.go:297] selected driver: kvm2
	I0930 20:53:33.470645   58154 start.go:901] validating driver "kvm2" against &{Name:pause-617008 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-617008 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.245 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:53:33.470816   58154 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 20:53:33.471291   58154 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 20:53:33.471404   58154 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 20:53:33.492337   58154 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 20:53:33.493542   58154 cni.go:84] Creating CNI manager for ""
	I0930 20:53:33.493624   58154 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 20:53:33.493714   58154 start.go:340] cluster config:
	{Name:pause-617008 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-617008 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.245 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:53:33.493920   58154 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 20:53:33.496113   58154 out.go:177] * Starting "pause-617008" primary control-plane node in "pause-617008" cluster
	I0930 20:53:33.497509   58154 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 20:53:33.497579   58154 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 20:53:33.497617   58154 cache.go:56] Caching tarball of preloaded images
	I0930 20:53:33.497758   58154 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 20:53:33.497774   58154 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 20:53:33.497940   58154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/pause-617008/config.json ...
	I0930 20:53:33.498198   58154 start.go:360] acquireMachinesLock for pause-617008: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 20:53:52.252936   58154 start.go:364] duration metric: took 18.75468419s to acquireMachinesLock for "pause-617008"
	I0930 20:53:52.252993   58154 start.go:96] Skipping create...Using existing machine configuration
	I0930 20:53:52.253004   58154 fix.go:54] fixHost starting: 
	I0930 20:53:52.253417   58154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:53:52.253481   58154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:53:52.272146   58154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38179
	I0930 20:53:52.272661   58154 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:53:52.273286   58154 main.go:141] libmachine: Using API Version  1
	I0930 20:53:52.273313   58154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:53:52.273695   58154 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:53:52.273885   58154 main.go:141] libmachine: (pause-617008) Calling .DriverName
	I0930 20:53:52.274032   58154 main.go:141] libmachine: (pause-617008) Calling .GetState
	I0930 20:53:52.275926   58154 fix.go:112] recreateIfNeeded on pause-617008: state=Running err=<nil>
	W0930 20:53:52.275949   58154 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 20:53:52.277961   58154 out.go:177] * Updating the running kvm2 "pause-617008" VM ...
	I0930 20:53:52.279328   58154 machine.go:93] provisionDockerMachine start ...
	I0930 20:53:52.279358   58154 main.go:141] libmachine: (pause-617008) Calling .DriverName
	I0930 20:53:52.279612   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHHostname
	I0930 20:53:52.282459   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:53:52.282930   58154 main.go:141] libmachine: (pause-617008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:66:73", ip: ""} in network mk-pause-617008: {Iface:virbr2 ExpiryTime:2024-09-30 21:52:49 +0000 UTC Type:0 Mac:52:54:00:1c:66:73 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:pause-617008 Clientid:01:52:54:00:1c:66:73}
	I0930 20:53:52.282966   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined IP address 192.168.61.245 and MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:53:52.283175   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHPort
	I0930 20:53:52.283347   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHKeyPath
	I0930 20:53:52.283503   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHKeyPath
	I0930 20:53:52.283627   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHUsername
	I0930 20:53:52.283770   58154 main.go:141] libmachine: Using SSH client type: native
	I0930 20:53:52.284019   58154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0930 20:53:52.284034   58154 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 20:53:52.405051   58154 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-617008
	
	I0930 20:53:52.405082   58154 main.go:141] libmachine: (pause-617008) Calling .GetMachineName
	I0930 20:53:52.405360   58154 buildroot.go:166] provisioning hostname "pause-617008"
	I0930 20:53:52.405382   58154 main.go:141] libmachine: (pause-617008) Calling .GetMachineName
	I0930 20:53:52.405563   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHHostname
	I0930 20:53:52.408948   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:53:52.409377   58154 main.go:141] libmachine: (pause-617008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:66:73", ip: ""} in network mk-pause-617008: {Iface:virbr2 ExpiryTime:2024-09-30 21:52:49 +0000 UTC Type:0 Mac:52:54:00:1c:66:73 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:pause-617008 Clientid:01:52:54:00:1c:66:73}
	I0930 20:53:52.409411   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined IP address 192.168.61.245 and MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:53:52.409672   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHPort
	I0930 20:53:52.409882   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHKeyPath
	I0930 20:53:52.410041   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHKeyPath
	I0930 20:53:52.410210   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHUsername
	I0930 20:53:52.410385   58154 main.go:141] libmachine: Using SSH client type: native
	I0930 20:53:52.410629   58154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0930 20:53:52.410646   58154 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-617008 && echo "pause-617008" | sudo tee /etc/hostname
	I0930 20:53:52.544422   58154 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-617008
	
	I0930 20:53:52.544452   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHHostname
	I0930 20:53:52.547755   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:53:52.548026   58154 main.go:141] libmachine: (pause-617008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:66:73", ip: ""} in network mk-pause-617008: {Iface:virbr2 ExpiryTime:2024-09-30 21:52:49 +0000 UTC Type:0 Mac:52:54:00:1c:66:73 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:pause-617008 Clientid:01:52:54:00:1c:66:73}
	I0930 20:53:52.548064   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined IP address 192.168.61.245 and MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:53:52.548269   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHPort
	I0930 20:53:52.548471   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHKeyPath
	I0930 20:53:52.548652   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHKeyPath
	I0930 20:53:52.548849   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHUsername
	I0930 20:53:52.549031   58154 main.go:141] libmachine: Using SSH client type: native
	I0930 20:53:52.549272   58154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0930 20:53:52.549299   58154 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-617008' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-617008/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-617008' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 20:53:52.656517   58154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 20:53:52.656552   58154 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 20:53:52.656574   58154 buildroot.go:174] setting up certificates
	I0930 20:53:52.656584   58154 provision.go:84] configureAuth start
	I0930 20:53:52.656596   58154 main.go:141] libmachine: (pause-617008) Calling .GetMachineName
	I0930 20:53:52.656880   58154 main.go:141] libmachine: (pause-617008) Calling .GetIP
	I0930 20:53:52.659654   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:53:52.660084   58154 main.go:141] libmachine: (pause-617008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:66:73", ip: ""} in network mk-pause-617008: {Iface:virbr2 ExpiryTime:2024-09-30 21:52:49 +0000 UTC Type:0 Mac:52:54:00:1c:66:73 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:pause-617008 Clientid:01:52:54:00:1c:66:73}
	I0930 20:53:52.660109   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined IP address 192.168.61.245 and MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:53:52.660361   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHHostname
	I0930 20:53:52.663091   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:53:52.663546   58154 main.go:141] libmachine: (pause-617008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:66:73", ip: ""} in network mk-pause-617008: {Iface:virbr2 ExpiryTime:2024-09-30 21:52:49 +0000 UTC Type:0 Mac:52:54:00:1c:66:73 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:pause-617008 Clientid:01:52:54:00:1c:66:73}
	I0930 20:53:52.663572   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined IP address 192.168.61.245 and MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:53:52.663740   58154 provision.go:143] copyHostCerts
	I0930 20:53:52.663802   58154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 20:53:52.663815   58154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:53:52.663904   58154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 20:53:52.664031   58154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 20:53:52.664044   58154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:53:52.664085   58154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 20:53:52.664167   58154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 20:53:52.664180   58154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:53:52.664218   58154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 20:53:52.664345   58154 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.pause-617008 san=[127.0.0.1 192.168.61.245 localhost minikube pause-617008]
	I0930 20:53:52.941248   58154 provision.go:177] copyRemoteCerts
	I0930 20:53:52.941319   58154 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 20:53:52.941346   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHHostname
	I0930 20:53:52.944788   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:53:52.945202   58154 main.go:141] libmachine: (pause-617008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:66:73", ip: ""} in network mk-pause-617008: {Iface:virbr2 ExpiryTime:2024-09-30 21:52:49 +0000 UTC Type:0 Mac:52:54:00:1c:66:73 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:pause-617008 Clientid:01:52:54:00:1c:66:73}
	I0930 20:53:52.945230   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined IP address 192.168.61.245 and MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:53:52.945430   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHPort
	I0930 20:53:52.945631   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHKeyPath
	I0930 20:53:52.945765   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHUsername
	I0930 20:53:52.945939   58154 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/pause-617008/id_rsa Username:docker}
	I0930 20:53:53.042747   58154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 20:53:53.073285   58154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0930 20:53:53.106376   58154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 20:53:53.139201   58154 provision.go:87] duration metric: took 482.596805ms to configureAuth
	I0930 20:53:53.139240   58154 buildroot.go:189] setting minikube options for container-runtime
	I0930 20:53:53.139574   58154 config.go:182] Loaded profile config "pause-617008": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:53:53.139706   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHHostname
	I0930 20:53:53.142957   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:53:53.143401   58154 main.go:141] libmachine: (pause-617008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:66:73", ip: ""} in network mk-pause-617008: {Iface:virbr2 ExpiryTime:2024-09-30 21:52:49 +0000 UTC Type:0 Mac:52:54:00:1c:66:73 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:pause-617008 Clientid:01:52:54:00:1c:66:73}
	I0930 20:53:53.143430   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined IP address 192.168.61.245 and MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:53:53.143682   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHPort
	I0930 20:53:53.143888   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHKeyPath
	I0930 20:53:53.144072   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHKeyPath
	I0930 20:53:53.144203   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHUsername
	I0930 20:53:53.144423   58154 main.go:141] libmachine: Using SSH client type: native
	I0930 20:53:53.144635   58154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0930 20:53:53.144652   58154 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 20:53:58.661155   58154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 20:53:58.661185   58154 machine.go:96] duration metric: took 6.381838913s to provisionDockerMachine
	I0930 20:53:58.661199   58154 start.go:293] postStartSetup for "pause-617008" (driver="kvm2")
	I0930 20:53:58.661210   58154 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 20:53:58.661229   58154 main.go:141] libmachine: (pause-617008) Calling .DriverName
	I0930 20:53:58.661577   58154 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 20:53:58.661610   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHHostname
	I0930 20:53:58.664391   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:53:58.664765   58154 main.go:141] libmachine: (pause-617008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:66:73", ip: ""} in network mk-pause-617008: {Iface:virbr2 ExpiryTime:2024-09-30 21:52:49 +0000 UTC Type:0 Mac:52:54:00:1c:66:73 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:pause-617008 Clientid:01:52:54:00:1c:66:73}
	I0930 20:53:58.664794   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined IP address 192.168.61.245 and MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:53:58.665012   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHPort
	I0930 20:53:58.665199   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHKeyPath
	I0930 20:53:58.665368   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHUsername
	I0930 20:53:58.665491   58154 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/pause-617008/id_rsa Username:docker}
	I0930 20:53:58.746211   58154 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 20:53:58.750295   58154 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 20:53:58.750321   58154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 20:53:58.750400   58154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 20:53:58.750495   58154 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 20:53:58.750616   58154 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 20:53:58.760313   58154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:53:58.783186   58154 start.go:296] duration metric: took 121.974331ms for postStartSetup
	I0930 20:53:58.783227   58154 fix.go:56] duration metric: took 6.530222152s for fixHost
	I0930 20:53:58.783252   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHHostname
	I0930 20:53:58.785915   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:53:58.786212   58154 main.go:141] libmachine: (pause-617008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:66:73", ip: ""} in network mk-pause-617008: {Iface:virbr2 ExpiryTime:2024-09-30 21:52:49 +0000 UTC Type:0 Mac:52:54:00:1c:66:73 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:pause-617008 Clientid:01:52:54:00:1c:66:73}
	I0930 20:53:58.786244   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined IP address 192.168.61.245 and MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:53:58.786431   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHPort
	I0930 20:53:58.786650   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHKeyPath
	I0930 20:53:58.786826   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHKeyPath
	I0930 20:53:58.786981   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHUsername
	I0930 20:53:58.787131   58154 main.go:141] libmachine: Using SSH client type: native
	I0930 20:53:58.787302   58154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.245 22 <nil> <nil>}
	I0930 20:53:58.787315   58154 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 20:53:58.893095   58154 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727729638.884867372
	
	I0930 20:53:58.893123   58154 fix.go:216] guest clock: 1727729638.884867372
	I0930 20:53:58.893132   58154 fix.go:229] Guest: 2024-09-30 20:53:58.884867372 +0000 UTC Remote: 2024-09-30 20:53:58.783232541 +0000 UTC m=+25.468079275 (delta=101.634831ms)
	I0930 20:53:58.893169   58154 fix.go:200] guest clock delta is within tolerance: 101.634831ms
	I0930 20:53:58.893174   58154 start.go:83] releasing machines lock for "pause-617008", held for 6.640205452s
	I0930 20:53:58.893194   58154 main.go:141] libmachine: (pause-617008) Calling .DriverName
	I0930 20:53:58.893481   58154 main.go:141] libmachine: (pause-617008) Calling .GetIP
	I0930 20:53:58.896225   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:53:58.896569   58154 main.go:141] libmachine: (pause-617008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:66:73", ip: ""} in network mk-pause-617008: {Iface:virbr2 ExpiryTime:2024-09-30 21:52:49 +0000 UTC Type:0 Mac:52:54:00:1c:66:73 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:pause-617008 Clientid:01:52:54:00:1c:66:73}
	I0930 20:53:58.896597   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined IP address 192.168.61.245 and MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:53:58.896775   58154 main.go:141] libmachine: (pause-617008) Calling .DriverName
	I0930 20:53:58.897254   58154 main.go:141] libmachine: (pause-617008) Calling .DriverName
	I0930 20:53:58.897460   58154 main.go:141] libmachine: (pause-617008) Calling .DriverName
	I0930 20:53:58.897558   58154 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 20:53:58.897615   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHHostname
	I0930 20:53:58.897720   58154 ssh_runner.go:195] Run: cat /version.json
	I0930 20:53:58.897741   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHHostname
	I0930 20:53:58.900624   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:53:58.900755   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:53:58.901021   58154 main.go:141] libmachine: (pause-617008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:66:73", ip: ""} in network mk-pause-617008: {Iface:virbr2 ExpiryTime:2024-09-30 21:52:49 +0000 UTC Type:0 Mac:52:54:00:1c:66:73 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:pause-617008 Clientid:01:52:54:00:1c:66:73}
	I0930 20:53:58.901047   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined IP address 192.168.61.245 and MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:53:58.901181   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHPort
	I0930 20:53:58.901222   58154 main.go:141] libmachine: (pause-617008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:66:73", ip: ""} in network mk-pause-617008: {Iface:virbr2 ExpiryTime:2024-09-30 21:52:49 +0000 UTC Type:0 Mac:52:54:00:1c:66:73 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:pause-617008 Clientid:01:52:54:00:1c:66:73}
	I0930 20:53:58.901265   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined IP address 192.168.61.245 and MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:53:58.901380   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHKeyPath
	I0930 20:53:58.901390   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHPort
	I0930 20:53:58.901520   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHUsername
	I0930 20:53:58.901596   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHKeyPath
	I0930 20:53:58.901675   58154 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/pause-617008/id_rsa Username:docker}
	I0930 20:53:58.901712   58154 main.go:141] libmachine: (pause-617008) Calling .GetSSHUsername
	I0930 20:53:58.901826   58154 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/pause-617008/id_rsa Username:docker}
	I0930 20:53:58.980395   58154 ssh_runner.go:195] Run: systemctl --version
	I0930 20:53:59.014798   58154 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 20:53:59.174590   58154 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 20:53:59.180361   58154 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 20:53:59.180436   58154 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 20:53:59.189859   58154 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
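The find/mv step above sidelines any bridge or podman CNI configs (renaming them to *.mk_disabled) so they cannot conflict with the CNI that minikube manages; here it finds nothing to do. A rough Go equivalent of that rename pass, assuming the /etc/cni/net.d location taken from the log:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	cniDir := "/etc/cni/net.d" // path taken from the log above
	entries, err := os.ReadDir(cniDir)
	if err != nil {
		fmt.Println("nothing to disable:", err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Same filter as the `find ... -name *bridge* -or -name *podman*` invocation above.
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			old := filepath.Join(cniDir, name)
			if err := os.Rename(old, old+".mk_disabled"); err != nil {
				fmt.Println("rename failed:", err)
			}
		}
	}
}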
	I0930 20:53:59.189908   58154 start.go:495] detecting cgroup driver to use...
	I0930 20:53:59.189978   58154 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 20:53:59.208177   58154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 20:53:59.227421   58154 docker.go:217] disabling cri-docker service (if available) ...
	I0930 20:53:59.227492   58154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 20:53:59.246174   58154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 20:53:59.264716   58154 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 20:53:59.413290   58154 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 20:53:59.582927   58154 docker.go:233] disabling docker service ...
	I0930 20:53:59.583008   58154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 20:53:59.605948   58154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 20:53:59.623987   58154 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 20:53:59.764444   58154 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 20:53:59.900769   58154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 20:53:59.915846   58154 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 20:53:59.935051   58154 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 20:53:59.935120   58154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:53:59.945340   58154 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 20:53:59.945415   58154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:53:59.955984   58154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:53:59.966348   58154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:53:59.976813   58154 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 20:53:59.988539   58154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:53:59.999468   58154 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:54:00.010889   58154 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
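The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.10 pause image, the cgroupfs cgroup manager, and a "pod"-scoped conmon cgroup, with the last two commands adding a default_sysctls entry that opens unprivileged ports. A sketch of the image/cgroup rewrites in Go, applied to a made-up starting config rather than the real file on the VM:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical starting contents; the actual 02-crio.conf on the VM differs.
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n\n[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"

	// Mirror the sed expressions from the log: swap the pause image and cgroup manager,
	// drop conmon_cgroup, then re-add it as "pod" right after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}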
	I0930 20:54:00.021116   58154 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 20:54:00.031091   58154 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 20:54:00.040717   58154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:54:00.165907   58154 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 20:54:01.299823   58154 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.133872539s)
	I0930 20:54:01.299864   58154 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 20:54:01.299936   58154 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 20:54:01.306558   58154 start.go:563] Will wait 60s for crictl version
	I0930 20:54:01.306612   58154 ssh_runner.go:195] Run: which crictl
	I0930 20:54:01.310227   58154 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 20:54:01.344236   58154 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
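start.go then waits for crictl and reads its version banner (the four key/value lines above) before moving on. A small stdlib-only sketch of parsing that banner into a map; parseCrictlVersion is a hypothetical helper, not minikube's parser:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseCrictlVersion turns the "Key:  value" lines printed by `crictl version`
// (as captured in the log above) into a map.
func parseCrictlVersion(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	return fields
}

func main() {
	out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.29.1\nRuntimeApiVersion:  v1\n"
	v := parseCrictlVersion(out)
	fmt.Println(v["RuntimeName"], v["RuntimeVersion"]) // cri-o 1.29.1
}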
	I0930 20:54:01.344308   58154 ssh_runner.go:195] Run: crio --version
	I0930 20:54:01.373382   58154 ssh_runner.go:195] Run: crio --version
	I0930 20:54:01.405434   58154 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 20:54:01.407000   58154 main.go:141] libmachine: (pause-617008) Calling .GetIP
	I0930 20:54:01.409803   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:54:01.410164   58154 main.go:141] libmachine: (pause-617008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:66:73", ip: ""} in network mk-pause-617008: {Iface:virbr2 ExpiryTime:2024-09-30 21:52:49 +0000 UTC Type:0 Mac:52:54:00:1c:66:73 Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:pause-617008 Clientid:01:52:54:00:1c:66:73}
	I0930 20:54:01.410191   58154 main.go:141] libmachine: (pause-617008) DBG | domain pause-617008 has defined IP address 192.168.61.245 and MAC address 52:54:00:1c:66:73 in network mk-pause-617008
	I0930 20:54:01.410452   58154 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0930 20:54:01.414687   58154 kubeadm.go:883] updating cluster {Name:pause-617008 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:pause-617008 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.245 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-se
curity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 20:54:01.414853   58154 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 20:54:01.414909   58154 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 20:54:01.463639   58154 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 20:54:01.463669   58154 crio.go:433] Images already preloaded, skipping extraction
	I0930 20:54:01.463738   58154 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 20:54:01.499537   58154 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 20:54:01.499572   58154 cache_images.go:84] Images are preloaded, skipping loading
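The two `sudo crictl images --output json` runs above confirm that every image required for v1.31.1 is already present, so neither preload extraction nor a cache load is needed. A sketch of such a check, assuming crictl's JSON has the shape {"images":[{"repoTags":[...]}]} (the field names are an assumption, not taken from this log):

package main

import (
	"encoding/json"
	"fmt"
)

// imageList models only the part of `crictl images --output json` this check needs.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// allPreloaded reports whether every required tag shows up in the crictl output.
func allPreloaded(raw []byte, required []string) (bool, error) {
	var out imageList
	if err := json.Unmarshal(raw, &out); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range out.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.10"]}]}`)
	ok, _ := allPreloaded(raw, []string{"registry.k8s.io/pause:3.10"})
	fmt.Println("all images preloaded:", ok)
}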
	I0930 20:54:01.499581   58154 kubeadm.go:934] updating node { 192.168.61.245 8443 v1.31.1 crio true true} ...
	I0930 20:54:01.499703   58154 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-617008 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-617008 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 20:54:01.499780   58154 ssh_runner.go:195] Run: crio config
	I0930 20:54:01.551390   58154 cni.go:84] Creating CNI manager for ""
	I0930 20:54:01.551416   58154 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 20:54:01.551427   58154 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 20:54:01.551446   58154 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.245 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-617008 NodeName:pause-617008 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 20:54:01.551615   58154 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-617008"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 20:54:01.551676   58154 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 20:54:01.562033   58154 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 20:54:01.562117   58154 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 20:54:01.571773   58154 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0930 20:54:01.597772   58154 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 20:54:01.658388   58154 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
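The kubeadm config printed above is one multi-document YAML bundle (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) and has just been copied to /var/tmp/minikube/kubeadm.yaml.new. A stdlib-only sketch that lists the document kinds in such a bundle, using a trimmed stand-in for the real file:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Trimmed stand-in for the generated kubeadm.yaml shown above.
	cfg := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
	for i, doc := range strings.Split(cfg, "---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}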
	I0930 20:54:01.695346   58154 ssh_runner.go:195] Run: grep 192.168.61.245	control-plane.minikube.internal$ /etc/hosts
	I0930 20:54:01.705284   58154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:54:01.972498   58154 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:54:02.069708   58154 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/pause-617008 for IP: 192.168.61.245
	I0930 20:54:02.069732   58154 certs.go:194] generating shared ca certs ...
	I0930 20:54:02.069752   58154 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:54:02.069937   58154 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 20:54:02.069997   58154 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 20:54:02.070011   58154 certs.go:256] generating profile certs ...
	I0930 20:54:02.070112   58154 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/pause-617008/client.key
	I0930 20:54:02.070183   58154 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/pause-617008/apiserver.key.3b352225
	I0930 20:54:02.070231   58154 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/pause-617008/proxy-client.key
	I0930 20:54:02.070379   58154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 20:54:02.070416   58154 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 20:54:02.070422   58154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 20:54:02.070455   58154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 20:54:02.070507   58154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 20:54:02.070547   58154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 20:54:02.070601   58154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:54:02.071215   58154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 20:54:02.119340   58154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 20:54:02.255983   58154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 20:54:02.332144   58154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 20:54:02.475160   58154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/pause-617008/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0930 20:54:02.515038   58154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/pause-617008/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 20:54:02.560447   58154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/pause-617008/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 20:54:02.593760   58154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/pause-617008/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 20:54:02.666847   58154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 20:54:02.736436   58154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 20:54:02.813566   58154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 20:54:02.869025   58154 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 20:54:02.895187   58154 ssh_runner.go:195] Run: openssl version
	I0930 20:54:02.902205   58154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 20:54:02.919454   58154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 20:54:02.931816   58154 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 20:54:02.931878   58154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 20:54:02.942763   58154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 20:54:02.956826   58154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 20:54:02.982904   58154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 20:54:02.987513   58154 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 20:54:02.987608   58154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 20:54:02.994779   58154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 20:54:03.008825   58154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 20:54:03.035713   58154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:54:03.049214   58154 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:54:03.049288   58154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:54:03.065523   58154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 20:54:03.079471   58154 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 20:54:03.084928   58154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 20:54:03.093625   58154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 20:54:03.112619   58154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 20:54:03.119373   58154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 20:54:03.156442   58154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 20:54:03.162979   58154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
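Each `openssl x509 -noout -in <cert> -checkend 86400` run above asks whether the certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. The equivalent check written with Go's crypto/x509, as an illustration rather than minikube's actual code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Usage: checkend <cert.pem>  (mirrors `openssl x509 -noout -checkend 86400`)
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}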
	I0930 20:54:03.170716   58154 kubeadm.go:392] StartCluster: {Name:pause-617008 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:pause-617008 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.245 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-secur
ity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:54:03.170878   58154 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 20:54:03.170955   58154 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 20:54:03.269589   58154 cri.go:89] found id: "d296639a17992e0aebc034d97462720012f027289d5c5493e42321b684af7f96"
	I0930 20:54:03.269621   58154 cri.go:89] found id: "6df05d8d5b78e13b99c7f3d97ae1601970fdb43ec53a5ce16a7849989275e530"
	I0930 20:54:03.269628   58154 cri.go:89] found id: "6f035fcf4f27669dc97bd27ea15da5fb8a5062c9f148aed60dc3d0994ffbbe1f"
	I0930 20:54:03.269634   58154 cri.go:89] found id: "a84693bd8b3d59922102b13e8ce27aa22c99328714d1c62d8213e829134de075"
	I0930 20:54:03.269639   58154 cri.go:89] found id: "9a3af029d6ae839e2a040b471d3e46d5839ef9283eadc5cb750c9b32e8f31bed"
	I0930 20:54:03.269645   58154 cri.go:89] found id: "9a8345e346ba02d50960ac01b8a2e6a59224ad5086873c2d86abd1ce3fd488e0"
	I0930 20:54:03.269651   58154 cri.go:89] found id: "3dd5b47d40232da9e1c3db3b0a185514e55a7900b90f79ccf82e82e7f14574ef"
	I0930 20:54:03.269656   58154 cri.go:89] found id: "5969820fe0736a00a318e9b88b2319ea55d502b90436a825975a302c6173eabb"
	I0930 20:54:03.269661   58154 cri.go:89] found id: "1c05a5808f1c5162b4edabbe09b054deaf4e4132d32fb5043eecce39de72d79d"
	I0930 20:54:03.269671   58154 cri.go:89] found id: "41abecf47c6f06d0bac09a7486a88e10860a7927ffef9d17eb26914150612dff"
	I0930 20:54:03.269681   58154 cri.go:89] found id: "5cd7542053b678d02c4e06433340f520eb5c20afae4506e70b38a99eec3440ca"
	I0930 20:54:03.269686   58154 cri.go:89] found id: "aff264368368eaa98382498c90eb66582bf30743b58a08cb766ca025cf1abe7e"
	I0930 20:54:03.269698   58154 cri.go:89] found id: ""
	I0930 20:54:03.269763   58154 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
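The excerpt ends with StartCluster enumerating the kube-system containers via `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`, whose --quiet output is simply one container ID per line (the "found id:" entries above). A trivial Go sketch of turning that output into a slice; idsFromQuietOutput is a hypothetical helper name:

package main

import (
	"fmt"
	"strings"
)

// idsFromQuietOutput splits the one-ID-per-line output of `crictl ps --quiet`
// into a slice, dropping blank lines.
func idsFromQuietOutput(out string) []string {
	var ids []string
	for _, line := range strings.Split(out, "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids
}

func main() {
	out := "d296639a17992e0aebc034d97462720012f027289d5c5493e42321b684af7f96\n6df05d8d5b78e13b99c7f3d97ae1601970fdb43ec53a5ce16a7849989275e530\n\n"
	fmt.Println(len(idsFromQuietOutput(out)), "containers found") // 2 containers found
}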
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-617008 -n pause-617008
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-617008 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-617008 logs -n 25: (1.379180967s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-592556                | NoKubernetes-592556       | jenkins | v1.34.0 | 30 Sep 24 20:50 UTC | 30 Sep 24 20:50 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-456540             | running-upgrade-456540    | jenkins | v1.34.0 | 30 Sep 24 20:50 UTC | 30 Sep 24 20:51 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-241258             | stopped-upgrade-241258    | jenkins | v1.34.0 | 30 Sep 24 20:50 UTC | 30 Sep 24 20:50 UTC |
	| ssh     | -p NoKubernetes-592556 sudo           | NoKubernetes-592556       | jenkins | v1.34.0 | 30 Sep 24 20:50 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-592556                | NoKubernetes-592556       | jenkins | v1.34.0 | 30 Sep 24 20:50 UTC | 30 Sep 24 20:50 UTC |
	| start   | -p cert-options-280515                | cert-options-280515       | jenkins | v1.34.0 | 30 Sep 24 20:50 UTC | 30 Sep 24 20:51 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-592556                | NoKubernetes-592556       | jenkins | v1.34.0 | 30 Sep 24 20:50 UTC | 30 Sep 24 20:51 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-280515 ssh               | cert-options-280515       | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC | 30 Sep 24 20:51 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-280515 -- sudo        | cert-options-280515       | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC | 30 Sep 24 20:51 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-280515                | cert-options-280515       | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC | 30 Sep 24 20:51 UTC |
	| start   | -p force-systemd-flag-188130          | force-systemd-flag-188130 | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC | 30 Sep 24 20:52 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-592556 sudo           | NoKubernetes-592556       | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-592556                | NoKubernetes-592556       | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC | 30 Sep 24 20:51 UTC |
	| start   | -p cert-expiration-988243             | cert-expiration-988243    | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC | 30 Sep 24 20:52 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-456540             | running-upgrade-456540    | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC | 30 Sep 24 20:51 UTC |
	| start   | -p pause-617008 --memory=2048         | pause-617008              | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC | 30 Sep 24 20:53 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-188130 ssh cat     | force-systemd-flag-188130 | jenkins | v1.34.0 | 30 Sep 24 20:52 UTC | 30 Sep 24 20:52 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-188130          | force-systemd-flag-188130 | jenkins | v1.34.0 | 30 Sep 24 20:52 UTC | 30 Sep 24 20:52 UTC |
	| start   | -p auto-207733 --memory=3072          | auto-207733               | jenkins | v1.34.0 | 30 Sep 24 20:52 UTC | 30 Sep 24 20:54 UTC |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-810093          | kubernetes-upgrade-810093 | jenkins | v1.34.0 | 30 Sep 24 20:53 UTC | 30 Sep 24 20:53 UTC |
	| start   | -p kubernetes-upgrade-810093          | kubernetes-upgrade-810093 | jenkins | v1.34.0 | 30 Sep 24 20:53 UTC | 30 Sep 24 20:54 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-617008                       | pause-617008              | jenkins | v1.34.0 | 30 Sep 24 20:53 UTC | 30 Sep 24 20:54 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-810093          | kubernetes-upgrade-810093 | jenkins | v1.34.0 | 30 Sep 24 20:54 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-810093          | kubernetes-upgrade-810093 | jenkins | v1.34.0 | 30 Sep 24 20:54 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p auto-207733 pgrep -a               | auto-207733               | jenkins | v1.34.0 | 30 Sep 24 20:54 UTC | 30 Sep 24 20:54 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 20:54:16
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 20:54:16.408370   58524 out.go:345] Setting OutFile to fd 1 ...
	I0930 20:54:16.408487   58524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:54:16.408498   58524 out.go:358] Setting ErrFile to fd 2...
	I0930 20:54:16.408504   58524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:54:16.408716   58524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 20:54:16.409244   58524 out.go:352] Setting JSON to false
	I0930 20:54:16.410242   58524 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5799,"bootTime":1727723857,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 20:54:16.410344   58524 start.go:139] virtualization: kvm guest
	I0930 20:54:16.412177   58524 out.go:177] * [kubernetes-upgrade-810093] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 20:54:16.413449   58524 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 20:54:16.413477   58524 notify.go:220] Checking for updates...
	I0930 20:54:16.415972   58524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 20:54:16.417436   58524 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:54:16.418680   58524 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:54:16.420063   58524 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 20:54:16.421252   58524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 20:54:16.422871   58524 config.go:182] Loaded profile config "kubernetes-upgrade-810093": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:54:16.423248   58524 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:54:16.423327   58524 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:54:16.444261   58524 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38439
	I0930 20:54:16.444804   58524 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:54:16.445431   58524 main.go:141] libmachine: Using API Version  1
	I0930 20:54:16.445454   58524 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:54:16.445813   58524 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:54:16.446019   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:54:16.446279   58524 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 20:54:16.446639   58524 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:54:16.446682   58524 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:54:16.462674   58524 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33119
	I0930 20:54:16.463068   58524 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:54:16.463585   58524 main.go:141] libmachine: Using API Version  1
	I0930 20:54:16.463626   58524 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:54:16.463949   58524 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:54:16.464112   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:54:16.501910   58524 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 20:54:16.503399   58524 start.go:297] selected driver: kvm2
	I0930 20:54:16.503411   58524 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-810093 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-810093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:54:16.503563   58524 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 20:54:16.504236   58524 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 20:54:16.504345   58524 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 20:54:16.519579   58524 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 20:54:16.520125   58524 cni.go:84] Creating CNI manager for ""
	I0930 20:54:16.520191   58524 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 20:54:16.520257   58524 start.go:340] cluster config:
	{Name:kubernetes-upgrade-810093 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-810093 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:54:16.520392   58524 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 20:54:16.522479   58524 out.go:177] * Starting "kubernetes-upgrade-810093" primary control-plane node in "kubernetes-upgrade-810093" cluster
	I0930 20:54:16.523985   58524 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 20:54:16.524029   58524 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 20:54:16.524039   58524 cache.go:56] Caching tarball of preloaded images
	I0930 20:54:16.524152   58524 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 20:54:16.524170   58524 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
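The preload check in this second start follows the same pattern as before: it only looks for the per-version tarball on disk and skips the download when it is already cached. A minimal sketch of that existence check, using the path from the log:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path copied from the log above; on another machine it would differ.
	tarball := "/home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4"
	if info, err := os.Stat(tarball); err == nil {
		fmt.Printf("found local preload (%d bytes), skipping download\n", info.Size())
	} else {
		fmt.Println("preload not cached, would download:", err)
	}
}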
	I0930 20:54:16.524273   58524 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/config.json ...
	I0930 20:54:16.524463   58524 start.go:360] acquireMachinesLock for kubernetes-upgrade-810093: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 20:54:16.524517   58524 start.go:364] duration metric: took 31.247µs to acquireMachinesLock for "kubernetes-upgrade-810093"
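The "duration metric: took …" lines throughout this log are presumably produced by the usual Go timing pattern of recording a start time and printing time.Since when the phase ends; a minimal illustration, not minikube's code:

package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	// ... acquire the machines lock, fix the host, etc. (stand-in for the real work) ...
	time.Sleep(30 * time.Microsecond)
	fmt.Printf("duration metric: took %s to acquireMachinesLock\n", time.Since(start))
}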
	I0930 20:54:16.524536   58524 start.go:96] Skipping create...Using existing machine configuration
	I0930 20:54:16.524545   58524 fix.go:54] fixHost starting: 
	I0930 20:54:16.524819   58524 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:54:16.524859   58524 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:54:16.540027   58524 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37757
	I0930 20:54:16.540555   58524 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:54:16.541130   58524 main.go:141] libmachine: Using API Version  1
	I0930 20:54:16.541162   58524 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:54:16.541517   58524 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:54:16.541732   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:54:16.541886   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetState
	I0930 20:54:16.543781   58524 fix.go:112] recreateIfNeeded on kubernetes-upgrade-810093: state=Running err=<nil>
	W0930 20:54:16.543812   58524 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 20:54:16.545732   58524 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-810093" VM ...
	I0930 20:54:17.238822   57353 pod_ready.go:103] pod "coredns-7c65d6cfc9-tczkl" in "kube-system" namespace has status "Ready":"False"
	I0930 20:54:19.239781   57353 pod_ready.go:103] pod "coredns-7c65d6cfc9-tczkl" in "kube-system" namespace has status "Ready":"False"
	I0930 20:54:16.547009   58524 machine.go:93] provisionDockerMachine start ...
	I0930 20:54:16.547031   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:54:16.547247   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:54:16.550051   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:16.550531   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:16.550567   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:16.550708   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:54:16.550877   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:16.551015   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:16.551132   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:54:16.551313   58524 main.go:141] libmachine: Using SSH client type: native
	I0930 20:54:16.551524   58524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0930 20:54:16.551548   58524 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 20:54:16.681158   58524 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-810093
	
	I0930 20:54:16.681183   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetMachineName
	I0930 20:54:16.681411   58524 buildroot.go:166] provisioning hostname "kubernetes-upgrade-810093"
	I0930 20:54:16.681423   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetMachineName
	I0930 20:54:16.681567   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:54:16.684105   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:16.684427   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:16.684461   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:16.684701   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:54:16.684859   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:16.685002   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:16.685165   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:54:16.685296   58524 main.go:141] libmachine: Using SSH client type: native
	I0930 20:54:16.685515   58524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0930 20:54:16.685536   58524 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-810093 && echo "kubernetes-upgrade-810093" | sudo tee /etc/hostname
	I0930 20:54:16.821760   58524 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-810093
	
	I0930 20:54:16.821790   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:54:16.824746   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:16.825183   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:16.825216   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:16.825365   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:54:16.825555   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:16.825744   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:16.825891   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:54:16.826035   58524 main.go:141] libmachine: Using SSH client type: native
	I0930 20:54:16.826246   58524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0930 20:54:16.826264   58524 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-810093' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-810093/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-810093' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 20:54:16.940302   58524 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 20:54:16.940343   58524 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 20:54:16.940381   58524 buildroot.go:174] setting up certificates
	I0930 20:54:16.940400   58524 provision.go:84] configureAuth start
	I0930 20:54:16.940420   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetMachineName
	I0930 20:54:16.940758   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetIP
	I0930 20:54:16.943681   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:16.944187   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:16.944217   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:16.944449   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:54:16.946752   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:16.947067   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:16.947097   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:16.947182   58524 provision.go:143] copyHostCerts
	I0930 20:54:16.947244   58524 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 20:54:16.947257   58524 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:54:16.947324   58524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 20:54:16.947444   58524 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 20:54:16.947455   58524 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:54:16.947486   58524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 20:54:16.947610   58524 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 20:54:16.947622   58524 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:54:16.947654   58524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 20:54:16.947744   58524 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-810093 san=[127.0.0.1 192.168.39.233 kubernetes-upgrade-810093 localhost minikube]
	I0930 20:54:17.101285   58524 provision.go:177] copyRemoteCerts
	I0930 20:54:17.101372   58524 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 20:54:17.101401   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:54:17.104029   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:17.104356   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:17.104378   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:17.104613   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:54:17.104790   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:17.104952   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:54:17.105064   58524 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/kubernetes-upgrade-810093/id_rsa Username:docker}
	I0930 20:54:17.190481   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 20:54:17.214407   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 20:54:17.240048   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0930 20:54:17.262681   58524 provision.go:87] duration metric: took 322.262354ms to configureAuth
	I0930 20:54:17.262709   58524 buildroot.go:189] setting minikube options for container-runtime
	I0930 20:54:17.262878   58524 config.go:182] Loaded profile config "kubernetes-upgrade-810093": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:54:17.262941   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:54:17.265672   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:17.266089   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:17.266117   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:17.266363   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:54:17.266578   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:17.266774   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:17.266934   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:54:17.267101   58524 main.go:141] libmachine: Using SSH client type: native
	I0930 20:54:17.267260   58524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0930 20:54:17.267275   58524 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 20:54:21.739351   57353 pod_ready.go:103] pod "coredns-7c65d6cfc9-tczkl" in "kube-system" namespace has status "Ready":"False"
	I0930 20:54:24.240665   57353 pod_ready.go:103] pod "coredns-7c65d6cfc9-tczkl" in "kube-system" namespace has status "Ready":"False"
	I0930 20:54:23.933599   58154 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 d296639a17992e0aebc034d97462720012f027289d5c5493e42321b684af7f96 6df05d8d5b78e13b99c7f3d97ae1601970fdb43ec53a5ce16a7849989275e530 6f035fcf4f27669dc97bd27ea15da5fb8a5062c9f148aed60dc3d0994ffbbe1f a84693bd8b3d59922102b13e8ce27aa22c99328714d1c62d8213e829134de075 9a3af029d6ae839e2a040b471d3e46d5839ef9283eadc5cb750c9b32e8f31bed 9a8345e346ba02d50960ac01b8a2e6a59224ad5086873c2d86abd1ce3fd488e0 3dd5b47d40232da9e1c3db3b0a185514e55a7900b90f79ccf82e82e7f14574ef 5969820fe0736a00a318e9b88b2319ea55d502b90436a825975a302c6173eabb 1c05a5808f1c5162b4edabbe09b054deaf4e4132d32fb5043eecce39de72d79d 41abecf47c6f06d0bac09a7486a88e10860a7927ffef9d17eb26914150612dff 5cd7542053b678d02c4e06433340f520eb5c20afae4506e70b38a99eec3440ca aff264368368eaa98382498c90eb66582bf30743b58a08cb766ca025cf1abe7e: (20.506248045s)
	W0930 20:54:23.933689   58154 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 d296639a17992e0aebc034d97462720012f027289d5c5493e42321b684af7f96 6df05d8d5b78e13b99c7f3d97ae1601970fdb43ec53a5ce16a7849989275e530 6f035fcf4f27669dc97bd27ea15da5fb8a5062c9f148aed60dc3d0994ffbbe1f a84693bd8b3d59922102b13e8ce27aa22c99328714d1c62d8213e829134de075 9a3af029d6ae839e2a040b471d3e46d5839ef9283eadc5cb750c9b32e8f31bed 9a8345e346ba02d50960ac01b8a2e6a59224ad5086873c2d86abd1ce3fd488e0 3dd5b47d40232da9e1c3db3b0a185514e55a7900b90f79ccf82e82e7f14574ef 5969820fe0736a00a318e9b88b2319ea55d502b90436a825975a302c6173eabb 1c05a5808f1c5162b4edabbe09b054deaf4e4132d32fb5043eecce39de72d79d 41abecf47c6f06d0bac09a7486a88e10860a7927ffef9d17eb26914150612dff 5cd7542053b678d02c4e06433340f520eb5c20afae4506e70b38a99eec3440ca aff264368368eaa98382498c90eb66582bf30743b58a08cb766ca025cf1abe7e: Process exited with status 1
	stdout:
	d296639a17992e0aebc034d97462720012f027289d5c5493e42321b684af7f96
	6df05d8d5b78e13b99c7f3d97ae1601970fdb43ec53a5ce16a7849989275e530
	6f035fcf4f27669dc97bd27ea15da5fb8a5062c9f148aed60dc3d0994ffbbe1f
	a84693bd8b3d59922102b13e8ce27aa22c99328714d1c62d8213e829134de075
	9a3af029d6ae839e2a040b471d3e46d5839ef9283eadc5cb750c9b32e8f31bed
	9a8345e346ba02d50960ac01b8a2e6a59224ad5086873c2d86abd1ce3fd488e0
	
	stderr:
	E0930 20:54:23.923036    2779 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dd5b47d40232da9e1c3db3b0a185514e55a7900b90f79ccf82e82e7f14574ef\": container with ID starting with 3dd5b47d40232da9e1c3db3b0a185514e55a7900b90f79ccf82e82e7f14574ef not found: ID does not exist" containerID="3dd5b47d40232da9e1c3db3b0a185514e55a7900b90f79ccf82e82e7f14574ef"
	time="2024-09-30T20:54:23Z" level=fatal msg="stopping the container \"3dd5b47d40232da9e1c3db3b0a185514e55a7900b90f79ccf82e82e7f14574ef\": rpc error: code = NotFound desc = could not find container \"3dd5b47d40232da9e1c3db3b0a185514e55a7900b90f79ccf82e82e7f14574ef\": container with ID starting with 3dd5b47d40232da9e1c3db3b0a185514e55a7900b90f79ccf82e82e7f14574ef not found: ID does not exist"
	I0930 20:54:23.933756   58154 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 20:54:23.975089   58154 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 20:54:23.985435   58154 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Sep 30 20:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Sep 30 20:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Sep 30 20:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Sep 30 20:53 /etc/kubernetes/scheduler.conf
	
	I0930 20:54:23.985505   58154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 20:54:23.994760   58154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 20:54:24.003280   58154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 20:54:24.012427   58154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0930 20:54:24.012496   58154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 20:54:24.021201   58154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 20:54:24.031196   58154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0930 20:54:24.031260   58154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 20:54:24.041279   58154 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 20:54:24.051274   58154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 20:54:24.107678   58154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 20:54:24.995144   58154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 20:54:25.207853   58154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 20:54:25.277812   58154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 20:54:25.360328   58154 api_server.go:52] waiting for apiserver process to appear ...
	I0930 20:54:25.360418   58154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 20:54:25.860671   58154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 20:54:26.361137   58154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 20:54:26.378127   58154 api_server.go:72] duration metric: took 1.017796378s to wait for apiserver process to appear ...
	I0930 20:54:26.378154   58154 api_server.go:88] waiting for apiserver healthz status ...
	I0930 20:54:26.378188   58154 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0930 20:54:28.491915   58154 api_server.go:279] https://192.168.61.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 20:54:28.491961   58154 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 20:54:28.491975   58154 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0930 20:54:28.502941   58154 api_server.go:279] https://192.168.61.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 20:54:28.502969   58154 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 20:54:28.878438   58154 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0930 20:54:28.884801   58154 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 20:54:28.884834   58154 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 20:54:29.378360   58154 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0930 20:54:29.385945   58154 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 20:54:29.385975   58154 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 20:54:29.878611   58154 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0930 20:54:29.885680   58154 api_server.go:279] https://192.168.61.245:8443/healthz returned 200:
	ok
	I0930 20:54:29.894297   58154 api_server.go:141] control plane version: v1.31.1
	I0930 20:54:29.894332   58154 api_server.go:131] duration metric: took 3.516169713s to wait for apiserver health ...
	I0930 20:54:29.894342   58154 cni.go:84] Creating CNI manager for ""
	I0930 20:54:29.894350   58154 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 20:54:29.896194   58154 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 20:54:27.106159   58524 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 20:54:27.106208   58524 machine.go:96] duration metric: took 10.559182632s to provisionDockerMachine
	I0930 20:54:27.106224   58524 start.go:293] postStartSetup for "kubernetes-upgrade-810093" (driver="kvm2")
	I0930 20:54:27.106239   58524 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 20:54:27.106275   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:54:27.106615   58524 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 20:54:27.106642   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:54:27.109339   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:27.109735   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:27.109772   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:27.109964   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:54:27.110146   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:27.110306   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:54:27.110432   58524 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/kubernetes-upgrade-810093/id_rsa Username:docker}
	I0930 20:54:27.196865   58524 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 20:54:27.201655   58524 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 20:54:27.201686   58524 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 20:54:27.201767   58524 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 20:54:27.201860   58524 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 20:54:27.201979   58524 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 20:54:27.211444   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:54:27.239645   58524 start.go:296] duration metric: took 133.405084ms for postStartSetup
	I0930 20:54:27.239737   58524 fix.go:56] duration metric: took 10.715191174s for fixHost
	I0930 20:54:27.239766   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:54:27.242929   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:27.243280   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:27.243323   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:27.243464   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:54:27.243677   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:27.243796   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:27.243889   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:54:27.244002   58524 main.go:141] libmachine: Using SSH client type: native
	I0930 20:54:27.244201   58524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0930 20:54:27.244217   58524 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 20:54:27.361170   58524 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727729667.352637287
	
	I0930 20:54:27.361196   58524 fix.go:216] guest clock: 1727729667.352637287
	I0930 20:54:27.361224   58524 fix.go:229] Guest: 2024-09-30 20:54:27.352637287 +0000 UTC Remote: 2024-09-30 20:54:27.239746738 +0000 UTC m=+10.868526598 (delta=112.890549ms)
	I0930 20:54:27.361248   58524 fix.go:200] guest clock delta is within tolerance: 112.890549ms
	I0930 20:54:27.361254   58524 start.go:83] releasing machines lock for "kubernetes-upgrade-810093", held for 10.83672515s
	I0930 20:54:27.361285   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:54:27.361556   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetIP
	I0930 20:54:27.364452   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:27.364875   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:27.364910   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:27.365019   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:54:27.365552   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:54:27.365739   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:54:27.365861   58524 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 20:54:27.365903   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:54:27.366013   58524 ssh_runner.go:195] Run: cat /version.json
	I0930 20:54:27.366049   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:54:27.368953   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:27.369241   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:27.369418   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:27.369441   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:27.369627   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:54:27.369689   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:27.369723   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:27.369862   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:27.369971   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:54:27.370034   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:54:27.370102   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:27.370255   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:54:27.370251   58524 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/kubernetes-upgrade-810093/id_rsa Username:docker}
	I0930 20:54:27.370409   58524 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/kubernetes-upgrade-810093/id_rsa Username:docker}
	I0930 20:54:27.452636   58524 ssh_runner.go:195] Run: systemctl --version
	I0930 20:54:27.492180   58524 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 20:54:27.650320   58524 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 20:54:27.655994   58524 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 20:54:27.656071   58524 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 20:54:27.665179   58524 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0930 20:54:27.665206   58524 start.go:495] detecting cgroup driver to use...
	I0930 20:54:27.665272   58524 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 20:54:27.681182   58524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 20:54:27.695234   58524 docker.go:217] disabling cri-docker service (if available) ...
	I0930 20:54:27.695285   58524 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 20:54:27.709000   58524 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 20:54:27.731623   58524 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 20:54:28.008715   58524 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 20:54:28.262521   58524 docker.go:233] disabling docker service ...
	I0930 20:54:28.262616   58524 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 20:54:28.308118   58524 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 20:54:28.353775   58524 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 20:54:28.646774   58524 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 20:54:28.932430   58524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 20:54:28.980353   58524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 20:54:29.021273   58524 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 20:54:29.021724   58524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:54:29.056522   58524 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 20:54:29.056599   58524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:54:29.075013   58524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:54:29.123612   58524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:54:29.142264   58524 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 20:54:29.178750   58524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:54:29.223682   58524 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:54:29.282890   58524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:54:29.325240   58524 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 20:54:29.345470   58524 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 20:54:29.360498   58524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:54:29.612610   58524 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 20:54:30.333439   58524 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 20:54:30.333515   58524 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 20:54:30.338136   58524 start.go:563] Will wait 60s for crictl version
	I0930 20:54:30.338203   58524 ssh_runner.go:195] Run: which crictl
	I0930 20:54:30.341700   58524 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 20:54:30.377939   58524 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 20:54:30.378012   58524 ssh_runner.go:195] Run: crio --version
	I0930 20:54:30.406111   58524 ssh_runner.go:195] Run: crio --version
	I0930 20:54:30.438503   58524 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 20:54:26.739719   57353 pod_ready.go:103] pod "coredns-7c65d6cfc9-tczkl" in "kube-system" namespace has status "Ready":"False"
	I0930 20:54:28.739915   57353 pod_ready.go:103] pod "coredns-7c65d6cfc9-tczkl" in "kube-system" namespace has status "Ready":"False"
	I0930 20:54:30.241347   57353 pod_ready.go:93] pod "coredns-7c65d6cfc9-tczkl" in "kube-system" namespace has status "Ready":"True"
	I0930 20:54:30.241379   57353 pod_ready.go:82] duration metric: took 36.008617454s for pod "coredns-7c65d6cfc9-tczkl" in "kube-system" namespace to be "Ready" ...
	I0930 20:54:30.241391   57353 pod_ready.go:79] waiting up to 15m0s for pod "etcd-auto-207733" in "kube-system" namespace to be "Ready" ...
	I0930 20:54:30.247872   57353 pod_ready.go:93] pod "etcd-auto-207733" in "kube-system" namespace has status "Ready":"True"
	I0930 20:54:30.247947   57353 pod_ready.go:82] duration metric: took 6.5466ms for pod "etcd-auto-207733" in "kube-system" namespace to be "Ready" ...
	I0930 20:54:30.248041   57353 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-auto-207733" in "kube-system" namespace to be "Ready" ...
	I0930 20:54:30.256663   57353 pod_ready.go:93] pod "kube-apiserver-auto-207733" in "kube-system" namespace has status "Ready":"True"
	I0930 20:54:30.256689   57353 pod_ready.go:82] duration metric: took 8.617328ms for pod "kube-apiserver-auto-207733" in "kube-system" namespace to be "Ready" ...
	I0930 20:54:30.256700   57353 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-auto-207733" in "kube-system" namespace to be "Ready" ...
	I0930 20:54:30.261034   57353 pod_ready.go:93] pod "kube-controller-manager-auto-207733" in "kube-system" namespace has status "Ready":"True"
	I0930 20:54:30.261055   57353 pod_ready.go:82] duration metric: took 4.348056ms for pod "kube-controller-manager-auto-207733" in "kube-system" namespace to be "Ready" ...
	I0930 20:54:30.261064   57353 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-z2mt2" in "kube-system" namespace to be "Ready" ...
	I0930 20:54:30.265186   57353 pod_ready.go:93] pod "kube-proxy-z2mt2" in "kube-system" namespace has status "Ready":"True"
	I0930 20:54:30.265214   57353 pod_ready.go:82] duration metric: took 4.136042ms for pod "kube-proxy-z2mt2" in "kube-system" namespace to be "Ready" ...
	I0930 20:54:30.265223   57353 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-auto-207733" in "kube-system" namespace to be "Ready" ...
	I0930 20:54:30.637769   57353 pod_ready.go:93] pod "kube-scheduler-auto-207733" in "kube-system" namespace has status "Ready":"True"
	I0930 20:54:30.637797   57353 pod_ready.go:82] duration metric: took 372.567298ms for pod "kube-scheduler-auto-207733" in "kube-system" namespace to be "Ready" ...
	I0930 20:54:30.637807   57353 pod_ready.go:39] duration metric: took 38.438907918s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 20:54:30.637827   57353 api_server.go:52] waiting for apiserver process to appear ...
	I0930 20:54:30.637888   57353 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 20:54:30.654943   57353 api_server.go:72] duration metric: took 39.225945577s to wait for apiserver process to appear ...
	I0930 20:54:30.654976   57353 api_server.go:88] waiting for apiserver healthz status ...
	I0930 20:54:30.654999   57353 api_server.go:253] Checking apiserver healthz at https://192.168.72.4:8443/healthz ...
	I0930 20:54:30.660518   57353 api_server.go:279] https://192.168.72.4:8443/healthz returned 200:
	ok
	I0930 20:54:30.661642   57353 api_server.go:141] control plane version: v1.31.1
	I0930 20:54:30.661671   57353 api_server.go:131] duration metric: took 6.686785ms to wait for apiserver health ...
	I0930 20:54:30.661682   57353 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 20:54:30.840665   57353 system_pods.go:59] 7 kube-system pods found
	I0930 20:54:30.840691   57353 system_pods.go:61] "coredns-7c65d6cfc9-tczkl" [52468f44-63ba-4980-9cdf-baf67bab43e3] Running
	I0930 20:54:30.840696   57353 system_pods.go:61] "etcd-auto-207733" [c79ec68b-cf11-492d-9ee9-6ec3502e45a1] Running
	I0930 20:54:30.840700   57353 system_pods.go:61] "kube-apiserver-auto-207733" [9dad9b65-d298-4402-b782-84e66da90018] Running
	I0930 20:54:30.840703   57353 system_pods.go:61] "kube-controller-manager-auto-207733" [c4784ae3-59de-4fe4-b205-63482fb4197f] Running
	I0930 20:54:30.840706   57353 system_pods.go:61] "kube-proxy-z2mt2" [f997ca6c-9e4d-4a43-b3a4-ae1f3537dba8] Running
	I0930 20:54:30.840709   57353 system_pods.go:61] "kube-scheduler-auto-207733" [cc5e6cb5-feb1-4872-bab3-ed7a6479acdc] Running
	I0930 20:54:30.840712   57353 system_pods.go:61] "storage-provisioner" [25455db4-3b50-4f09-8636-d6cdc7d5fad6] Running
	I0930 20:54:30.840718   57353 system_pods.go:74] duration metric: took 179.029902ms to wait for pod list to return data ...
	I0930 20:54:30.840726   57353 default_sa.go:34] waiting for default service account to be created ...
	I0930 20:54:31.037209   57353 default_sa.go:45] found service account: "default"
	I0930 20:54:31.037234   57353 default_sa.go:55] duration metric: took 196.501819ms for default service account to be created ...
	I0930 20:54:31.037244   57353 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 20:54:31.241470   57353 system_pods.go:86] 7 kube-system pods found
	I0930 20:54:31.241510   57353 system_pods.go:89] "coredns-7c65d6cfc9-tczkl" [52468f44-63ba-4980-9cdf-baf67bab43e3] Running
	I0930 20:54:31.241521   57353 system_pods.go:89] "etcd-auto-207733" [c79ec68b-cf11-492d-9ee9-6ec3502e45a1] Running
	I0930 20:54:31.241529   57353 system_pods.go:89] "kube-apiserver-auto-207733" [9dad9b65-d298-4402-b782-84e66da90018] Running
	I0930 20:54:31.241536   57353 system_pods.go:89] "kube-controller-manager-auto-207733" [c4784ae3-59de-4fe4-b205-63482fb4197f] Running
	I0930 20:54:31.241542   57353 system_pods.go:89] "kube-proxy-z2mt2" [f997ca6c-9e4d-4a43-b3a4-ae1f3537dba8] Running
	I0930 20:54:31.241548   57353 system_pods.go:89] "kube-scheduler-auto-207733" [cc5e6cb5-feb1-4872-bab3-ed7a6479acdc] Running
	I0930 20:54:31.241555   57353 system_pods.go:89] "storage-provisioner" [25455db4-3b50-4f09-8636-d6cdc7d5fad6] Running
	I0930 20:54:31.241564   57353 system_pods.go:126] duration metric: took 204.31414ms to wait for k8s-apps to be running ...
	I0930 20:54:31.241575   57353 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 20:54:31.241641   57353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 20:54:31.262030   57353 system_svc.go:56] duration metric: took 20.447247ms WaitForService to wait for kubelet
	I0930 20:54:31.262064   57353 kubeadm.go:582] duration metric: took 39.833072087s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 20:54:31.262084   57353 node_conditions.go:102] verifying NodePressure condition ...
	I0930 20:54:30.439747   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetIP
	I0930 20:54:30.442412   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:30.442798   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:30.442821   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:30.443031   58524 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 20:54:30.447209   58524 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-810093 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-810093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 20:54:30.447323   58524 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 20:54:30.447390   58524 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 20:54:30.490340   58524 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 20:54:30.490370   58524 crio.go:433] Images already preloaded, skipping extraction
	I0930 20:54:30.490430   58524 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 20:54:30.532235   58524 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 20:54:30.532294   58524 cache_images.go:84] Images are preloaded, skipping loading
	I0930 20:54:30.532303   58524 kubeadm.go:934] updating node { 192.168.39.233 8443 v1.31.1 crio true true} ...
	I0930 20:54:30.532424   58524 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-810093 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-810093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 20:54:30.532496   58524 ssh_runner.go:195] Run: crio config
	I0930 20:54:30.581350   58524 cni.go:84] Creating CNI manager for ""
	I0930 20:54:30.581378   58524 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 20:54:30.581389   58524 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 20:54:30.581419   58524 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.233 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-810093 NodeName:kubernetes-upgrade-810093 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 20:54:30.581547   58524 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.233
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-810093"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.233
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 20:54:30.581602   58524 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 20:54:30.591728   58524 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 20:54:30.591798   58524 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 20:54:30.600603   58524 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0930 20:54:30.617718   58524 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 20:54:30.640781   58524 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
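The kubeadm.yaml.new copied to the node above is the multi-document config rendered a few lines earlier: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by "---". The following stdlib-only Go sketch lists the apiVersion/kind of each document in such a file; it is an illustration rather than minikube code, and the default path is simply the one named in the scp line.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path taken from the "scp memory --> /var/tmp/minikube/kubeadm.yaml.new" line above;
	// pass a different path as the first argument to inspect another rendered config.
	path := "/var/tmp/minikube/kubeadm.yaml.new"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}

	f, err := os.Open(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "open:", err)
		os.Exit(1)
	}
	defer f.Close()

	// Walk the file line by line, counting YAML documents (separated by "---")
	// and printing the apiVersion/kind of each one.
	doc := 1
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		switch {
		case line == "---":
			doc++
		case strings.HasPrefix(line, "apiVersion:"), strings.HasPrefix(line, "kind:"):
			fmt.Printf("document %d: %s\n", doc, line)
		}
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan:", err)
		os.Exit(1)
	}
}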
	I0930 20:54:30.658851   58524 ssh_runner.go:195] Run: grep 192.168.39.233	control-plane.minikube.internal$ /etc/hosts
	I0930 20:54:30.662848   58524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:54:30.819972   58524 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:54:30.837304   58524 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093 for IP: 192.168.39.233
	I0930 20:54:30.837331   58524 certs.go:194] generating shared ca certs ...
	I0930 20:54:30.837351   58524 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:54:30.837516   58524 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 20:54:30.837561   58524 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 20:54:30.837574   58524 certs.go:256] generating profile certs ...
	I0930 20:54:30.837671   58524 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/client.key
	I0930 20:54:30.837740   58524 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/apiserver.key.372be7b4
	I0930 20:54:30.837788   58524 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/proxy-client.key
	I0930 20:54:30.837957   58524 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 20:54:30.837994   58524 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 20:54:30.838002   58524 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 20:54:30.838035   58524 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 20:54:30.838083   58524 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 20:54:30.838118   58524 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 20:54:30.838178   58524 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:54:30.838969   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 20:54:30.865772   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 20:54:30.890599   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 20:54:30.915638   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 20:54:30.944893   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0930 20:54:30.972527   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 20:54:31.004423   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 20:54:31.102054   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 20:54:31.157884   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 20:54:31.297005   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 20:54:31.384812   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 20:54:31.438441   57353 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:54:31.438476   57353 node_conditions.go:123] node cpu capacity is 2
	I0930 20:54:31.438489   57353 node_conditions.go:105] duration metric: took 176.400025ms to run NodePressure ...
	I0930 20:54:31.438503   57353 start.go:241] waiting for startup goroutines ...
	I0930 20:54:31.438512   57353 start.go:246] waiting for cluster config update ...
	I0930 20:54:31.438526   57353 start.go:255] writing updated cluster config ...
	I0930 20:54:31.438882   57353 ssh_runner.go:195] Run: rm -f paused
	I0930 20:54:31.499345   57353 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 20:54:31.501096   57353 out.go:177] * Done! kubectl is now configured to use "auto-207733" cluster and "default" namespace by default
	I0930 20:54:29.898157   58154 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 20:54:29.910108   58154 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 20:54:29.941701   58154 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 20:54:29.941781   58154 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0930 20:54:29.941802   58154 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0930 20:54:29.960562   58154 system_pods.go:59] 6 kube-system pods found
	I0930 20:54:29.960603   58154 system_pods.go:61] "coredns-7c65d6cfc9-7jtvv" [217905ec-7c19-4f7f-93a2-a2d868627822] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 20:54:29.960614   58154 system_pods.go:61] "etcd-pause-617008" [c80a4218-bec8-45d2-be81-dccaefc8667e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0930 20:54:29.960627   58154 system_pods.go:61] "kube-apiserver-pause-617008" [d3d6691c-dfc5-499a-91ea-0a032963226b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0930 20:54:29.960636   58154 system_pods.go:61] "kube-controller-manager-pause-617008" [ea889f38-5e38-43b2-bb57-80c8021aa6b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0930 20:54:29.960642   58154 system_pods.go:61] "kube-proxy-mpb8x" [e1a0b9be-01bb-4b6d-ba51-f0982b68ef99] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0930 20:54:29.960650   58154 system_pods.go:61] "kube-scheduler-pause-617008" [c663602c-9191-4c37-8426-c1d0f489f981] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0930 20:54:29.960657   58154 system_pods.go:74] duration metric: took 18.934046ms to wait for pod list to return data ...
	I0930 20:54:29.960667   58154 node_conditions.go:102] verifying NodePressure condition ...
	I0930 20:54:29.966206   58154 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:54:29.966260   58154 node_conditions.go:123] node cpu capacity is 2
	I0930 20:54:29.966275   58154 node_conditions.go:105] duration metric: took 5.603521ms to run NodePressure ...
	I0930 20:54:29.966297   58154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 20:54:30.234915   58154 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0930 20:54:30.240125   58154 kubeadm.go:739] kubelet initialised
	I0930 20:54:30.240159   58154 kubeadm.go:740] duration metric: took 5.216089ms waiting for restarted kubelet to initialise ...
	I0930 20:54:30.240170   58154 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 20:54:30.245491   58154 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-7jtvv" in "kube-system" namespace to be "Ready" ...
	I0930 20:54:32.254513   58154 pod_ready.go:103] pod "coredns-7c65d6cfc9-7jtvv" in "kube-system" namespace has status "Ready":"False"
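The pod_ready lines above poll each system-critical pod until its Ready condition reports True or the 4m0s budget runs out. The sketch below shows roughly what such a check looks like with client-go; it is not minikube's pod_ready.go, the kubeconfig path is a placeholder, and the pod name is the coredns pod named in the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; substitute the kubeconfig of the profile under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute) // mirrors the "extra waiting up to 4m0s" above
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
			"coredns-7c65d6cfc9-7jtvv", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}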
	I0930 20:54:31.533047   58524 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 20:54:31.597876   58524 ssh_runner.go:195] Run: openssl version
	I0930 20:54:31.636045   58524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 20:54:31.661698   58524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 20:54:31.669043   58524 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 20:54:31.669103   58524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 20:54:31.678558   58524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 20:54:31.695384   58524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 20:54:31.714372   58524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:54:31.726942   58524 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:54:31.727005   58524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:54:31.741475   58524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 20:54:31.757991   58524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 20:54:31.797856   58524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 20:54:31.810783   58524 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 20:54:31.810848   58524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 20:54:31.824337   58524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 20:54:31.844720   58524 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 20:54:31.851869   58524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 20:54:31.860036   58524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 20:54:31.869642   58524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 20:54:31.883657   58524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 20:54:31.904877   58524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 20:54:31.917114   58524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
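Each of the openssl runs above uses -checkend 86400 to ask whether a certificate expires within the next 86400 seconds (24 hours), so minikube can regenerate any certificate that is about to expire before starting the cluster. A stdlib-only Go equivalent of that single test, for illustration only (the default path is one of the certs checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// One of the certificates checked above; pass another path as the first argument.
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found in", path)
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, "parse:", err)
		os.Exit(1)
	}

	// Equivalent of `openssl x509 -checkend 86400`: does the cert expire within 24h?
	cutoff := time.Now().Add(86400 * time.Second)
	if cert.NotAfter.Before(cutoff) {
		fmt.Printf("certificate %s expires within 24h (NotAfter: %s)\n", path, cert.NotAfter)
		os.Exit(1)
	}
	fmt.Printf("certificate %s is valid for at least 24h (NotAfter: %s)\n", path, cert.NotAfter)
}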
	I0930 20:54:31.930684   58524 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-810093 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.1 ClusterName:kubernetes-upgrade-810093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:54:31.930802   58524 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 20:54:31.930864   58524 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 20:54:32.021788   58524 cri.go:89] found id: "c50a6855e5a06b82abda38e7eee5fddd157ea694fed5ff2e3ae6a698d2e775aa"
	I0930 20:54:32.021813   58524 cri.go:89] found id: "001a67edfd9f5a38b0b3e7f99a5d666ab33b2bdeaa3c2dea2d16622669526006"
	I0930 20:54:32.021819   58524 cri.go:89] found id: "5fbaf5340219dd96a03e65f0db4a8f2425f3922473b49a407d6261c8b3081d6c"
	I0930 20:54:32.021825   58524 cri.go:89] found id: "67efcceedcc1938ff6f66b0ea6cc17e34bda9faafb53db30f477c06b6e67ce7e"
	I0930 20:54:32.021829   58524 cri.go:89] found id: "8a55eb55be7941855076f7edfd053f259d99b7260e868cee65a9ba01a4c35171"
	I0930 20:54:32.021833   58524 cri.go:89] found id: "7a5f2f3caa6596bdeec851c98c42549151d6a8fa20b96b21a5a850b7c0e5424c"
	I0930 20:54:32.021836   58524 cri.go:89] found id: "7b10de2fc402b0e70a6a180c0350151ea8f75757a4a692274cbf1cc58d95e9b9"
	I0930 20:54:32.021840   58524 cri.go:89] found id: "ebecc3e71f4007d0365b4c87543b72b9d2c5f5ba7d29835a0ce6761e88986df4"
	I0930 20:54:32.021845   58524 cri.go:89] found id: "10d74335ce942a9f9482799a8e322e5314127630a6cac376ae8923cf08c520db"
	I0930 20:54:32.021853   58524 cri.go:89] found id: ""
	I0930 20:54:32.021897   58524 ssh_runner.go:195] Run: sudo runc list -f json
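After resolving the kube-system container IDs with crictl, the runner dumps low-level runtime state with runc list -f json. A small Go sketch that shells out to the same command and decodes the output generically is shown below; the "id" and "status" key names are assumptions about runc's JSON output rather than something taken from this log.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Run the same command the log shows; requires root, so this is meant to be
	// executed on the minikube node itself.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "runc list:", err)
		os.Exit(1)
	}

	// Decode into generic maps rather than a fixed struct, since only a couple of
	// fields are needed here; "id" and "status" are assumed key names.
	var containers []map[string]interface{}
	if err := json.Unmarshal(out, &containers); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, c := range containers {
		fmt.Printf("%v\t%v\n", c["id"], c["status"])
	}
}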
	
	
	==> CRI-O <==
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.287835465Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727729686287813203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cab8be6a-509d-4740-a6ae-50feec54c4a2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.288562344Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24985059-6ded-47f0-9b18-e70ca93a7855 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.288651962Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24985059-6ded-47f0-9b18-e70ca93a7855 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.288898866Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7118ce479d2635a4e7550c31a9d22c2f863135d554de6a46c77aa7b8b1237af6,PodSandboxId:830f6c48dccb2bafa9a0da3f81fddcd59a40b6c83d432776800f1f6ad0f38394,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727729669638597851,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpb8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1a0b9be-01bb-4b6d-ba51-f0982b68ef99,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5a38406394d57b781d00f1c6949dda32f0bc7ab2d9c8022aa6525eeab5699fd,PodSandboxId:097125b65354f6bea232c4bc2eaa655ae9620557568c3ceca6dcd951b17896ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727729669643200532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7jtvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217905ec-7c19-4f7f-93a2-a2d868627822,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef070e92a1237334653cb1f54677b6077ac4c0716c930e4acbc64d66c2e718e3,PodSandboxId:6cf1ec062fe1568493be42df5db163eafe6645195f6f4ef353452b76e0858bfd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727729665820243636,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 1d4f47859e5a7d3a1cd398085690521f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5070854549a909ce0263bb405f59a9c881a9590d7303e44d3bcb7695008b3aa2,PodSandboxId:1a629caf531f7655e19484faadbda271ada22f8a4668adf899906322dbbe77ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727729665819105767,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
a10faa854dc67807f3807fae1a77827,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c406f1b55b82a495fbc3c253efe04199bd5a617299f3bd86ece750a5574c725,PodSandboxId:f47dc8cbc8ce8db9fd879d3b6b5038c43fcde9bc7377d1c8fd4a73b438ea7797,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727729665807579665,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5594b5e708f2affed72a7af2f99cca8e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84c76c04fe41849783f07c21abf853b99edbd2ea733aae494635d5daf54a2ac8,PodSandboxId:d045b6935a02b14ac6db9d19d29e69e901c3718237863737bb3419279a95a8ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727729665797040371,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0a77a92f96646fe47038b8dae296ffa,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6df05d8d5b78e13b99c7f3d97ae1601970fdb43ec53a5ce16a7849989275e530,PodSandboxId:830f6c48dccb2bafa9a0da3f81fddcd59a40b6c83d432776800f1f6ad0f38394,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727729642140241359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpb8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1a0b9be-01bb-4b6d-ba51-f0982b68ef99,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc
59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d296639a17992e0aebc034d97462720012f027289d5c5493e42321b684af7f96,PodSandboxId:097125b65354f6bea232c4bc2eaa655ae9620557568c3ceca6dcd951b17896ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727729642778719998,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7jtvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217905ec-7c19-4f7f-93a2-a2d868627822,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f035fcf4f27669dc97bd27ea15da5fb8a5062c9f148aed60dc3d0994ffbbe1f,PodSandboxId:d045b6935a02b14ac6db9d19d29e69e901c3718237863737bb3419279a95a8ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727729642138848066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0a77a92f96646fe47038b8dae296ffa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a84693bd8b3d59922102b13e8ce27aa22c99328714d1c62d8213e829134de075,PodSandboxId:f47dc8cbc8ce8db9fd879d3b6b5038c43fcde9bc7377d1c8fd4a73b438ea7797,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727729642117186921,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617008,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 5594b5e708f2affed72a7af2f99cca8e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a3af029d6ae839e2a040b471d3e46d5839ef9283eadc5cb750c9b32e8f31bed,PodSandboxId:6cf1ec062fe1568493be42df5db163eafe6645195f6f4ef353452b76e0858bfd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727729642093702904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617008,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 1d4f47859e5a7d3a1cd398085690521f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8345e346ba02d50960ac01b8a2e6a59224ad5086873c2d86abd1ce3fd488e0,PodSandboxId:1a629caf531f7655e19484faadbda271ada22f8a4668adf899906322dbbe77ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727729642044669077,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 0a10faa854dc67807f3807fae1a77827,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=24985059-6ded-47f0-9b18-e70ca93a7855 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.339009668Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7e589634-68e4-4547-92b6-1a6033deb09e name=/runtime.v1.RuntimeService/Version
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.339159893Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7e589634-68e4-4547-92b6-1a6033deb09e name=/runtime.v1.RuntimeService/Version
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.340583347Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4f3e268a-e90e-4ce2-8035-c2c045a85acb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.341019117Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727729686340990407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4f3e268a-e90e-4ce2-8035-c2c045a85acb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.341787105Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2fcd8eff-1d77-4ae8-b41b-336e37e9236c name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.341880507Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2fcd8eff-1d77-4ae8-b41b-336e37e9236c name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.342266467Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7118ce479d2635a4e7550c31a9d22c2f863135d554de6a46c77aa7b8b1237af6,PodSandboxId:830f6c48dccb2bafa9a0da3f81fddcd59a40b6c83d432776800f1f6ad0f38394,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727729669638597851,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpb8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1a0b9be-01bb-4b6d-ba51-f0982b68ef99,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5a38406394d57b781d00f1c6949dda32f0bc7ab2d9c8022aa6525eeab5699fd,PodSandboxId:097125b65354f6bea232c4bc2eaa655ae9620557568c3ceca6dcd951b17896ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727729669643200532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7jtvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217905ec-7c19-4f7f-93a2-a2d868627822,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef070e92a1237334653cb1f54677b6077ac4c0716c930e4acbc64d66c2e718e3,PodSandboxId:6cf1ec062fe1568493be42df5db163eafe6645195f6f4ef353452b76e0858bfd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727729665820243636,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 1d4f47859e5a7d3a1cd398085690521f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5070854549a909ce0263bb405f59a9c881a9590d7303e44d3bcb7695008b3aa2,PodSandboxId:1a629caf531f7655e19484faadbda271ada22f8a4668adf899906322dbbe77ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727729665819105767,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
a10faa854dc67807f3807fae1a77827,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c406f1b55b82a495fbc3c253efe04199bd5a617299f3bd86ece750a5574c725,PodSandboxId:f47dc8cbc8ce8db9fd879d3b6b5038c43fcde9bc7377d1c8fd4a73b438ea7797,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727729665807579665,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5594b5e708f2affed72a7af2f99cca8e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84c76c04fe41849783f07c21abf853b99edbd2ea733aae494635d5daf54a2ac8,PodSandboxId:d045b6935a02b14ac6db9d19d29e69e901c3718237863737bb3419279a95a8ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727729665797040371,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0a77a92f96646fe47038b8dae296ffa,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6df05d8d5b78e13b99c7f3d97ae1601970fdb43ec53a5ce16a7849989275e530,PodSandboxId:830f6c48dccb2bafa9a0da3f81fddcd59a40b6c83d432776800f1f6ad0f38394,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727729642140241359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpb8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1a0b9be-01bb-4b6d-ba51-f0982b68ef99,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc
59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d296639a17992e0aebc034d97462720012f027289d5c5493e42321b684af7f96,PodSandboxId:097125b65354f6bea232c4bc2eaa655ae9620557568c3ceca6dcd951b17896ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727729642778719998,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7jtvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217905ec-7c19-4f7f-93a2-a2d868627822,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f035fcf4f27669dc97bd27ea15da5fb8a5062c9f148aed60dc3d0994ffbbe1f,PodSandboxId:d045b6935a02b14ac6db9d19d29e69e901c3718237863737bb3419279a95a8ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727729642138848066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0a77a92f96646fe47038b8dae296ffa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a84693bd8b3d59922102b13e8ce27aa22c99328714d1c62d8213e829134de075,PodSandboxId:f47dc8cbc8ce8db9fd879d3b6b5038c43fcde9bc7377d1c8fd4a73b438ea7797,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727729642117186921,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617008,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 5594b5e708f2affed72a7af2f99cca8e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a3af029d6ae839e2a040b471d3e46d5839ef9283eadc5cb750c9b32e8f31bed,PodSandboxId:6cf1ec062fe1568493be42df5db163eafe6645195f6f4ef353452b76e0858bfd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727729642093702904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617008,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 1d4f47859e5a7d3a1cd398085690521f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8345e346ba02d50960ac01b8a2e6a59224ad5086873c2d86abd1ce3fd488e0,PodSandboxId:1a629caf531f7655e19484faadbda271ada22f8a4668adf899906322dbbe77ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727729642044669077,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 0a10faa854dc67807f3807fae1a77827,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2fcd8eff-1d77-4ae8-b41b-336e37e9236c name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.383487107Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=56f180af-cdfa-459a-b6a3-8b95e4e04463 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.383578300Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=56f180af-cdfa-459a-b6a3-8b95e4e04463 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.393323384Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ace40439-b06d-4b33-871e-df4c6f5277aa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.393710496Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727729686393685325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ace40439-b06d-4b33-871e-df4c6f5277aa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.394241882Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f908f2f5-b2e7-4929-9d7d-0e7559e23013 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.394338799Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f908f2f5-b2e7-4929-9d7d-0e7559e23013 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.394638171Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7118ce479d2635a4e7550c31a9d22c2f863135d554de6a46c77aa7b8b1237af6,PodSandboxId:830f6c48dccb2bafa9a0da3f81fddcd59a40b6c83d432776800f1f6ad0f38394,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727729669638597851,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpb8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1a0b9be-01bb-4b6d-ba51-f0982b68ef99,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5a38406394d57b781d00f1c6949dda32f0bc7ab2d9c8022aa6525eeab5699fd,PodSandboxId:097125b65354f6bea232c4bc2eaa655ae9620557568c3ceca6dcd951b17896ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727729669643200532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7jtvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217905ec-7c19-4f7f-93a2-a2d868627822,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef070e92a1237334653cb1f54677b6077ac4c0716c930e4acbc64d66c2e718e3,PodSandboxId:6cf1ec062fe1568493be42df5db163eafe6645195f6f4ef353452b76e0858bfd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727729665820243636,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 1d4f47859e5a7d3a1cd398085690521f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5070854549a909ce0263bb405f59a9c881a9590d7303e44d3bcb7695008b3aa2,PodSandboxId:1a629caf531f7655e19484faadbda271ada22f8a4668adf899906322dbbe77ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727729665819105767,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
a10faa854dc67807f3807fae1a77827,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c406f1b55b82a495fbc3c253efe04199bd5a617299f3bd86ece750a5574c725,PodSandboxId:f47dc8cbc8ce8db9fd879d3b6b5038c43fcde9bc7377d1c8fd4a73b438ea7797,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727729665807579665,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5594b5e708f2affed72a7af2f99cca8e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84c76c04fe41849783f07c21abf853b99edbd2ea733aae494635d5daf54a2ac8,PodSandboxId:d045b6935a02b14ac6db9d19d29e69e901c3718237863737bb3419279a95a8ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727729665797040371,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0a77a92f96646fe47038b8dae296ffa,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6df05d8d5b78e13b99c7f3d97ae1601970fdb43ec53a5ce16a7849989275e530,PodSandboxId:830f6c48dccb2bafa9a0da3f81fddcd59a40b6c83d432776800f1f6ad0f38394,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727729642140241359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpb8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1a0b9be-01bb-4b6d-ba51-f0982b68ef99,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc
59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d296639a17992e0aebc034d97462720012f027289d5c5493e42321b684af7f96,PodSandboxId:097125b65354f6bea232c4bc2eaa655ae9620557568c3ceca6dcd951b17896ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727729642778719998,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7jtvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217905ec-7c19-4f7f-93a2-a2d868627822,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f035fcf4f27669dc97bd27ea15da5fb8a5062c9f148aed60dc3d0994ffbbe1f,PodSandboxId:d045b6935a02b14ac6db9d19d29e69e901c3718237863737bb3419279a95a8ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727729642138848066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0a77a92f96646fe47038b8dae296ffa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a84693bd8b3d59922102b13e8ce27aa22c99328714d1c62d8213e829134de075,PodSandboxId:f47dc8cbc8ce8db9fd879d3b6b5038c43fcde9bc7377d1c8fd4a73b438ea7797,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727729642117186921,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617008,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 5594b5e708f2affed72a7af2f99cca8e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a3af029d6ae839e2a040b471d3e46d5839ef9283eadc5cb750c9b32e8f31bed,PodSandboxId:6cf1ec062fe1568493be42df5db163eafe6645195f6f4ef353452b76e0858bfd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727729642093702904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617008,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 1d4f47859e5a7d3a1cd398085690521f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8345e346ba02d50960ac01b8a2e6a59224ad5086873c2d86abd1ce3fd488e0,PodSandboxId:1a629caf531f7655e19484faadbda271ada22f8a4668adf899906322dbbe77ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727729642044669077,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 0a10faa854dc67807f3807fae1a77827,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f908f2f5-b2e7-4929-9d7d-0e7559e23013 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.438391694Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=38e23908-9b80-40d9-902b-4fd3740062bf name=/runtime.v1.RuntimeService/Version
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.438505630Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=38e23908-9b80-40d9-902b-4fd3740062bf name=/runtime.v1.RuntimeService/Version
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.440171166Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=12dbfe66-26fe-4dea-97f6-8565aebe367c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.440756233Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727729686440717815,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12dbfe66-26fe-4dea-97f6-8565aebe367c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.441610013Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8bb992de-28da-47f5-834c-c8e3c41bcf95 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.441685298Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8bb992de-28da-47f5-834c-c8e3c41bcf95 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:46 pause-617008 crio[2080]: time="2024-09-30 20:54:46.441974417Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7118ce479d2635a4e7550c31a9d22c2f863135d554de6a46c77aa7b8b1237af6,PodSandboxId:830f6c48dccb2bafa9a0da3f81fddcd59a40b6c83d432776800f1f6ad0f38394,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727729669638597851,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpb8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1a0b9be-01bb-4b6d-ba51-f0982b68ef99,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5a38406394d57b781d00f1c6949dda32f0bc7ab2d9c8022aa6525eeab5699fd,PodSandboxId:097125b65354f6bea232c4bc2eaa655ae9620557568c3ceca6dcd951b17896ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727729669643200532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7jtvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217905ec-7c19-4f7f-93a2-a2d868627822,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef070e92a1237334653cb1f54677b6077ac4c0716c930e4acbc64d66c2e718e3,PodSandboxId:6cf1ec062fe1568493be42df5db163eafe6645195f6f4ef353452b76e0858bfd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727729665820243636,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 1d4f47859e5a7d3a1cd398085690521f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5070854549a909ce0263bb405f59a9c881a9590d7303e44d3bcb7695008b3aa2,PodSandboxId:1a629caf531f7655e19484faadbda271ada22f8a4668adf899906322dbbe77ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727729665819105767,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
a10faa854dc67807f3807fae1a77827,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c406f1b55b82a495fbc3c253efe04199bd5a617299f3bd86ece750a5574c725,PodSandboxId:f47dc8cbc8ce8db9fd879d3b6b5038c43fcde9bc7377d1c8fd4a73b438ea7797,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727729665807579665,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5594b5e708f2affed72a7af2f99cca8e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84c76c04fe41849783f07c21abf853b99edbd2ea733aae494635d5daf54a2ac8,PodSandboxId:d045b6935a02b14ac6db9d19d29e69e901c3718237863737bb3419279a95a8ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727729665797040371,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0a77a92f96646fe47038b8dae296ffa,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6df05d8d5b78e13b99c7f3d97ae1601970fdb43ec53a5ce16a7849989275e530,PodSandboxId:830f6c48dccb2bafa9a0da3f81fddcd59a40b6c83d432776800f1f6ad0f38394,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727729642140241359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpb8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1a0b9be-01bb-4b6d-ba51-f0982b68ef99,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc
59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d296639a17992e0aebc034d97462720012f027289d5c5493e42321b684af7f96,PodSandboxId:097125b65354f6bea232c4bc2eaa655ae9620557568c3ceca6dcd951b17896ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727729642778719998,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7jtvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217905ec-7c19-4f7f-93a2-a2d868627822,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f035fcf4f27669dc97bd27ea15da5fb8a5062c9f148aed60dc3d0994ffbbe1f,PodSandboxId:d045b6935a02b14ac6db9d19d29e69e901c3718237863737bb3419279a95a8ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727729642138848066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0a77a92f96646fe47038b8dae296ffa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a84693bd8b3d59922102b13e8ce27aa22c99328714d1c62d8213e829134de075,PodSandboxId:f47dc8cbc8ce8db9fd879d3b6b5038c43fcde9bc7377d1c8fd4a73b438ea7797,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727729642117186921,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617008,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 5594b5e708f2affed72a7af2f99cca8e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a3af029d6ae839e2a040b471d3e46d5839ef9283eadc5cb750c9b32e8f31bed,PodSandboxId:6cf1ec062fe1568493be42df5db163eafe6645195f6f4ef353452b76e0858bfd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727729642093702904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617008,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 1d4f47859e5a7d3a1cd398085690521f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8345e346ba02d50960ac01b8a2e6a59224ad5086873c2d86abd1ce3fd488e0,PodSandboxId:1a629caf531f7655e19484faadbda271ada22f8a4668adf899906322dbbe77ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727729642044669077,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 0a10faa854dc67807f3807fae1a77827,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8bb992de-28da-47f5-834c-c8e3c41bcf95 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b5a38406394d5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 seconds ago      Running             coredns                   2                   097125b65354f       coredns-7c65d6cfc9-7jtvv
	7118ce479d263       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   16 seconds ago      Running             kube-proxy                2                   830f6c48dccb2       kube-proxy-mpb8x
	ef070e92a1237       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   20 seconds ago      Running             kube-controller-manager   2                   6cf1ec062fe15       kube-controller-manager-pause-617008
	5070854549a90       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   20 seconds ago      Running             kube-apiserver            2                   1a629caf531f7       kube-apiserver-pause-617008
	1c406f1b55b82       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   20 seconds ago      Running             etcd                      2                   f47dc8cbc8ce8       etcd-pause-617008
	84c76c04fe418       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   20 seconds ago      Running             kube-scheduler            2                   d045b6935a02b       kube-scheduler-pause-617008
	d296639a17992       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   43 seconds ago      Exited              coredns                   1                   097125b65354f       coredns-7c65d6cfc9-7jtvv
	6df05d8d5b78e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   44 seconds ago      Exited              kube-proxy                1                   830f6c48dccb2       kube-proxy-mpb8x
	6f035fcf4f276       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   44 seconds ago      Exited              kube-scheduler            1                   d045b6935a02b       kube-scheduler-pause-617008
	a84693bd8b3d5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   44 seconds ago      Exited              etcd                      1                   f47dc8cbc8ce8       etcd-pause-617008
	9a3af029d6ae8       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   44 seconds ago      Exited              kube-controller-manager   1                   6cf1ec062fe15       kube-controller-manager-pause-617008
	9a8345e346ba0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   44 seconds ago      Exited              kube-apiserver            1                   1a629caf531f7       kube-apiserver-pause-617008
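	A minimal sketch of how the container listing above can be reproduced on the node (the profile name pause-617008 and passwordless sudo are assumptions):
	
	  # list all CRI-O containers, including the Exited attempt-1 ones
	  minikube -p pause-617008 ssh -- sudo crictl ps -a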
	
	
	==> coredns [b5a38406394d57b781d00f1c6949dda32f0bc7ab2d9c8022aa6525eeab5699fd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38314 - 62705 "HINFO IN 2204494186670646042.879768081321963679. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.018203503s
	
	
	==> coredns [d296639a17992e0aebc034d97462720012f027289d5c5493e42321b684af7f96] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:52321 - 34001 "HINFO IN 5517243117269935179.7094196997823424969. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010455896s
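	The exited coredns output above is from the attempt-1 container; a sketch of how the equivalent could be pulled through kubectl, assuming the previous container instance is still retained and the context name matches the profile:
	
	  kubectl --context pause-617008 -n kube-system logs coredns-7c65d6cfc9-7jtvv --previous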
	
	
	==> describe nodes <==
	Name:               pause-617008
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-617008
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=pause-617008
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T20_53_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:53:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-617008
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:54:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:54:28 +0000   Mon, 30 Sep 2024 20:53:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:54:28 +0000   Mon, 30 Sep 2024 20:53:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:54:28 +0000   Mon, 30 Sep 2024 20:53:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:54:28 +0000   Mon, 30 Sep 2024 20:53:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.245
	  Hostname:    pause-617008
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 4b8c8ed9110b44729164b24a1bed867e
	  System UUID:                4b8c8ed9-110b-4472-9164-b24a1bed867e
	  Boot ID:                    d897aff9-f75f-466e-820f-0921e36f5ff6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-7jtvv                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     82s
	  kube-system                 etcd-pause-617008                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         87s
	  kube-system                 kube-apiserver-pause-617008             250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-pause-617008    200m (10%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-mpb8x                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-scheduler-pause-617008             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 80s                kube-proxy       
	  Normal  Starting                 16s                kube-proxy       
	  Normal  Starting                 39s                kube-proxy       
	  Normal  Starting                 87s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  87s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  87s                kubelet          Node pause-617008 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s                kubelet          Node pause-617008 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s                kubelet          Node pause-617008 status is now: NodeHasSufficientPID
	  Normal  NodeReady                86s                kubelet          Node pause-617008 status is now: NodeReady
	  Normal  RegisteredNode           83s                node-controller  Node pause-617008 event: Registered Node pause-617008 in Controller
	  Normal  RegisteredNode           37s                node-controller  Node pause-617008 event: Registered Node pause-617008 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node pause-617008 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node pause-617008 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node pause-617008 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14s                node-controller  Node pause-617008 event: Registered Node pause-617008 in Controller
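	The node description above is plain `kubectl describe node` output; a sketch of the command that would regenerate it (the context name matching the profile is an assumption):
	
	  kubectl --context pause-617008 describe node pause-617008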
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep30 20:53] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.072884] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058885] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.203009] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.121454] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.283580] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.987717] systemd-fstab-generator[749]: Ignoring "noauto" option for root device
	[  +4.886427] systemd-fstab-generator[886]: Ignoring "noauto" option for root device
	[  +0.063602] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.495300] systemd-fstab-generator[1223]: Ignoring "noauto" option for root device
	[  +0.094466] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.732538] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[  +0.921452] kauditd_printk_skb: 43 callbacks suppressed
	[ +34.437837] systemd-fstab-generator[2003]: Ignoring "noauto" option for root device
	[  +0.088962] kauditd_printk_skb: 45 callbacks suppressed
	[  +0.087541] systemd-fstab-generator[2015]: Ignoring "noauto" option for root device
	[  +0.181814] systemd-fstab-generator[2029]: Ignoring "noauto" option for root device
	[  +0.138112] systemd-fstab-generator[2042]: Ignoring "noauto" option for root device
	[  +0.267697] systemd-fstab-generator[2070]: Ignoring "noauto" option for root device
	[Sep30 20:54] systemd-fstab-generator[2269]: Ignoring "noauto" option for root device
	[  +5.166793] kauditd_printk_skb: 195 callbacks suppressed
	[ +18.133791] systemd-fstab-generator[3120]: Ignoring "noauto" option for root device
	[  +8.546103] kauditd_printk_skb: 46 callbacks suppressed
	[  +9.205476] systemd-fstab-generator[3551]: Ignoring "noauto" option for root device
	
	
	==> etcd [1c406f1b55b82a495fbc3c253efe04199bd5a617299f3bd86ece750a5574c725] <==
	{"level":"info","ts":"2024-09-30T20:54:26.083697Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-30T20:54:26.083721Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-30T20:54:26.083741Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-30T20:54:26.083914Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.245:2380"}
	{"level":"info","ts":"2024-09-30T20:54:26.083922Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.245:2380"}
	{"level":"info","ts":"2024-09-30T20:54:26.085307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1c098f17cdf2fc3 switched to configuration voters=(16267170017011773379)"}
	{"level":"info","ts":"2024-09-30T20:54:26.087033Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f0ed87b681c0ac99","local-member-id":"e1c098f17cdf2fc3","added-peer-id":"e1c098f17cdf2fc3","added-peer-peer-urls":["https://192.168.61.245:2380"]}
	{"level":"info","ts":"2024-09-30T20:54:26.087312Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f0ed87b681c0ac99","local-member-id":"e1c098f17cdf2fc3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T20:54:26.087946Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T20:54:27.156745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1c098f17cdf2fc3 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-30T20:54:27.156861Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1c098f17cdf2fc3 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-30T20:54:27.156921Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1c098f17cdf2fc3 received MsgPreVoteResp from e1c098f17cdf2fc3 at term 3"}
	{"level":"info","ts":"2024-09-30T20:54:27.156956Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1c098f17cdf2fc3 became candidate at term 4"}
	{"level":"info","ts":"2024-09-30T20:54:27.156981Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1c098f17cdf2fc3 received MsgVoteResp from e1c098f17cdf2fc3 at term 4"}
	{"level":"info","ts":"2024-09-30T20:54:27.157008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1c098f17cdf2fc3 became leader at term 4"}
	{"level":"info","ts":"2024-09-30T20:54:27.157034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e1c098f17cdf2fc3 elected leader e1c098f17cdf2fc3 at term 4"}
	{"level":"info","ts":"2024-09-30T20:54:27.161631Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e1c098f17cdf2fc3","local-member-attributes":"{Name:pause-617008 ClientURLs:[https://192.168.61.245:2379]}","request-path":"/0/members/e1c098f17cdf2fc3/attributes","cluster-id":"f0ed87b681c0ac99","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T20:54:27.161874Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T20:54:27.162768Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T20:54:27.163122Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T20:54:27.163161Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-30T20:54:27.163172Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T20:54:27.163688Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-30T20:54:27.165573Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T20:54:27.166329Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.245:2379"}
	
	
	==> etcd [a84693bd8b3d59922102b13e8ce27aa22c99328714d1c62d8213e829134de075] <==
	{"level":"info","ts":"2024-09-30T20:54:04.208595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1c098f17cdf2fc3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-30T20:54:04.208611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1c098f17cdf2fc3 received MsgPreVoteResp from e1c098f17cdf2fc3 at term 2"}
	{"level":"info","ts":"2024-09-30T20:54:04.208627Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1c098f17cdf2fc3 became candidate at term 3"}
	{"level":"info","ts":"2024-09-30T20:54:04.208660Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1c098f17cdf2fc3 received MsgVoteResp from e1c098f17cdf2fc3 at term 3"}
	{"level":"info","ts":"2024-09-30T20:54:04.208679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1c098f17cdf2fc3 became leader at term 3"}
	{"level":"info","ts":"2024-09-30T20:54:04.208699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e1c098f17cdf2fc3 elected leader e1c098f17cdf2fc3 at term 3"}
	{"level":"info","ts":"2024-09-30T20:54:04.210453Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e1c098f17cdf2fc3","local-member-attributes":"{Name:pause-617008 ClientURLs:[https://192.168.61.245:2379]}","request-path":"/0/members/e1c098f17cdf2fc3/attributes","cluster-id":"f0ed87b681c0ac99","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T20:54:04.210684Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T20:54:04.211110Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T20:54:04.211858Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T20:54:04.212809Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-30T20:54:04.213565Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T20:54:04.214134Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T20:54:04.214161Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-30T20:54:04.222459Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.245:2379"}
	{"level":"info","ts":"2024-09-30T20:54:13.530740Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-30T20:54:13.530783Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-617008","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.245:2380"],"advertise-client-urls":["https://192.168.61.245:2379"]}
	{"level":"warn","ts":"2024-09-30T20:54:13.530878Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T20:54:13.530990Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T20:54:13.549686Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.245:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T20:54:13.549735Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.245:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-30T20:54:13.551162Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e1c098f17cdf2fc3","current-leader-member-id":"e1c098f17cdf2fc3"}
	{"level":"info","ts":"2024-09-30T20:54:13.554509Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.61.245:2380"}
	{"level":"info","ts":"2024-09-30T20:54:13.554604Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.61.245:2380"}
	{"level":"info","ts":"2024-09-30T20:54:13.554614Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-617008","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.245:2380"],"advertise-client-urls":["https://192.168.61.245:2379"]}
	
	
	==> kernel <==
	 20:54:46 up 2 min,  0 users,  load average: 0.61, 0.31, 0.12
	Linux pause-617008 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5070854549a909ce0263bb405f59a9c881a9590d7303e44d3bcb7695008b3aa2] <==
	I0930 20:54:28.558796       1 shared_informer.go:320] Caches are synced for configmaps
	I0930 20:54:28.561266       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0930 20:54:28.561783       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0930 20:54:28.574668       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0930 20:54:28.575699       1 aggregator.go:171] initial CRD sync complete...
	I0930 20:54:28.575730       1 autoregister_controller.go:144] Starting autoregister controller
	I0930 20:54:28.575739       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0930 20:54:28.575746       1 cache.go:39] Caches are synced for autoregister controller
	I0930 20:54:28.585627       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0930 20:54:28.585753       1 policy_source.go:224] refreshing policies
	I0930 20:54:28.593535       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0930 20:54:28.593891       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0930 20:54:28.610733       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0930 20:54:28.636571       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0930 20:54:28.641186       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0930 20:54:28.641218       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0930 20:54:28.646489       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0930 20:54:29.445431       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0930 20:54:30.095929       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0930 20:54:30.111710       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0930 20:54:30.146736       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0930 20:54:30.185551       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0930 20:54:30.193932       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0930 20:54:33.624807       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0930 20:54:33.628339       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [9a8345e346ba02d50960ac01b8a2e6a59224ad5086873c2d86abd1ce3fd488e0] <==
	W0930 20:54:22.845207       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:22.869200       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:22.891175       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:22.918443       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:22.938317       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:22.955166       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.030605       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.038308       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.104801       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.116430       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.127853       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.170042       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.195491       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.234718       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.260521       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.319217       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.344015       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.351584       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.412302       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.428607       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.465443       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.482615       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.537332       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.708862       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.843466       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [9a3af029d6ae839e2a040b471d3e46d5839ef9283eadc5cb750c9b32e8f31bed] <==
	I0930 20:54:09.008399       1 shared_informer.go:320] Caches are synced for node
	I0930 20:54:09.008469       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0930 20:54:09.008487       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0930 20:54:09.008492       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0930 20:54:09.008497       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0930 20:54:09.008581       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-617008"
	I0930 20:54:09.010109       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0930 20:54:09.011309       1 shared_informer.go:320] Caches are synced for HPA
	I0930 20:54:09.021539       1 shared_informer.go:320] Caches are synced for job
	I0930 20:54:09.027967       1 shared_informer.go:320] Caches are synced for cronjob
	I0930 20:54:09.038283       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0930 20:54:09.051702       1 shared_informer.go:320] Caches are synced for daemon sets
	I0930 20:54:09.054096       1 shared_informer.go:320] Caches are synced for taint
	I0930 20:54:09.054279       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0930 20:54:09.054359       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-617008"
	I0930 20:54:09.054460       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0930 20:54:09.102111       1 shared_informer.go:320] Caches are synced for deployment
	I0930 20:54:09.142678       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0930 20:54:09.143032       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="103.287µs"
	I0930 20:54:09.157902       1 shared_informer.go:320] Caches are synced for disruption
	I0930 20:54:09.180875       1 shared_informer.go:320] Caches are synced for resource quota
	I0930 20:54:09.222511       1 shared_informer.go:320] Caches are synced for resource quota
	I0930 20:54:09.617230       1 shared_informer.go:320] Caches are synced for garbage collector
	I0930 20:54:09.617511       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0930 20:54:09.647921       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [ef070e92a1237334653cb1f54677b6077ac4c0716c930e4acbc64d66c2e718e3] <==
	I0930 20:54:32.014663       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0930 20:54:32.014803       1 shared_informer.go:320] Caches are synced for daemon sets
	I0930 20:54:32.015281       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-617008"
	I0930 20:54:32.017181       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0930 20:54:32.023513       1 shared_informer.go:320] Caches are synced for TTL
	I0930 20:54:32.034792       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0930 20:54:32.058120       1 shared_informer.go:320] Caches are synced for GC
	I0930 20:54:32.062905       1 shared_informer.go:320] Caches are synced for attach detach
	I0930 20:54:32.066151       1 shared_informer.go:320] Caches are synced for persistent volume
	I0930 20:54:32.068035       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0930 20:54:32.070367       1 shared_informer.go:320] Caches are synced for node
	I0930 20:54:32.070479       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0930 20:54:32.070587       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0930 20:54:32.070611       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0930 20:54:32.070683       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0930 20:54:32.070892       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-617008"
	I0930 20:54:32.113367       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0930 20:54:32.170521       1 shared_informer.go:320] Caches are synced for endpoint
	I0930 20:54:32.182999       1 shared_informer.go:320] Caches are synced for resource quota
	I0930 20:54:32.222269       1 shared_informer.go:320] Caches are synced for resource quota
	I0930 20:54:32.639454       1 shared_informer.go:320] Caches are synced for garbage collector
	I0930 20:54:32.642860       1 shared_informer.go:320] Caches are synced for garbage collector
	I0930 20:54:32.642959       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0930 20:54:33.640570       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="33.160597ms"
	I0930 20:54:33.640683       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="54.762µs"
	
	
	==> kube-proxy [6df05d8d5b78e13b99c7f3d97ae1601970fdb43ec53a5ce16a7849989275e530] <==
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 20:54:03.797606       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 20:54:05.714405       1 server.go:666] "Failed to retrieve node info" err="nodes \"pause-617008\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found]"
	I0930 20:54:06.770936       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.245"]
	E0930 20:54:06.771016       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 20:54:06.806765       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 20:54:06.806805       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 20:54:06.806831       1 server_linux.go:169] "Using iptables Proxier"
	I0930 20:54:06.809895       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 20:54:06.810720       1 server.go:483] "Version info" version="v1.31.1"
	I0930 20:54:06.810801       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:54:06.812767       1 config.go:199] "Starting service config controller"
	I0930 20:54:06.812865       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 20:54:06.813031       1 config.go:328] "Starting node config controller"
	I0930 20:54:06.813126       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 20:54:06.813472       1 config.go:105] "Starting endpoint slice config controller"
	I0930 20:54:06.813495       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 20:54:06.913871       1 shared_informer.go:320] Caches are synced for service config
	I0930 20:54:06.913886       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 20:54:06.914045       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [7118ce479d2635a4e7550c31a9d22c2f863135d554de6a46c77aa7b8b1237af6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 20:54:29.850119       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 20:54:29.861919       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.245"]
	E0930 20:54:29.862032       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 20:54:29.906548       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 20:54:29.906632       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 20:54:29.906665       1 server_linux.go:169] "Using iptables Proxier"
	I0930 20:54:29.912958       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 20:54:29.913545       1 server.go:483] "Version info" version="v1.31.1"
	I0930 20:54:29.913794       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:54:29.916293       1 config.go:199] "Starting service config controller"
	I0930 20:54:29.916342       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 20:54:29.916417       1 config.go:105] "Starting endpoint slice config controller"
	I0930 20:54:29.916440       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 20:54:29.917022       1 config.go:328] "Starting node config controller"
	I0930 20:54:29.919122       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 20:54:30.016631       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 20:54:30.016714       1 shared_informer.go:320] Caches are synced for service config
	I0930 20:54:30.019720       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6f035fcf4f27669dc97bd27ea15da5fb8a5062c9f148aed60dc3d0994ffbbe1f] <==
	E0930 20:54:05.714300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 20:54:05.713457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0930 20:54:05.714412       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 20:54:05.713516       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0930 20:54:05.714700       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 20:54:05.713577       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0930 20:54:05.714858       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 20:54:05.713638       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0930 20:54:05.717221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 20:54:05.713699       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0930 20:54:05.717288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 20:54:05.713749       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0930 20:54:05.717356       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0930 20:54:05.713793       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0930 20:54:05.717409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 20:54:05.713832       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0930 20:54:05.717462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 20:54:05.713876       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0930 20:54:05.717529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 20:54:05.713914       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0930 20:54:05.717585       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 20:54:05.713957       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0930 20:54:05.717660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0930 20:54:07.063879       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0930 20:54:13.491329       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [84c76c04fe41849783f07c21abf853b99edbd2ea733aae494635d5daf54a2ac8] <==
	I0930 20:54:26.674824       1 serving.go:386] Generated self-signed cert in-memory
	W0930 20:54:28.510759       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0930 20:54:28.510834       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0930 20:54:28.510844       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0930 20:54:28.510853       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0930 20:54:28.605439       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0930 20:54:28.605472       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:54:28.610515       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0930 20:54:28.615152       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0930 20:54:28.615208       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0930 20:54:28.615248       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0930 20:54:28.715543       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 20:54:25 pause-617008 kubelet[3127]: I0930 20:54:25.702590    3127 kubelet_node_status.go:72] "Attempting to register node" node="pause-617008"
	Sep 30 20:54:25 pause-617008 kubelet[3127]: E0930 20:54:25.703593    3127 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.245:8443: connect: connection refused" node="pause-617008"
	Sep 30 20:54:25 pause-617008 kubelet[3127]: I0930 20:54:25.775391    3127 scope.go:117] "RemoveContainer" containerID="6f035fcf4f27669dc97bd27ea15da5fb8a5062c9f148aed60dc3d0994ffbbe1f"
	Sep 30 20:54:25 pause-617008 kubelet[3127]: I0930 20:54:25.775674    3127 scope.go:117] "RemoveContainer" containerID="9a3af029d6ae839e2a040b471d3e46d5839ef9283eadc5cb750c9b32e8f31bed"
	Sep 30 20:54:25 pause-617008 kubelet[3127]: I0930 20:54:25.776976    3127 scope.go:117] "RemoveContainer" containerID="a84693bd8b3d59922102b13e8ce27aa22c99328714d1c62d8213e829134de075"
	Sep 30 20:54:25 pause-617008 kubelet[3127]: I0930 20:54:25.780608    3127 scope.go:117] "RemoveContainer" containerID="9a8345e346ba02d50960ac01b8a2e6a59224ad5086873c2d86abd1ce3fd488e0"
	Sep 30 20:54:25 pause-617008 kubelet[3127]: E0930 20:54:25.947429    3127 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-617008?timeout=10s\": dial tcp 192.168.61.245:8443: connect: connection refused" interval="800ms"
	Sep 30 20:54:26 pause-617008 kubelet[3127]: I0930 20:54:26.105693    3127 kubelet_node_status.go:72] "Attempting to register node" node="pause-617008"
	Sep 30 20:54:26 pause-617008 kubelet[3127]: E0930 20:54:26.106695    3127 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.245:8443: connect: connection refused" node="pause-617008"
	Sep 30 20:54:26 pause-617008 kubelet[3127]: I0930 20:54:26.908729    3127 kubelet_node_status.go:72] "Attempting to register node" node="pause-617008"
	Sep 30 20:54:28 pause-617008 kubelet[3127]: I0930 20:54:28.691274    3127 kubelet_node_status.go:111] "Node was previously registered" node="pause-617008"
	Sep 30 20:54:28 pause-617008 kubelet[3127]: I0930 20:54:28.691750    3127 kubelet_node_status.go:75] "Successfully registered node" node="pause-617008"
	Sep 30 20:54:28 pause-617008 kubelet[3127]: I0930 20:54:28.691879    3127 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 30 20:54:28 pause-617008 kubelet[3127]: I0930 20:54:28.693137    3127 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 30 20:54:29 pause-617008 kubelet[3127]: I0930 20:54:29.309462    3127 apiserver.go:52] "Watching apiserver"
	Sep 30 20:54:29 pause-617008 kubelet[3127]: I0930 20:54:29.330580    3127 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 30 20:54:29 pause-617008 kubelet[3127]: I0930 20:54:29.385236    3127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1a0b9be-01bb-4b6d-ba51-f0982b68ef99-xtables-lock\") pod \"kube-proxy-mpb8x\" (UID: \"e1a0b9be-01bb-4b6d-ba51-f0982b68ef99\") " pod="kube-system/kube-proxy-mpb8x"
	Sep 30 20:54:29 pause-617008 kubelet[3127]: I0930 20:54:29.385627    3127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1a0b9be-01bb-4b6d-ba51-f0982b68ef99-lib-modules\") pod \"kube-proxy-mpb8x\" (UID: \"e1a0b9be-01bb-4b6d-ba51-f0982b68ef99\") " pod="kube-system/kube-proxy-mpb8x"
	Sep 30 20:54:29 pause-617008 kubelet[3127]: I0930 20:54:29.614445    3127 scope.go:117] "RemoveContainer" containerID="6df05d8d5b78e13b99c7f3d97ae1601970fdb43ec53a5ce16a7849989275e530"
	Sep 30 20:54:29 pause-617008 kubelet[3127]: I0930 20:54:29.615298    3127 scope.go:117] "RemoveContainer" containerID="d296639a17992e0aebc034d97462720012f027289d5c5493e42321b684af7f96"
	Sep 30 20:54:33 pause-617008 kubelet[3127]: I0930 20:54:33.587781    3127 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 30 20:54:35 pause-617008 kubelet[3127]: E0930 20:54:35.405021    3127 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727729675403264152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:54:35 pause-617008 kubelet[3127]: E0930 20:54:35.405842    3127 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727729675403264152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:54:45 pause-617008 kubelet[3127]: E0930 20:54:45.407607    3127 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727729685407134606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:54:45 pause-617008 kubelet[3127]: E0930 20:54:45.407634    3127 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727729685407134606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 20:54:45.999779   58809 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19736-7672/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
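The `failed to output last start logs ... bufio.Scanner: token too long` line in the stderr block above is Go's bufio.Scanner giving up once a single line in lastStart.txt exceeds its default 64 KiB token limit (bufio.MaxScanTokenSize). A minimal sketch of reading such a file with a larger per-line cap, assuming a hypothetical readLongLines helper and an illustrative path rather than minikube's actual logs.go code:

package main

import (
	"bufio"
	"fmt"
	"os"
)

// readLongLines reads a file line by line, raising bufio.Scanner's per-line
// cap from the 64 KiB default to 16 MiB so very long lines do not trigger
// bufio.ErrTooLong ("token too long").
func readLongLines(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 64*1024), 16*1024*1024) // initial buffer, max token size

	var lines []string
	for sc.Scan() {
		lines = append(lines, sc.Text())
	}
	return lines, sc.Err()
}

func main() {
	// Path is illustrative only; the report's actual file lives under the
	// jenkins minikube-integration workspace shown in the stderr output.
	lines, err := readLongLines("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read failed:", err)
		os.Exit(1)
	}
	fmt.Printf("read %d lines\n", len(lines))
}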
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-617008 -n pause-617008
helpers_test.go:261: (dbg) Run:  kubectl --context pause-617008 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-617008 -n pause-617008
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-617008 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-617008 logs -n 25: (1.587430933s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-592556                | NoKubernetes-592556       | jenkins | v1.34.0 | 30 Sep 24 20:50 UTC | 30 Sep 24 20:50 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-456540             | running-upgrade-456540    | jenkins | v1.34.0 | 30 Sep 24 20:50 UTC | 30 Sep 24 20:51 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-241258             | stopped-upgrade-241258    | jenkins | v1.34.0 | 30 Sep 24 20:50 UTC | 30 Sep 24 20:50 UTC |
	| ssh     | -p NoKubernetes-592556 sudo           | NoKubernetes-592556       | jenkins | v1.34.0 | 30 Sep 24 20:50 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-592556                | NoKubernetes-592556       | jenkins | v1.34.0 | 30 Sep 24 20:50 UTC | 30 Sep 24 20:50 UTC |
	| start   | -p cert-options-280515                | cert-options-280515       | jenkins | v1.34.0 | 30 Sep 24 20:50 UTC | 30 Sep 24 20:51 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-592556                | NoKubernetes-592556       | jenkins | v1.34.0 | 30 Sep 24 20:50 UTC | 30 Sep 24 20:51 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-280515 ssh               | cert-options-280515       | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC | 30 Sep 24 20:51 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-280515 -- sudo        | cert-options-280515       | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC | 30 Sep 24 20:51 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-280515                | cert-options-280515       | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC | 30 Sep 24 20:51 UTC |
	| start   | -p force-systemd-flag-188130          | force-systemd-flag-188130 | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC | 30 Sep 24 20:52 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-592556 sudo           | NoKubernetes-592556       | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-592556                | NoKubernetes-592556       | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC | 30 Sep 24 20:51 UTC |
	| start   | -p cert-expiration-988243             | cert-expiration-988243    | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC | 30 Sep 24 20:52 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-456540             | running-upgrade-456540    | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC | 30 Sep 24 20:51 UTC |
	| start   | -p pause-617008 --memory=2048         | pause-617008              | jenkins | v1.34.0 | 30 Sep 24 20:51 UTC | 30 Sep 24 20:53 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-188130 ssh cat     | force-systemd-flag-188130 | jenkins | v1.34.0 | 30 Sep 24 20:52 UTC | 30 Sep 24 20:52 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-188130          | force-systemd-flag-188130 | jenkins | v1.34.0 | 30 Sep 24 20:52 UTC | 30 Sep 24 20:52 UTC |
	| start   | -p auto-207733 --memory=3072          | auto-207733               | jenkins | v1.34.0 | 30 Sep 24 20:52 UTC | 30 Sep 24 20:54 UTC |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-810093          | kubernetes-upgrade-810093 | jenkins | v1.34.0 | 30 Sep 24 20:53 UTC | 30 Sep 24 20:53 UTC |
	| start   | -p kubernetes-upgrade-810093          | kubernetes-upgrade-810093 | jenkins | v1.34.0 | 30 Sep 24 20:53 UTC | 30 Sep 24 20:54 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-617008                       | pause-617008              | jenkins | v1.34.0 | 30 Sep 24 20:53 UTC | 30 Sep 24 20:54 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-810093          | kubernetes-upgrade-810093 | jenkins | v1.34.0 | 30 Sep 24 20:54 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-810093          | kubernetes-upgrade-810093 | jenkins | v1.34.0 | 30 Sep 24 20:54 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p auto-207733 pgrep -a               | auto-207733               | jenkins | v1.34.0 | 30 Sep 24 20:54 UTC | 30 Sep 24 20:54 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 20:54:16
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 20:54:16.408370   58524 out.go:345] Setting OutFile to fd 1 ...
	I0930 20:54:16.408487   58524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:54:16.408498   58524 out.go:358] Setting ErrFile to fd 2...
	I0930 20:54:16.408504   58524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:54:16.408716   58524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 20:54:16.409244   58524 out.go:352] Setting JSON to false
	I0930 20:54:16.410242   58524 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5799,"bootTime":1727723857,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 20:54:16.410344   58524 start.go:139] virtualization: kvm guest
	I0930 20:54:16.412177   58524 out.go:177] * [kubernetes-upgrade-810093] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 20:54:16.413449   58524 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 20:54:16.413477   58524 notify.go:220] Checking for updates...
	I0930 20:54:16.415972   58524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 20:54:16.417436   58524 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:54:16.418680   58524 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:54:16.420063   58524 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 20:54:16.421252   58524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 20:54:16.422871   58524 config.go:182] Loaded profile config "kubernetes-upgrade-810093": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:54:16.423248   58524 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:54:16.423327   58524 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:54:16.444261   58524 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38439
	I0930 20:54:16.444804   58524 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:54:16.445431   58524 main.go:141] libmachine: Using API Version  1
	I0930 20:54:16.445454   58524 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:54:16.445813   58524 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:54:16.446019   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:54:16.446279   58524 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 20:54:16.446639   58524 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:54:16.446682   58524 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:54:16.462674   58524 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33119
	I0930 20:54:16.463068   58524 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:54:16.463585   58524 main.go:141] libmachine: Using API Version  1
	I0930 20:54:16.463626   58524 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:54:16.463949   58524 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:54:16.464112   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:54:16.501910   58524 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 20:54:16.503399   58524 start.go:297] selected driver: kvm2
	I0930 20:54:16.503411   58524 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-810093 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-810093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:54:16.503563   58524 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 20:54:16.504236   58524 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 20:54:16.504345   58524 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 20:54:16.519579   58524 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 20:54:16.520125   58524 cni.go:84] Creating CNI manager for ""
	I0930 20:54:16.520191   58524 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 20:54:16.520257   58524 start.go:340] cluster config:
	{Name:kubernetes-upgrade-810093 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-810093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:54:16.520392   58524 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 20:54:16.522479   58524 out.go:177] * Starting "kubernetes-upgrade-810093" primary control-plane node in "kubernetes-upgrade-810093" cluster
	I0930 20:54:16.523985   58524 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 20:54:16.524029   58524 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 20:54:16.524039   58524 cache.go:56] Caching tarball of preloaded images
	I0930 20:54:16.524152   58524 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 20:54:16.524170   58524 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 20:54:16.524273   58524 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/config.json ...
	I0930 20:54:16.524463   58524 start.go:360] acquireMachinesLock for kubernetes-upgrade-810093: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 20:54:16.524517   58524 start.go:364] duration metric: took 31.247µs to acquireMachinesLock for "kubernetes-upgrade-810093"
	I0930 20:54:16.524536   58524 start.go:96] Skipping create...Using existing machine configuration
	I0930 20:54:16.524545   58524 fix.go:54] fixHost starting: 
	I0930 20:54:16.524819   58524 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:54:16.524859   58524 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:54:16.540027   58524 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37757
	I0930 20:54:16.540555   58524 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:54:16.541130   58524 main.go:141] libmachine: Using API Version  1
	I0930 20:54:16.541162   58524 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:54:16.541517   58524 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:54:16.541732   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:54:16.541886   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetState
	I0930 20:54:16.543781   58524 fix.go:112] recreateIfNeeded on kubernetes-upgrade-810093: state=Running err=<nil>
	W0930 20:54:16.543812   58524 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 20:54:16.545732   58524 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-810093" VM ...
	I0930 20:54:17.238822   57353 pod_ready.go:103] pod "coredns-7c65d6cfc9-tczkl" in "kube-system" namespace has status "Ready":"False"
	I0930 20:54:19.239781   57353 pod_ready.go:103] pod "coredns-7c65d6cfc9-tczkl" in "kube-system" namespace has status "Ready":"False"
	I0930 20:54:16.547009   58524 machine.go:93] provisionDockerMachine start ...
	I0930 20:54:16.547031   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:54:16.547247   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:54:16.550051   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:16.550531   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:16.550567   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:16.550708   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:54:16.550877   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:16.551015   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:16.551132   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:54:16.551313   58524 main.go:141] libmachine: Using SSH client type: native
	I0930 20:54:16.551524   58524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0930 20:54:16.551548   58524 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 20:54:16.681158   58524 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-810093
	
	I0930 20:54:16.681183   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetMachineName
	I0930 20:54:16.681411   58524 buildroot.go:166] provisioning hostname "kubernetes-upgrade-810093"
	I0930 20:54:16.681423   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetMachineName
	I0930 20:54:16.681567   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:54:16.684105   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:16.684427   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:16.684461   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:16.684701   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:54:16.684859   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:16.685002   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:16.685165   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:54:16.685296   58524 main.go:141] libmachine: Using SSH client type: native
	I0930 20:54:16.685515   58524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0930 20:54:16.685536   58524 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-810093 && echo "kubernetes-upgrade-810093" | sudo tee /etc/hostname
	I0930 20:54:16.821760   58524 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-810093
	
	I0930 20:54:16.821790   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:54:16.824746   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:16.825183   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:16.825216   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:16.825365   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:54:16.825555   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:16.825744   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:16.825891   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:54:16.826035   58524 main.go:141] libmachine: Using SSH client type: native
	I0930 20:54:16.826246   58524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0930 20:54:16.826264   58524 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-810093' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-810093/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-810093' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 20:54:16.940302   58524 main.go:141] libmachine: SSH cmd err, output: <nil>: 
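	(The hostname step above boils down to pinning 127.0.1.1 to the machine name in /etc/hosts over SSH. A minimal Go sketch of an equivalent command builder; the hostsFixupCmd helper is hypothetical and only mirrors the shell snippet shown in the log, it is not minikube's provisioner code.)

	package main

	import "fmt"

	// hostsFixupCmd returns a shell snippet that ensures /etc/hosts maps
	// 127.0.1.1 to the given hostname, mirroring the command run above.
	// Hypothetical helper for illustration only.
	func hostsFixupCmd(hostname string) string {
		return fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname, hostname, hostname)
	}

	func main() {
		fmt.Println(hostsFixupCmd("kubernetes-upgrade-810093"))
	}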
	I0930 20:54:16.940343   58524 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 20:54:16.940381   58524 buildroot.go:174] setting up certificates
	I0930 20:54:16.940400   58524 provision.go:84] configureAuth start
	I0930 20:54:16.940420   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetMachineName
	I0930 20:54:16.940758   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetIP
	I0930 20:54:16.943681   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:16.944187   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:16.944217   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:16.944449   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:54:16.946752   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:16.947067   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:16.947097   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:16.947182   58524 provision.go:143] copyHostCerts
	I0930 20:54:16.947244   58524 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 20:54:16.947257   58524 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:54:16.947324   58524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 20:54:16.947444   58524 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 20:54:16.947455   58524 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:54:16.947486   58524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 20:54:16.947610   58524 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 20:54:16.947622   58524 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:54:16.947654   58524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 20:54:16.947744   58524 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-810093 san=[127.0.0.1 192.168.39.233 kubernetes-upgrade-810093 localhost minikube]
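	(provision.go:117 above generates a server certificate whose SANs cover the VM IP, loopback and the machine names. A rough Go sketch of producing a certificate with those SANs using the standard library; it self-signs for brevity, whereas minikube signs with its CA key, so treat it as an illustration only.)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Self-signed server certificate carrying the SANs reported above.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-810093"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.233")},
			DNSNames:     []string{"kubernetes-upgrade-810093", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}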
	I0930 20:54:17.101285   58524 provision.go:177] copyRemoteCerts
	I0930 20:54:17.101372   58524 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 20:54:17.101401   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:54:17.104029   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:17.104356   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:17.104378   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:17.104613   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:54:17.104790   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:17.104952   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:54:17.105064   58524 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/kubernetes-upgrade-810093/id_rsa Username:docker}
	I0930 20:54:17.190481   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 20:54:17.214407   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 20:54:17.240048   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0930 20:54:17.262681   58524 provision.go:87] duration metric: took 322.262354ms to configureAuth
	I0930 20:54:17.262709   58524 buildroot.go:189] setting minikube options for container-runtime
	I0930 20:54:17.262878   58524 config.go:182] Loaded profile config "kubernetes-upgrade-810093": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:54:17.262941   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:54:17.265672   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:17.266089   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:17.266117   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:17.266363   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:54:17.266578   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:17.266774   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:17.266934   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:54:17.267101   58524 main.go:141] libmachine: Using SSH client type: native
	I0930 20:54:17.267260   58524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0930 20:54:17.267275   58524 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 20:54:21.739351   57353 pod_ready.go:103] pod "coredns-7c65d6cfc9-tczkl" in "kube-system" namespace has status "Ready":"False"
	I0930 20:54:24.240665   57353 pod_ready.go:103] pod "coredns-7c65d6cfc9-tczkl" in "kube-system" namespace has status "Ready":"False"
	I0930 20:54:23.933599   58154 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 d296639a17992e0aebc034d97462720012f027289d5c5493e42321b684af7f96 6df05d8d5b78e13b99c7f3d97ae1601970fdb43ec53a5ce16a7849989275e530 6f035fcf4f27669dc97bd27ea15da5fb8a5062c9f148aed60dc3d0994ffbbe1f a84693bd8b3d59922102b13e8ce27aa22c99328714d1c62d8213e829134de075 9a3af029d6ae839e2a040b471d3e46d5839ef9283eadc5cb750c9b32e8f31bed 9a8345e346ba02d50960ac01b8a2e6a59224ad5086873c2d86abd1ce3fd488e0 3dd5b47d40232da9e1c3db3b0a185514e55a7900b90f79ccf82e82e7f14574ef 5969820fe0736a00a318e9b88b2319ea55d502b90436a825975a302c6173eabb 1c05a5808f1c5162b4edabbe09b054deaf4e4132d32fb5043eecce39de72d79d 41abecf47c6f06d0bac09a7486a88e10860a7927ffef9d17eb26914150612dff 5cd7542053b678d02c4e06433340f520eb5c20afae4506e70b38a99eec3440ca aff264368368eaa98382498c90eb66582bf30743b58a08cb766ca025cf1abe7e: (20.506248045s)
	W0930 20:54:23.933689   58154 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 d296639a17992e0aebc034d97462720012f027289d5c5493e42321b684af7f96 6df05d8d5b78e13b99c7f3d97ae1601970fdb43ec53a5ce16a7849989275e530 6f035fcf4f27669dc97bd27ea15da5fb8a5062c9f148aed60dc3d0994ffbbe1f a84693bd8b3d59922102b13e8ce27aa22c99328714d1c62d8213e829134de075 9a3af029d6ae839e2a040b471d3e46d5839ef9283eadc5cb750c9b32e8f31bed 9a8345e346ba02d50960ac01b8a2e6a59224ad5086873c2d86abd1ce3fd488e0 3dd5b47d40232da9e1c3db3b0a185514e55a7900b90f79ccf82e82e7f14574ef 5969820fe0736a00a318e9b88b2319ea55d502b90436a825975a302c6173eabb 1c05a5808f1c5162b4edabbe09b054deaf4e4132d32fb5043eecce39de72d79d 41abecf47c6f06d0bac09a7486a88e10860a7927ffef9d17eb26914150612dff 5cd7542053b678d02c4e06433340f520eb5c20afae4506e70b38a99eec3440ca aff264368368eaa98382498c90eb66582bf30743b58a08cb766ca025cf1abe7e: Process exited with status 1
	stdout:
	d296639a17992e0aebc034d97462720012f027289d5c5493e42321b684af7f96
	6df05d8d5b78e13b99c7f3d97ae1601970fdb43ec53a5ce16a7849989275e530
	6f035fcf4f27669dc97bd27ea15da5fb8a5062c9f148aed60dc3d0994ffbbe1f
	a84693bd8b3d59922102b13e8ce27aa22c99328714d1c62d8213e829134de075
	9a3af029d6ae839e2a040b471d3e46d5839ef9283eadc5cb750c9b32e8f31bed
	9a8345e346ba02d50960ac01b8a2e6a59224ad5086873c2d86abd1ce3fd488e0
	
	stderr:
	E0930 20:54:23.923036    2779 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dd5b47d40232da9e1c3db3b0a185514e55a7900b90f79ccf82e82e7f14574ef\": container with ID starting with 3dd5b47d40232da9e1c3db3b0a185514e55a7900b90f79ccf82e82e7f14574ef not found: ID does not exist" containerID="3dd5b47d40232da9e1c3db3b0a185514e55a7900b90f79ccf82e82e7f14574ef"
	time="2024-09-30T20:54:23Z" level=fatal msg="stopping the container \"3dd5b47d40232da9e1c3db3b0a185514e55a7900b90f79ccf82e82e7f14574ef\": rpc error: code = NotFound desc = could not find container \"3dd5b47d40232da9e1c3db3b0a185514e55a7900b90f79ccf82e82e7f14574ef\": container with ID starting with 3dd5b47d40232da9e1c3db3b0a185514e55a7900b90f79ccf82e82e7f14574ef not found: ID does not exist"
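	(The warning above is deliberately non-fatal: a container that disappeared between listing and stopping makes the batched crictl stop exit non-zero, but the restart continues. One way to make the stop tolerant, stopping one container at a time instead of in a single batch as the log shows, sketched in Go; the IDs in main are truncated examples.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// stopContainers skips containers that crictl reports as NotFound and
	// downgrades other stop failures to warnings. Illustrative sketch only.
	func stopContainers(ids []string) {
		for _, id := range ids {
			out, err := exec.Command("sudo", "/usr/bin/crictl", "stop", "--timeout=10", id).CombinedOutput()
			if err != nil {
				if strings.Contains(string(out), "NotFound") {
					fmt.Printf("container %s already gone, skipping\n", id)
					continue
				}
				fmt.Printf("warning: failed to stop %s: %v\n", id, err)
				continue
			}
			fmt.Printf("stopped %s\n", id)
		}
	}

	func main() {
		stopContainers([]string{"d296639a1799", "3dd5b47d4023"})
	}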
	I0930 20:54:23.933756   58154 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 20:54:23.975089   58154 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 20:54:23.985435   58154 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Sep 30 20:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Sep 30 20:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Sep 30 20:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Sep 30 20:53 /etc/kubernetes/scheduler.conf
	
	I0930 20:54:23.985505   58154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 20:54:23.994760   58154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 20:54:24.003280   58154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 20:54:24.012427   58154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0930 20:54:24.012496   58154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 20:54:24.021201   58154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 20:54:24.031196   58154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0930 20:54:24.031260   58154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 20:54:24.041279   58154 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 20:54:24.051274   58154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 20:54:24.107678   58154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 20:54:24.995144   58154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 20:54:25.207853   58154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 20:54:25.277812   58154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 20:54:25.360328   58154 api_server.go:52] waiting for apiserver process to appear ...
	I0930 20:54:25.360418   58154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 20:54:25.860671   58154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 20:54:26.361137   58154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 20:54:26.378127   58154 api_server.go:72] duration metric: took 1.017796378s to wait for apiserver process to appear ...
	I0930 20:54:26.378154   58154 api_server.go:88] waiting for apiserver healthz status ...
	I0930 20:54:26.378188   58154 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0930 20:54:28.491915   58154 api_server.go:279] https://192.168.61.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 20:54:28.491961   58154 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 20:54:28.491975   58154 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0930 20:54:28.502941   58154 api_server.go:279] https://192.168.61.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 20:54:28.502969   58154 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 20:54:28.878438   58154 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0930 20:54:28.884801   58154 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 20:54:28.884834   58154 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 20:54:29.378360   58154 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0930 20:54:29.385945   58154 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 20:54:29.385975   58154 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 20:54:29.878611   58154 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
	I0930 20:54:29.885680   58154 api_server.go:279] https://192.168.61.245:8443/healthz returned 200:
	ok
	I0930 20:54:29.894297   58154 api_server.go:141] control plane version: v1.31.1
	I0930 20:54:29.894332   58154 api_server.go:131] duration metric: took 3.516169713s to wait for apiserver health ...
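	(The healthz wait above is a plain polling loop: the first probe returns 403 because the unauthenticated client is "system:anonymous", the next ones return 500 while the rbac and priority-class post-start hooks finish, and the loop exits on the first 200 "ok". A minimal Go sketch of such a probe, assuming the same endpoint and a two-minute deadline; TLS verification is skipped because no client certificate is presented.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.61.245:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body)
					return
				}
				// 403/500 during apiserver bootstrap: retry until healthy.
				fmt.Printf("healthz not ready (%d), retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("healthz never became ready")
	}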
	I0930 20:54:29.894342   58154 cni.go:84] Creating CNI manager for ""
	I0930 20:54:29.894350   58154 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 20:54:29.896194   58154 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 20:54:27.106159   58524 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 20:54:27.106208   58524 machine.go:96] duration metric: took 10.559182632s to provisionDockerMachine
	I0930 20:54:27.106224   58524 start.go:293] postStartSetup for "kubernetes-upgrade-810093" (driver="kvm2")
	I0930 20:54:27.106239   58524 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 20:54:27.106275   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:54:27.106615   58524 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 20:54:27.106642   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:54:27.109339   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:27.109735   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:27.109772   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:27.109964   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:54:27.110146   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:27.110306   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:54:27.110432   58524 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/kubernetes-upgrade-810093/id_rsa Username:docker}
	I0930 20:54:27.196865   58524 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 20:54:27.201655   58524 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 20:54:27.201686   58524 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 20:54:27.201767   58524 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 20:54:27.201860   58524 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 20:54:27.201979   58524 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 20:54:27.211444   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:54:27.239645   58524 start.go:296] duration metric: took 133.405084ms for postStartSetup
	I0930 20:54:27.239737   58524 fix.go:56] duration metric: took 10.715191174s for fixHost
	I0930 20:54:27.239766   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:54:27.242929   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:27.243280   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:27.243323   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:27.243464   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:54:27.243677   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:27.243796   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:27.243889   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:54:27.244002   58524 main.go:141] libmachine: Using SSH client type: native
	I0930 20:54:27.244201   58524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0930 20:54:27.244217   58524 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 20:54:27.361170   58524 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727729667.352637287
	
	I0930 20:54:27.361196   58524 fix.go:216] guest clock: 1727729667.352637287
	I0930 20:54:27.361224   58524 fix.go:229] Guest: 2024-09-30 20:54:27.352637287 +0000 UTC Remote: 2024-09-30 20:54:27.239746738 +0000 UTC m=+10.868526598 (delta=112.890549ms)
	I0930 20:54:27.361248   58524 fix.go:200] guest clock delta is within tolerance: 112.890549ms
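	(fix.go above runs `date +%s.%N` on the guest and accepts the clock if the guest/host delta, 112.890549ms here, is within a tolerance. A small Go sketch of that comparison; the one-second tolerance in main is an assumption, the real threshold is not shown in this log.)

	package main

	import (
		"fmt"
		"time"
	)

	// withinClockTolerance reports whether guest and host clocks differ by no
	// more than the given tolerance. Hypothetical helper for illustration.
	func withinClockTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}

	func main() {
		guest := time.Unix(0, 1727729667352637287) // 2024-09-30 20:54:27.352637287 UTC, from the log
		host := guest.Add(-112890549 * time.Nanosecond)
		fmt.Println(withinClockTolerance(guest, host, time.Second)) // true
	}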
	I0930 20:54:27.361254   58524 start.go:83] releasing machines lock for "kubernetes-upgrade-810093", held for 10.83672515s
	I0930 20:54:27.361285   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:54:27.361556   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetIP
	I0930 20:54:27.364452   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:27.364875   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:27.364910   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:27.365019   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:54:27.365552   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:54:27.365739   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .DriverName
	I0930 20:54:27.365861   58524 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 20:54:27.365903   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:54:27.366013   58524 ssh_runner.go:195] Run: cat /version.json
	I0930 20:54:27.366049   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHHostname
	I0930 20:54:27.368953   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:27.369241   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:27.369418   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:27.369441   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:27.369627   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:54:27.369689   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:27.369723   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:27.369862   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:27.369971   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHPort
	I0930 20:54:27.370034   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:54:27.370102   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHKeyPath
	I0930 20:54:27.370255   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetSSHUsername
	I0930 20:54:27.370251   58524 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/kubernetes-upgrade-810093/id_rsa Username:docker}
	I0930 20:54:27.370409   58524 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/kubernetes-upgrade-810093/id_rsa Username:docker}
	I0930 20:54:27.452636   58524 ssh_runner.go:195] Run: systemctl --version
	I0930 20:54:27.492180   58524 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 20:54:27.650320   58524 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 20:54:27.655994   58524 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 20:54:27.656071   58524 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 20:54:27.665179   58524 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0930 20:54:27.665206   58524 start.go:495] detecting cgroup driver to use...
	I0930 20:54:27.665272   58524 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 20:54:27.681182   58524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 20:54:27.695234   58524 docker.go:217] disabling cri-docker service (if available) ...
	I0930 20:54:27.695285   58524 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 20:54:27.709000   58524 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 20:54:27.731623   58524 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 20:54:28.008715   58524 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 20:54:28.262521   58524 docker.go:233] disabling docker service ...
	I0930 20:54:28.262616   58524 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 20:54:28.308118   58524 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 20:54:28.353775   58524 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 20:54:28.646774   58524 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 20:54:28.932430   58524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 20:54:28.980353   58524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 20:54:29.021273   58524 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 20:54:29.021724   58524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:54:29.056522   58524 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 20:54:29.056599   58524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:54:29.075013   58524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:54:29.123612   58524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:54:29.142264   58524 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 20:54:29.178750   58524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:54:29.223682   58524 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:54:29.282890   58524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:54:29.325240   58524 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 20:54:29.345470   58524 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 20:54:29.360498   58524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:54:29.612610   58524 ssh_runner.go:195] Run: sudo systemctl restart crio
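	(The sed invocations above rewrite whole `pause_image = ...` and `cgroup_manager = ...` lines in /etc/crio/crio.conf.d/02-crio.conf before restarting CRI-O. A Go sketch of the same line-level rewrite with regexp; the sample input below is assumed, since the real drop-in contents are not shown in this log.)

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	[crio.runtime]
	cgroup_manager = "systemd"
	`
		// Replace the whole matching line, mirroring sed -i 's|^.*pause_image = .*$|...|'.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		fmt.Print(conf)
	}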
	I0930 20:54:30.333439   58524 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 20:54:30.333515   58524 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 20:54:30.338136   58524 start.go:563] Will wait 60s for crictl version
	I0930 20:54:30.338203   58524 ssh_runner.go:195] Run: which crictl
	I0930 20:54:30.341700   58524 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 20:54:30.377939   58524 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 20:54:30.378012   58524 ssh_runner.go:195] Run: crio --version
	I0930 20:54:30.406111   58524 ssh_runner.go:195] Run: crio --version
	I0930 20:54:30.438503   58524 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 20:54:26.739719   57353 pod_ready.go:103] pod "coredns-7c65d6cfc9-tczkl" in "kube-system" namespace has status "Ready":"False"
	I0930 20:54:28.739915   57353 pod_ready.go:103] pod "coredns-7c65d6cfc9-tczkl" in "kube-system" namespace has status "Ready":"False"
	I0930 20:54:30.241347   57353 pod_ready.go:93] pod "coredns-7c65d6cfc9-tczkl" in "kube-system" namespace has status "Ready":"True"
	I0930 20:54:30.241379   57353 pod_ready.go:82] duration metric: took 36.008617454s for pod "coredns-7c65d6cfc9-tczkl" in "kube-system" namespace to be "Ready" ...
	I0930 20:54:30.241391   57353 pod_ready.go:79] waiting up to 15m0s for pod "etcd-auto-207733" in "kube-system" namespace to be "Ready" ...
	I0930 20:54:30.247872   57353 pod_ready.go:93] pod "etcd-auto-207733" in "kube-system" namespace has status "Ready":"True"
	I0930 20:54:30.247947   57353 pod_ready.go:82] duration metric: took 6.5466ms for pod "etcd-auto-207733" in "kube-system" namespace to be "Ready" ...
	I0930 20:54:30.248041   57353 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-auto-207733" in "kube-system" namespace to be "Ready" ...
	I0930 20:54:30.256663   57353 pod_ready.go:93] pod "kube-apiserver-auto-207733" in "kube-system" namespace has status "Ready":"True"
	I0930 20:54:30.256689   57353 pod_ready.go:82] duration metric: took 8.617328ms for pod "kube-apiserver-auto-207733" in "kube-system" namespace to be "Ready" ...
	I0930 20:54:30.256700   57353 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-auto-207733" in "kube-system" namespace to be "Ready" ...
	I0930 20:54:30.261034   57353 pod_ready.go:93] pod "kube-controller-manager-auto-207733" in "kube-system" namespace has status "Ready":"True"
	I0930 20:54:30.261055   57353 pod_ready.go:82] duration metric: took 4.348056ms for pod "kube-controller-manager-auto-207733" in "kube-system" namespace to be "Ready" ...
	I0930 20:54:30.261064   57353 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-z2mt2" in "kube-system" namespace to be "Ready" ...
	I0930 20:54:30.265186   57353 pod_ready.go:93] pod "kube-proxy-z2mt2" in "kube-system" namespace has status "Ready":"True"
	I0930 20:54:30.265214   57353 pod_ready.go:82] duration metric: took 4.136042ms for pod "kube-proxy-z2mt2" in "kube-system" namespace to be "Ready" ...
	I0930 20:54:30.265223   57353 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-auto-207733" in "kube-system" namespace to be "Ready" ...
	I0930 20:54:30.637769   57353 pod_ready.go:93] pod "kube-scheduler-auto-207733" in "kube-system" namespace has status "Ready":"True"
	I0930 20:54:30.637797   57353 pod_ready.go:82] duration metric: took 372.567298ms for pod "kube-scheduler-auto-207733" in "kube-system" namespace to be "Ready" ...
	I0930 20:54:30.637807   57353 pod_ready.go:39] duration metric: took 38.438907918s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
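	(The pod_ready.go waits above amount to polling each pod's Ready condition until it reports True. A rough client-go sketch of that check; the import paths and calls are standard client-go, but this is not minikube's helper, and the kubeconfig path in main is an assumption.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the named pod has condition Ready=True,
	// mirroring the "Ready":"True"/"False" lines logged above.
	func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		for {
			ready, err := isPodReady(ctx, cs, "kube-system", "coredns-7c65d6cfc9-tczkl")
			if err == nil && ready {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}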
	I0930 20:54:30.637827   57353 api_server.go:52] waiting for apiserver process to appear ...
	I0930 20:54:30.637888   57353 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 20:54:30.654943   57353 api_server.go:72] duration metric: took 39.225945577s to wait for apiserver process to appear ...
	I0930 20:54:30.654976   57353 api_server.go:88] waiting for apiserver healthz status ...
	I0930 20:54:30.654999   57353 api_server.go:253] Checking apiserver healthz at https://192.168.72.4:8443/healthz ...
	I0930 20:54:30.660518   57353 api_server.go:279] https://192.168.72.4:8443/healthz returned 200:
	ok
	I0930 20:54:30.661642   57353 api_server.go:141] control plane version: v1.31.1
	I0930 20:54:30.661671   57353 api_server.go:131] duration metric: took 6.686785ms to wait for apiserver health ...
	I0930 20:54:30.661682   57353 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 20:54:30.840665   57353 system_pods.go:59] 7 kube-system pods found
	I0930 20:54:30.840691   57353 system_pods.go:61] "coredns-7c65d6cfc9-tczkl" [52468f44-63ba-4980-9cdf-baf67bab43e3] Running
	I0930 20:54:30.840696   57353 system_pods.go:61] "etcd-auto-207733" [c79ec68b-cf11-492d-9ee9-6ec3502e45a1] Running
	I0930 20:54:30.840700   57353 system_pods.go:61] "kube-apiserver-auto-207733" [9dad9b65-d298-4402-b782-84e66da90018] Running
	I0930 20:54:30.840703   57353 system_pods.go:61] "kube-controller-manager-auto-207733" [c4784ae3-59de-4fe4-b205-63482fb4197f] Running
	I0930 20:54:30.840706   57353 system_pods.go:61] "kube-proxy-z2mt2" [f997ca6c-9e4d-4a43-b3a4-ae1f3537dba8] Running
	I0930 20:54:30.840709   57353 system_pods.go:61] "kube-scheduler-auto-207733" [cc5e6cb5-feb1-4872-bab3-ed7a6479acdc] Running
	I0930 20:54:30.840712   57353 system_pods.go:61] "storage-provisioner" [25455db4-3b50-4f09-8636-d6cdc7d5fad6] Running
	I0930 20:54:30.840718   57353 system_pods.go:74] duration metric: took 179.029902ms to wait for pod list to return data ...
	I0930 20:54:30.840726   57353 default_sa.go:34] waiting for default service account to be created ...
	I0930 20:54:31.037209   57353 default_sa.go:45] found service account: "default"
	I0930 20:54:31.037234   57353 default_sa.go:55] duration metric: took 196.501819ms for default service account to be created ...
	I0930 20:54:31.037244   57353 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 20:54:31.241470   57353 system_pods.go:86] 7 kube-system pods found
	I0930 20:54:31.241510   57353 system_pods.go:89] "coredns-7c65d6cfc9-tczkl" [52468f44-63ba-4980-9cdf-baf67bab43e3] Running
	I0930 20:54:31.241521   57353 system_pods.go:89] "etcd-auto-207733" [c79ec68b-cf11-492d-9ee9-6ec3502e45a1] Running
	I0930 20:54:31.241529   57353 system_pods.go:89] "kube-apiserver-auto-207733" [9dad9b65-d298-4402-b782-84e66da90018] Running
	I0930 20:54:31.241536   57353 system_pods.go:89] "kube-controller-manager-auto-207733" [c4784ae3-59de-4fe4-b205-63482fb4197f] Running
	I0930 20:54:31.241542   57353 system_pods.go:89] "kube-proxy-z2mt2" [f997ca6c-9e4d-4a43-b3a4-ae1f3537dba8] Running
	I0930 20:54:31.241548   57353 system_pods.go:89] "kube-scheduler-auto-207733" [cc5e6cb5-feb1-4872-bab3-ed7a6479acdc] Running
	I0930 20:54:31.241555   57353 system_pods.go:89] "storage-provisioner" [25455db4-3b50-4f09-8636-d6cdc7d5fad6] Running
	I0930 20:54:31.241564   57353 system_pods.go:126] duration metric: took 204.31414ms to wait for k8s-apps to be running ...
	I0930 20:54:31.241575   57353 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 20:54:31.241641   57353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 20:54:31.262030   57353 system_svc.go:56] duration metric: took 20.447247ms WaitForService to wait for kubelet
	I0930 20:54:31.262064   57353 kubeadm.go:582] duration metric: took 39.833072087s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 20:54:31.262084   57353 node_conditions.go:102] verifying NodePressure condition ...
	I0930 20:54:30.439747   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) Calling .GetIP
	I0930 20:54:30.442412   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:30.442798   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:41:fe", ip: ""} in network mk-kubernetes-upgrade-810093: {Iface:virbr1 ExpiryTime:2024-09-30 21:53:42 +0000 UTC Type:0 Mac:52:54:00:dc:41:fe Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:kubernetes-upgrade-810093 Clientid:01:52:54:00:dc:41:fe}
	I0930 20:54:30.442821   58524 main.go:141] libmachine: (kubernetes-upgrade-810093) DBG | domain kubernetes-upgrade-810093 has defined IP address 192.168.39.233 and MAC address 52:54:00:dc:41:fe in network mk-kubernetes-upgrade-810093
	I0930 20:54:30.443031   58524 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 20:54:30.447209   58524 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-810093 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-810093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 20:54:30.447323   58524 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 20:54:30.447390   58524 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 20:54:30.490340   58524 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 20:54:30.490370   58524 crio.go:433] Images already preloaded, skipping extraction
	I0930 20:54:30.490430   58524 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 20:54:30.532235   58524 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 20:54:30.532294   58524 cache_images.go:84] Images are preloaded, skipping loading
	I0930 20:54:30.532303   58524 kubeadm.go:934] updating node { 192.168.39.233 8443 v1.31.1 crio true true} ...
	I0930 20:54:30.532424   58524 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-810093 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-810093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 20:54:30.532496   58524 ssh_runner.go:195] Run: crio config
	I0930 20:54:30.581350   58524 cni.go:84] Creating CNI manager for ""
	I0930 20:54:30.581378   58524 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 20:54:30.581389   58524 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 20:54:30.581419   58524 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.233 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-810093 NodeName:kubernetes-upgrade-810093 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 20:54:30.581547   58524 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.233
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-810093"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.233
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
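	(Editor's note) The kubeadm config rendered above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch for sanity-checking it follows; this is not minikube's validation, it assumes the file path shown in the log and the gopkg.in/yaml.v3 dependency.

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		// Walk every YAML document in the file and print the kubelet's
		// runtime-related settings so they can be compared with the CRI-O setup above.
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
				break
			} else if err != nil {
				panic(err)
			}
			if doc["kind"] == "KubeletConfiguration" {
				fmt.Println("cgroupDriver:", doc["cgroupDriver"])
				fmt.Println("containerRuntimeEndpoint:", doc["containerRuntimeEndpoint"])
			}
		}
	}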
	
	I0930 20:54:30.581602   58524 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 20:54:30.591728   58524 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 20:54:30.591798   58524 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 20:54:30.600603   58524 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0930 20:54:30.617718   58524 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 20:54:30.640781   58524 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0930 20:54:30.658851   58524 ssh_runner.go:195] Run: grep 192.168.39.233	control-plane.minikube.internal$ /etc/hosts
	I0930 20:54:30.662848   58524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:54:30.819972   58524 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:54:30.837304   58524 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093 for IP: 192.168.39.233
	I0930 20:54:30.837331   58524 certs.go:194] generating shared ca certs ...
	I0930 20:54:30.837351   58524 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:54:30.837516   58524 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 20:54:30.837561   58524 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 20:54:30.837574   58524 certs.go:256] generating profile certs ...
	I0930 20:54:30.837671   58524 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/client.key
	I0930 20:54:30.837740   58524 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/apiserver.key.372be7b4
	I0930 20:54:30.837788   58524 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/proxy-client.key
	I0930 20:54:30.837957   58524 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 20:54:30.837994   58524 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 20:54:30.838002   58524 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 20:54:30.838035   58524 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 20:54:30.838083   58524 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 20:54:30.838118   58524 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 20:54:30.838178   58524 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:54:30.838969   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 20:54:30.865772   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 20:54:30.890599   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 20:54:30.915638   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 20:54:30.944893   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0930 20:54:30.972527   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 20:54:31.004423   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 20:54:31.102054   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kubernetes-upgrade-810093/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 20:54:31.157884   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 20:54:31.297005   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 20:54:31.384812   58524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 20:54:31.438441   57353 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:54:31.438476   57353 node_conditions.go:123] node cpu capacity is 2
	I0930 20:54:31.438489   57353 node_conditions.go:105] duration metric: took 176.400025ms to run NodePressure ...
	I0930 20:54:31.438503   57353 start.go:241] waiting for startup goroutines ...
	I0930 20:54:31.438512   57353 start.go:246] waiting for cluster config update ...
	I0930 20:54:31.438526   57353 start.go:255] writing updated cluster config ...
	I0930 20:54:31.438882   57353 ssh_runner.go:195] Run: rm -f paused
	I0930 20:54:31.499345   57353 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 20:54:31.501096   57353 out.go:177] * Done! kubectl is now configured to use "auto-207733" cluster and "default" namespace by default
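	(Editor's note) The "minor skew: 0" message above compares only the minor components of the kubectl and cluster versions. A simplified sketch of that arithmetic, assuming plain "MAJOR.MINOR.PATCH" version strings:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorSkew returns the absolute difference between the minor version numbers.
	func minorSkew(kubectlVer, clusterVer string) (int, error) {
		minor := func(v string) (int, error) {
			parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
			if len(parts) < 2 {
				return 0, fmt.Errorf("unexpected version %q", v)
			}
			return strconv.Atoi(parts[1])
		}
		k, err := minor(kubectlVer)
		if err != nil {
			return 0, err
		}
		c, err := minor(clusterVer)
		if err != nil {
			return 0, err
		}
		skew := k - c
		if skew < 0 {
			skew = -skew
		}
		return skew, nil
	}

	func main() {
		s, _ := minorSkew("1.31.1", "1.31.1")
		fmt.Println("minor skew:", s) // prints 0, matching the log line above
	}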
	I0930 20:54:29.898157   58154 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 20:54:29.910108   58154 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 20:54:29.941701   58154 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 20:54:29.941781   58154 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0930 20:54:29.941802   58154 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0930 20:54:29.960562   58154 system_pods.go:59] 6 kube-system pods found
	I0930 20:54:29.960603   58154 system_pods.go:61] "coredns-7c65d6cfc9-7jtvv" [217905ec-7c19-4f7f-93a2-a2d868627822] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 20:54:29.960614   58154 system_pods.go:61] "etcd-pause-617008" [c80a4218-bec8-45d2-be81-dccaefc8667e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0930 20:54:29.960627   58154 system_pods.go:61] "kube-apiserver-pause-617008" [d3d6691c-dfc5-499a-91ea-0a032963226b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0930 20:54:29.960636   58154 system_pods.go:61] "kube-controller-manager-pause-617008" [ea889f38-5e38-43b2-bb57-80c8021aa6b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0930 20:54:29.960642   58154 system_pods.go:61] "kube-proxy-mpb8x" [e1a0b9be-01bb-4b6d-ba51-f0982b68ef99] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0930 20:54:29.960650   58154 system_pods.go:61] "kube-scheduler-pause-617008" [c663602c-9191-4c37-8426-c1d0f489f981] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0930 20:54:29.960657   58154 system_pods.go:74] duration metric: took 18.934046ms to wait for pod list to return data ...
	I0930 20:54:29.960667   58154 node_conditions.go:102] verifying NodePressure condition ...
	I0930 20:54:29.966206   58154 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 20:54:29.966260   58154 node_conditions.go:123] node cpu capacity is 2
	I0930 20:54:29.966275   58154 node_conditions.go:105] duration metric: took 5.603521ms to run NodePressure ...
	I0930 20:54:29.966297   58154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 20:54:30.234915   58154 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0930 20:54:30.240125   58154 kubeadm.go:739] kubelet initialised
	I0930 20:54:30.240159   58154 kubeadm.go:740] duration metric: took 5.216089ms waiting for restarted kubelet to initialise ...
	I0930 20:54:30.240170   58154 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 20:54:30.245491   58154 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-7jtvv" in "kube-system" namespace to be "Ready" ...
	I0930 20:54:32.254513   58154 pod_ready.go:103] pod "coredns-7c65d6cfc9-7jtvv" in "kube-system" namespace has status "Ready":"False"
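	(Editor's note) The pod_ready loop above polls the PodReady condition of the system-critical pods. A hedged client-go sketch of the same kind of check is below; it is not minikube's implementation and assumes a kubeconfig at $HOME/.kube/config.

	package main

	import (
		"context"
		"fmt"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// List kube-system pods and report whether each one has PodReady=True,
		// the condition that the log above is still waiting on for coredns.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("%s Ready=%v\n", p.Name, ready)
		}
	}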
	I0930 20:54:31.533047   58524 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 20:54:31.597876   58524 ssh_runner.go:195] Run: openssl version
	I0930 20:54:31.636045   58524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 20:54:31.661698   58524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 20:54:31.669043   58524 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 20:54:31.669103   58524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 20:54:31.678558   58524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 20:54:31.695384   58524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 20:54:31.714372   58524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:54:31.726942   58524 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:54:31.727005   58524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:54:31.741475   58524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 20:54:31.757991   58524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 20:54:31.797856   58524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 20:54:31.810783   58524 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 20:54:31.810848   58524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 20:54:31.824337   58524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
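	(Editor's note) The sequence above copies each CA certificate into /usr/share/ca-certificates and then links it as /etc/ssl/certs/<openssl subject hash>.0, which is how TLS libraries locate trusted CAs. A minimal sketch of the same convention, using only the openssl invocation already shown in the log (run as root; the cert path is illustrative):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installHashedLink creates /etc/ssl/certs/<hash>.0 pointing at certPath,
	// where <hash> comes from `openssl x509 -hash -noout -in certPath`.
	func installHashedLink(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// rough equivalent of `ln -fs`: drop any existing link, then recreate it
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installHashedLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}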
	I0930 20:54:31.844720   58524 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 20:54:31.851869   58524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 20:54:31.860036   58524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 20:54:31.869642   58524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 20:54:31.883657   58524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 20:54:31.904877   58524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 20:54:31.917114   58524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
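	(Editor's note) Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate expires within the next 24 hours. A minimal crypto/x509 equivalent, shown as a sketch with an illustrative cert path:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// mirroring openssl's -checkend semantics.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println("expires within 24h:", soon, "err:", err)
	}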
	I0930 20:54:31.930684   58524 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-810093 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.1 ClusterName:kubernetes-upgrade-810093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:54:31.930802   58524 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 20:54:31.930864   58524 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 20:54:32.021788   58524 cri.go:89] found id: "c50a6855e5a06b82abda38e7eee5fddd157ea694fed5ff2e3ae6a698d2e775aa"
	I0930 20:54:32.021813   58524 cri.go:89] found id: "001a67edfd9f5a38b0b3e7f99a5d666ab33b2bdeaa3c2dea2d16622669526006"
	I0930 20:54:32.021819   58524 cri.go:89] found id: "5fbaf5340219dd96a03e65f0db4a8f2425f3922473b49a407d6261c8b3081d6c"
	I0930 20:54:32.021825   58524 cri.go:89] found id: "67efcceedcc1938ff6f66b0ea6cc17e34bda9faafb53db30f477c06b6e67ce7e"
	I0930 20:54:32.021829   58524 cri.go:89] found id: "8a55eb55be7941855076f7edfd053f259d99b7260e868cee65a9ba01a4c35171"
	I0930 20:54:32.021833   58524 cri.go:89] found id: "7a5f2f3caa6596bdeec851c98c42549151d6a8fa20b96b21a5a850b7c0e5424c"
	I0930 20:54:32.021836   58524 cri.go:89] found id: "7b10de2fc402b0e70a6a180c0350151ea8f75757a4a692274cbf1cc58d95e9b9"
	I0930 20:54:32.021840   58524 cri.go:89] found id: "ebecc3e71f4007d0365b4c87543b72b9d2c5f5ba7d29835a0ce6761e88986df4"
	I0930 20:54:32.021845   58524 cri.go:89] found id: "10d74335ce942a9f9482799a8e322e5314127630a6cac376ae8923cf08c520db"
	I0930 20:54:32.021853   58524 cri.go:89] found id: ""
	I0930 20:54:32.021897   58524 ssh_runner.go:195] Run: sudo runc list -f json
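	(Editor's note) The container IDs listed above come from the `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` call shown in the log, which prints one ID per line. A small sketch collecting those IDs with the same flags:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// kubeSystemContainerIDs runs the same crictl query as the log above and
	// returns the non-empty container IDs, one per output line.
	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if id := strings.TrimSpace(line); id != "" {
				ids = append(ids, id)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := kubeSystemContainerIDs()
		fmt.Println("found", len(ids), "kube-system containers, err:", err)
	}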
	
	
	==> CRI-O <==
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.553665952Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:097125b65354f6bea232c4bc2eaa655ae9620557568c3ceca6dcd951b17896ea,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-7jtvv,Uid:217905ec-7c19-4f7f-93a2-a2d868627822,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727729641916484214,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-7jtvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217905ec-7c19-4f7f-93a2-a2d868627822,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T20:53:24.536458652Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:830f6c48dccb2bafa9a0da3f81fddcd59a40b6c83d432776800f1f6ad0f38394,Metadata:&PodSandboxMetadata{Name:kube-proxy-mpb8x,Uid:e1a0b9be-01bb-4b6d-ba51-f0982b68ef99,Namespace:kube-system,Attempt
:1,},State:SANDBOX_READY,CreatedAt:1727729641724694183,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mpb8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1a0b9be-01bb-4b6d-ba51-f0982b68ef99,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T20:53:24.346772401Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6cf1ec062fe1568493be42df5db163eafe6645195f6f4ef353452b76e0858bfd,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-617008,Uid:1d4f47859e5a7d3a1cd398085690521f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727729641688753553,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d4f47859e5a7d3a1cd398085690521f,tier: control-pla
ne,},Annotations:map[string]string{kubernetes.io/config.hash: 1d4f47859e5a7d3a1cd398085690521f,kubernetes.io/config.seen: 2024-09-30T20:53:19.377027421Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f47dc8cbc8ce8db9fd879d3b6b5038c43fcde9bc7377d1c8fd4a73b438ea7797,Metadata:&PodSandboxMetadata{Name:etcd-pause-617008,Uid:5594b5e708f2affed72a7af2f99cca8e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727729641663797949,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5594b5e708f2affed72a7af2f99cca8e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.245:2379,kubernetes.io/config.hash: 5594b5e708f2affed72a7af2f99cca8e,kubernetes.io/config.seen: 2024-09-30T20:53:19.377022611Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1a629caf531f7655e19484faadbda271a
da22f8a4668adf899906322dbbe77ba,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-617008,Uid:0a10faa854dc67807f3807fae1a77827,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727729641656462559,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a10faa854dc67807f3807fae1a77827,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.245:8443,kubernetes.io/config.hash: 0a10faa854dc67807f3807fae1a77827,kubernetes.io/config.seen: 2024-09-30T20:53:19.377026269Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d045b6935a02b14ac6db9d19d29e69e901c3718237863737bb3419279a95a8ff,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-617008,Uid:d0a77a92f96646fe47038b8dae296ffa,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727729641614862902,Lab
els:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0a77a92f96646fe47038b8dae296ffa,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d0a77a92f96646fe47038b8dae296ffa,kubernetes.io/config.seen: 2024-09-30T20:53:19.377028240Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=138f62c3-f1c5-4a1e-9ae8-06159c5ba562 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.554718516Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ba2b97d-1c78-462b-9219-45f955e18d85 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.554797296Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ba2b97d-1c78-462b-9219-45f955e18d85 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.555402220Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7118ce479d2635a4e7550c31a9d22c2f863135d554de6a46c77aa7b8b1237af6,PodSandboxId:830f6c48dccb2bafa9a0da3f81fddcd59a40b6c83d432776800f1f6ad0f38394,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727729669638597851,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpb8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1a0b9be-01bb-4b6d-ba51-f0982b68ef99,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5a38406394d57b781d00f1c6949dda32f0bc7ab2d9c8022aa6525eeab5699fd,PodSandboxId:097125b65354f6bea232c4bc2eaa655ae9620557568c3ceca6dcd951b17896ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727729669643200532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7jtvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217905ec-7c19-4f7f-93a2-a2d868627822,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef070e92a1237334653cb1f54677b6077ac4c0716c930e4acbc64d66c2e718e3,PodSandboxId:6cf1ec062fe1568493be42df5db163eafe6645195f6f4ef353452b76e0858bfd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727729665820243636,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 1d4f47859e5a7d3a1cd398085690521f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5070854549a909ce0263bb405f59a9c881a9590d7303e44d3bcb7695008b3aa2,PodSandboxId:1a629caf531f7655e19484faadbda271ada22f8a4668adf899906322dbbe77ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727729665819105767,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
a10faa854dc67807f3807fae1a77827,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c406f1b55b82a495fbc3c253efe04199bd5a617299f3bd86ece750a5574c725,PodSandboxId:f47dc8cbc8ce8db9fd879d3b6b5038c43fcde9bc7377d1c8fd4a73b438ea7797,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727729665807579665,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5594b5e708f2affed72a7af2f99cca8e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84c76c04fe41849783f07c21abf853b99edbd2ea733aae494635d5daf54a2ac8,PodSandboxId:d045b6935a02b14ac6db9d19d29e69e901c3718237863737bb3419279a95a8ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727729665797040371,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0a77a92f96646fe47038b8dae296ffa,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6df05d8d5b78e13b99c7f3d97ae1601970fdb43ec53a5ce16a7849989275e530,PodSandboxId:830f6c48dccb2bafa9a0da3f81fddcd59a40b6c83d432776800f1f6ad0f38394,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727729642140241359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpb8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1a0b9be-01bb-4b6d-ba51-f0982b68ef99,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc
59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d296639a17992e0aebc034d97462720012f027289d5c5493e42321b684af7f96,PodSandboxId:097125b65354f6bea232c4bc2eaa655ae9620557568c3ceca6dcd951b17896ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727729642778719998,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7jtvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217905ec-7c19-4f7f-93a2-a2d868627822,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f035fcf4f27669dc97bd27ea15da5fb8a5062c9f148aed60dc3d0994ffbbe1f,PodSandboxId:d045b6935a02b14ac6db9d19d29e69e901c3718237863737bb3419279a95a8ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727729642138848066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0a77a92f96646fe47038b8dae296ffa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a84693bd8b3d59922102b13e8ce27aa22c99328714d1c62d8213e829134de075,PodSandboxId:f47dc8cbc8ce8db9fd879d3b6b5038c43fcde9bc7377d1c8fd4a73b438ea7797,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727729642117186921,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617008,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 5594b5e708f2affed72a7af2f99cca8e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a3af029d6ae839e2a040b471d3e46d5839ef9283eadc5cb750c9b32e8f31bed,PodSandboxId:6cf1ec062fe1568493be42df5db163eafe6645195f6f4ef353452b76e0858bfd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727729642093702904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617008,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 1d4f47859e5a7d3a1cd398085690521f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8345e346ba02d50960ac01b8a2e6a59224ad5086873c2d86abd1ce3fd488e0,PodSandboxId:1a629caf531f7655e19484faadbda271ada22f8a4668adf899906322dbbe77ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727729642044669077,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 0a10faa854dc67807f3807fae1a77827,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ba2b97d-1c78-462b-9219-45f955e18d85 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.565379744Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=14d33891-c423-4cf5-9375-8373b31ed5fc name=/runtime.v1.RuntimeService/Version
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.565482230Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=14d33891-c423-4cf5-9375-8373b31ed5fc name=/runtime.v1.RuntimeService/Version
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.567202730Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80c9417a-2075-4a24-ae46-50b1c290cc42 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.567719364Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727729688567685492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80c9417a-2075-4a24-ae46-50b1c290cc42 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.568292080Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce175ddd-fe38-4373-8e29-a976f724cd7f name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.568364953Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce175ddd-fe38-4373-8e29-a976f724cd7f name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.568731408Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7118ce479d2635a4e7550c31a9d22c2f863135d554de6a46c77aa7b8b1237af6,PodSandboxId:830f6c48dccb2bafa9a0da3f81fddcd59a40b6c83d432776800f1f6ad0f38394,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727729669638597851,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpb8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1a0b9be-01bb-4b6d-ba51-f0982b68ef99,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5a38406394d57b781d00f1c6949dda32f0bc7ab2d9c8022aa6525eeab5699fd,PodSandboxId:097125b65354f6bea232c4bc2eaa655ae9620557568c3ceca6dcd951b17896ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727729669643200532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7jtvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217905ec-7c19-4f7f-93a2-a2d868627822,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef070e92a1237334653cb1f54677b6077ac4c0716c930e4acbc64d66c2e718e3,PodSandboxId:6cf1ec062fe1568493be42df5db163eafe6645195f6f4ef353452b76e0858bfd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727729665820243636,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 1d4f47859e5a7d3a1cd398085690521f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5070854549a909ce0263bb405f59a9c881a9590d7303e44d3bcb7695008b3aa2,PodSandboxId:1a629caf531f7655e19484faadbda271ada22f8a4668adf899906322dbbe77ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727729665819105767,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
a10faa854dc67807f3807fae1a77827,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c406f1b55b82a495fbc3c253efe04199bd5a617299f3bd86ece750a5574c725,PodSandboxId:f47dc8cbc8ce8db9fd879d3b6b5038c43fcde9bc7377d1c8fd4a73b438ea7797,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727729665807579665,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5594b5e708f2affed72a7af2f99cca8e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84c76c04fe41849783f07c21abf853b99edbd2ea733aae494635d5daf54a2ac8,PodSandboxId:d045b6935a02b14ac6db9d19d29e69e901c3718237863737bb3419279a95a8ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727729665797040371,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0a77a92f96646fe47038b8dae296ffa,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6df05d8d5b78e13b99c7f3d97ae1601970fdb43ec53a5ce16a7849989275e530,PodSandboxId:830f6c48dccb2bafa9a0da3f81fddcd59a40b6c83d432776800f1f6ad0f38394,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727729642140241359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpb8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1a0b9be-01bb-4b6d-ba51-f0982b68ef99,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc
59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d296639a17992e0aebc034d97462720012f027289d5c5493e42321b684af7f96,PodSandboxId:097125b65354f6bea232c4bc2eaa655ae9620557568c3ceca6dcd951b17896ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727729642778719998,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7jtvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217905ec-7c19-4f7f-93a2-a2d868627822,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f035fcf4f27669dc97bd27ea15da5fb8a5062c9f148aed60dc3d0994ffbbe1f,PodSandboxId:d045b6935a02b14ac6db9d19d29e69e901c3718237863737bb3419279a95a8ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727729642138848066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0a77a92f96646fe47038b8dae296ffa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a84693bd8b3d59922102b13e8ce27aa22c99328714d1c62d8213e829134de075,PodSandboxId:f47dc8cbc8ce8db9fd879d3b6b5038c43fcde9bc7377d1c8fd4a73b438ea7797,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727729642117186921,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617008,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 5594b5e708f2affed72a7af2f99cca8e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a3af029d6ae839e2a040b471d3e46d5839ef9283eadc5cb750c9b32e8f31bed,PodSandboxId:6cf1ec062fe1568493be42df5db163eafe6645195f6f4ef353452b76e0858bfd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727729642093702904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617008,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 1d4f47859e5a7d3a1cd398085690521f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8345e346ba02d50960ac01b8a2e6a59224ad5086873c2d86abd1ce3fd488e0,PodSandboxId:1a629caf531f7655e19484faadbda271ada22f8a4668adf899906322dbbe77ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727729642044669077,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 0a10faa854dc67807f3807fae1a77827,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ce175ddd-fe38-4373-8e29-a976f724cd7f name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.615523364Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5b34b440-87b6-4f79-bf60-5e509ba26d85 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.615656344Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5b34b440-87b6-4f79-bf60-5e509ba26d85 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.617160200Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=517f15cf-aea2-4147-897a-177da0fd70d8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.618176730Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727729688618141843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=517f15cf-aea2-4147-897a-177da0fd70d8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.619091723Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ba65d46-b106-47b2-94d2-85eb462e63eb name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.619188806Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ba65d46-b106-47b2-94d2-85eb462e63eb name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.619724576Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7118ce479d2635a4e7550c31a9d22c2f863135d554de6a46c77aa7b8b1237af6,PodSandboxId:830f6c48dccb2bafa9a0da3f81fddcd59a40b6c83d432776800f1f6ad0f38394,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727729669638597851,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpb8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1a0b9be-01bb-4b6d-ba51-f0982b68ef99,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5a38406394d57b781d00f1c6949dda32f0bc7ab2d9c8022aa6525eeab5699fd,PodSandboxId:097125b65354f6bea232c4bc2eaa655ae9620557568c3ceca6dcd951b17896ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727729669643200532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7jtvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217905ec-7c19-4f7f-93a2-a2d868627822,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef070e92a1237334653cb1f54677b6077ac4c0716c930e4acbc64d66c2e718e3,PodSandboxId:6cf1ec062fe1568493be42df5db163eafe6645195f6f4ef353452b76e0858bfd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727729665820243636,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 1d4f47859e5a7d3a1cd398085690521f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5070854549a909ce0263bb405f59a9c881a9590d7303e44d3bcb7695008b3aa2,PodSandboxId:1a629caf531f7655e19484faadbda271ada22f8a4668adf899906322dbbe77ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727729665819105767,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
a10faa854dc67807f3807fae1a77827,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c406f1b55b82a495fbc3c253efe04199bd5a617299f3bd86ece750a5574c725,PodSandboxId:f47dc8cbc8ce8db9fd879d3b6b5038c43fcde9bc7377d1c8fd4a73b438ea7797,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727729665807579665,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5594b5e708f2affed72a7af2f99cca8e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84c76c04fe41849783f07c21abf853b99edbd2ea733aae494635d5daf54a2ac8,PodSandboxId:d045b6935a02b14ac6db9d19d29e69e901c3718237863737bb3419279a95a8ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727729665797040371,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0a77a92f96646fe47038b8dae296ffa,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6df05d8d5b78e13b99c7f3d97ae1601970fdb43ec53a5ce16a7849989275e530,PodSandboxId:830f6c48dccb2bafa9a0da3f81fddcd59a40b6c83d432776800f1f6ad0f38394,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727729642140241359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpb8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1a0b9be-01bb-4b6d-ba51-f0982b68ef99,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc
59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d296639a17992e0aebc034d97462720012f027289d5c5493e42321b684af7f96,PodSandboxId:097125b65354f6bea232c4bc2eaa655ae9620557568c3ceca6dcd951b17896ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727729642778719998,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7jtvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217905ec-7c19-4f7f-93a2-a2d868627822,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f035fcf4f27669dc97bd27ea15da5fb8a5062c9f148aed60dc3d0994ffbbe1f,PodSandboxId:d045b6935a02b14ac6db9d19d29e69e901c3718237863737bb3419279a95a8ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727729642138848066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0a77a92f96646fe47038b8dae296ffa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a84693bd8b3d59922102b13e8ce27aa22c99328714d1c62d8213e829134de075,PodSandboxId:f47dc8cbc8ce8db9fd879d3b6b5038c43fcde9bc7377d1c8fd4a73b438ea7797,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727729642117186921,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617008,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 5594b5e708f2affed72a7af2f99cca8e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a3af029d6ae839e2a040b471d3e46d5839ef9283eadc5cb750c9b32e8f31bed,PodSandboxId:6cf1ec062fe1568493be42df5db163eafe6645195f6f4ef353452b76e0858bfd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727729642093702904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617008,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 1d4f47859e5a7d3a1cd398085690521f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8345e346ba02d50960ac01b8a2e6a59224ad5086873c2d86abd1ce3fd488e0,PodSandboxId:1a629caf531f7655e19484faadbda271ada22f8a4668adf899906322dbbe77ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727729642044669077,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 0a10faa854dc67807f3807fae1a77827,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ba65d46-b106-47b2-94d2-85eb462e63eb name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.673343057Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=72e57a61-af2c-43da-9190-21a114c32cf1 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.673467817Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=72e57a61-af2c-43da-9190-21a114c32cf1 name=/runtime.v1.RuntimeService/Version
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.675423782Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ef8fb0c7-13fc-439f-8bdf-1e1aadc08765 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.676255664Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727729688676218153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ef8fb0c7-13fc-439f-8bdf-1e1aadc08765 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.677131360Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cbc0bb34-a546-4e83-8715-0fd403d251fd name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.677194903Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cbc0bb34-a546-4e83-8715-0fd403d251fd name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 20:54:48 pause-617008 crio[2080]: time="2024-09-30 20:54:48.677430836Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7118ce479d2635a4e7550c31a9d22c2f863135d554de6a46c77aa7b8b1237af6,PodSandboxId:830f6c48dccb2bafa9a0da3f81fddcd59a40b6c83d432776800f1f6ad0f38394,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727729669638597851,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpb8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1a0b9be-01bb-4b6d-ba51-f0982b68ef99,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5a38406394d57b781d00f1c6949dda32f0bc7ab2d9c8022aa6525eeab5699fd,PodSandboxId:097125b65354f6bea232c4bc2eaa655ae9620557568c3ceca6dcd951b17896ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727729669643200532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7jtvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217905ec-7c19-4f7f-93a2-a2d868627822,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef070e92a1237334653cb1f54677b6077ac4c0716c930e4acbc64d66c2e718e3,PodSandboxId:6cf1ec062fe1568493be42df5db163eafe6645195f6f4ef353452b76e0858bfd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727729665820243636,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 1d4f47859e5a7d3a1cd398085690521f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5070854549a909ce0263bb405f59a9c881a9590d7303e44d3bcb7695008b3aa2,PodSandboxId:1a629caf531f7655e19484faadbda271ada22f8a4668adf899906322dbbe77ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727729665819105767,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
a10faa854dc67807f3807fae1a77827,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c406f1b55b82a495fbc3c253efe04199bd5a617299f3bd86ece750a5574c725,PodSandboxId:f47dc8cbc8ce8db9fd879d3b6b5038c43fcde9bc7377d1c8fd4a73b438ea7797,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727729665807579665,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5594b5e708f2affed72a7af2f99cca8e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84c76c04fe41849783f07c21abf853b99edbd2ea733aae494635d5daf54a2ac8,PodSandboxId:d045b6935a02b14ac6db9d19d29e69e901c3718237863737bb3419279a95a8ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727729665797040371,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0a77a92f96646fe47038b8dae296ffa,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6df05d8d5b78e13b99c7f3d97ae1601970fdb43ec53a5ce16a7849989275e530,PodSandboxId:830f6c48dccb2bafa9a0da3f81fddcd59a40b6c83d432776800f1f6ad0f38394,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727729642140241359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mpb8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1a0b9be-01bb-4b6d-ba51-f0982b68ef99,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc
59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d296639a17992e0aebc034d97462720012f027289d5c5493e42321b684af7f96,PodSandboxId:097125b65354f6bea232c4bc2eaa655ae9620557568c3ceca6dcd951b17896ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727729642778719998,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7jtvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 217905ec-7c19-4f7f-93a2-a2d868627822,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f035fcf4f27669dc97bd27ea15da5fb8a5062c9f148aed60dc3d0994ffbbe1f,PodSandboxId:d045b6935a02b14ac6db9d19d29e69e901c3718237863737bb3419279a95a8ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727729642138848066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-617008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0a77a92f96646fe47038b8dae296ffa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a84693bd8b3d59922102b13e8ce27aa22c99328714d1c62d8213e829134de075,PodSandboxId:f47dc8cbc8ce8db9fd879d3b6b5038c43fcde9bc7377d1c8fd4a73b438ea7797,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727729642117186921,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617008,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 5594b5e708f2affed72a7af2f99cca8e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a3af029d6ae839e2a040b471d3e46d5839ef9283eadc5cb750c9b32e8f31bed,PodSandboxId:6cf1ec062fe1568493be42df5db163eafe6645195f6f4ef353452b76e0858bfd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727729642093702904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617008,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 1d4f47859e5a7d3a1cd398085690521f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8345e346ba02d50960ac01b8a2e6a59224ad5086873c2d86abd1ce3fd488e0,PodSandboxId:1a629caf531f7655e19484faadbda271ada22f8a4668adf899906322dbbe77ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727729642044669077,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617008,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 0a10faa854dc67807f3807fae1a77827,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cbc0bb34-a546-4e83-8715-0fd403d251fd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b5a38406394d5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   19 seconds ago      Running             coredns                   2                   097125b65354f       coredns-7c65d6cfc9-7jtvv
	7118ce479d263       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   19 seconds ago      Running             kube-proxy                2                   830f6c48dccb2       kube-proxy-mpb8x
	ef070e92a1237       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   22 seconds ago      Running             kube-controller-manager   2                   6cf1ec062fe15       kube-controller-manager-pause-617008
	5070854549a90       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   22 seconds ago      Running             kube-apiserver            2                   1a629caf531f7       kube-apiserver-pause-617008
	1c406f1b55b82       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   22 seconds ago      Running             etcd                      2                   f47dc8cbc8ce8       etcd-pause-617008
	84c76c04fe418       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   22 seconds ago      Running             kube-scheduler            2                   d045b6935a02b       kube-scheduler-pause-617008
	d296639a17992       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   45 seconds ago      Exited              coredns                   1                   097125b65354f       coredns-7c65d6cfc9-7jtvv
	6df05d8d5b78e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   46 seconds ago      Exited              kube-proxy                1                   830f6c48dccb2       kube-proxy-mpb8x
	6f035fcf4f276       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   46 seconds ago      Exited              kube-scheduler            1                   d045b6935a02b       kube-scheduler-pause-617008
	a84693bd8b3d5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   46 seconds ago      Exited              etcd                      1                   f47dc8cbc8ce8       etcd-pause-617008
	9a3af029d6ae8       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   46 seconds ago      Exited              kube-controller-manager   1                   6cf1ec062fe15       kube-controller-manager-pause-617008
	9a8345e346ba0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   46 seconds ago      Exited              kube-apiserver            1                   1a629caf531f7       kube-apiserver-pause-617008
	
	
	==> coredns [b5a38406394d57b781d00f1c6949dda32f0bc7ab2d9c8022aa6525eeab5699fd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38314 - 62705 "HINFO IN 2204494186670646042.879768081321963679. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.018203503s
	
	
	==> coredns [d296639a17992e0aebc034d97462720012f027289d5c5493e42321b684af7f96] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:52321 - 34001 "HINFO IN 5517243117269935179.7094196997823424969. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010455896s
	
	
	==> describe nodes <==
	Name:               pause-617008
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-617008
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=pause-617008
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T20_53_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:53:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-617008
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 20:54:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 20:54:28 +0000   Mon, 30 Sep 2024 20:53:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 20:54:28 +0000   Mon, 30 Sep 2024 20:53:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 20:54:28 +0000   Mon, 30 Sep 2024 20:53:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 20:54:28 +0000   Mon, 30 Sep 2024 20:53:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.245
	  Hostname:    pause-617008
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 4b8c8ed9110b44729164b24a1bed867e
	  System UUID:                4b8c8ed9-110b-4472-9164-b24a1bed867e
	  Boot ID:                    d897aff9-f75f-466e-820f-0921e36f5ff6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-7jtvv                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     84s
	  kube-system                 etcd-pause-617008                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         89s
	  kube-system                 kube-apiserver-pause-617008             250m (12%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-pause-617008    200m (10%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-mpb8x                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-scheduler-pause-617008             100m (5%)     0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 82s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  Starting                 42s                kube-proxy       
	  Normal  Starting                 89s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  89s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  89s                kubelet          Node pause-617008 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    89s                kubelet          Node pause-617008 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     89s                kubelet          Node pause-617008 status is now: NodeHasSufficientPID
	  Normal  NodeReady                88s                kubelet          Node pause-617008 status is now: NodeReady
	  Normal  RegisteredNode           85s                node-controller  Node pause-617008 event: Registered Node pause-617008 in Controller
	  Normal  RegisteredNode           39s                node-controller  Node pause-617008 event: Registered Node pause-617008 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node pause-617008 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node pause-617008 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node pause-617008 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16s                node-controller  Node pause-617008 event: Registered Node pause-617008 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep30 20:53] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.072884] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058885] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.203009] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.121454] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.283580] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.987717] systemd-fstab-generator[749]: Ignoring "noauto" option for root device
	[  +4.886427] systemd-fstab-generator[886]: Ignoring "noauto" option for root device
	[  +0.063602] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.495300] systemd-fstab-generator[1223]: Ignoring "noauto" option for root device
	[  +0.094466] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.732538] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[  +0.921452] kauditd_printk_skb: 43 callbacks suppressed
	[ +34.437837] systemd-fstab-generator[2003]: Ignoring "noauto" option for root device
	[  +0.088962] kauditd_printk_skb: 45 callbacks suppressed
	[  +0.087541] systemd-fstab-generator[2015]: Ignoring "noauto" option for root device
	[  +0.181814] systemd-fstab-generator[2029]: Ignoring "noauto" option for root device
	[  +0.138112] systemd-fstab-generator[2042]: Ignoring "noauto" option for root device
	[  +0.267697] systemd-fstab-generator[2070]: Ignoring "noauto" option for root device
	[Sep30 20:54] systemd-fstab-generator[2269]: Ignoring "noauto" option for root device
	[  +5.166793] kauditd_printk_skb: 195 callbacks suppressed
	[ +18.133791] systemd-fstab-generator[3120]: Ignoring "noauto" option for root device
	[  +8.546103] kauditd_printk_skb: 46 callbacks suppressed
	[  +9.205476] systemd-fstab-generator[3551]: Ignoring "noauto" option for root device
	
	
	==> etcd [1c406f1b55b82a495fbc3c253efe04199bd5a617299f3bd86ece750a5574c725] <==
	{"level":"info","ts":"2024-09-30T20:54:26.083697Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-30T20:54:26.083721Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-30T20:54:26.083741Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-30T20:54:26.083914Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.245:2380"}
	{"level":"info","ts":"2024-09-30T20:54:26.083922Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.245:2380"}
	{"level":"info","ts":"2024-09-30T20:54:26.085307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1c098f17cdf2fc3 switched to configuration voters=(16267170017011773379)"}
	{"level":"info","ts":"2024-09-30T20:54:26.087033Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f0ed87b681c0ac99","local-member-id":"e1c098f17cdf2fc3","added-peer-id":"e1c098f17cdf2fc3","added-peer-peer-urls":["https://192.168.61.245:2380"]}
	{"level":"info","ts":"2024-09-30T20:54:26.087312Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f0ed87b681c0ac99","local-member-id":"e1c098f17cdf2fc3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T20:54:26.087946Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T20:54:27.156745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1c098f17cdf2fc3 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-30T20:54:27.156861Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1c098f17cdf2fc3 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-30T20:54:27.156921Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1c098f17cdf2fc3 received MsgPreVoteResp from e1c098f17cdf2fc3 at term 3"}
	{"level":"info","ts":"2024-09-30T20:54:27.156956Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1c098f17cdf2fc3 became candidate at term 4"}
	{"level":"info","ts":"2024-09-30T20:54:27.156981Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1c098f17cdf2fc3 received MsgVoteResp from e1c098f17cdf2fc3 at term 4"}
	{"level":"info","ts":"2024-09-30T20:54:27.157008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1c098f17cdf2fc3 became leader at term 4"}
	{"level":"info","ts":"2024-09-30T20:54:27.157034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e1c098f17cdf2fc3 elected leader e1c098f17cdf2fc3 at term 4"}
	{"level":"info","ts":"2024-09-30T20:54:27.161631Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e1c098f17cdf2fc3","local-member-attributes":"{Name:pause-617008 ClientURLs:[https://192.168.61.245:2379]}","request-path":"/0/members/e1c098f17cdf2fc3/attributes","cluster-id":"f0ed87b681c0ac99","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T20:54:27.161874Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T20:54:27.162768Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T20:54:27.163122Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T20:54:27.163161Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-30T20:54:27.163172Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T20:54:27.163688Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-30T20:54:27.165573Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T20:54:27.166329Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.245:2379"}
	
	
	==> etcd [a84693bd8b3d59922102b13e8ce27aa22c99328714d1c62d8213e829134de075] <==
	{"level":"info","ts":"2024-09-30T20:54:04.208595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1c098f17cdf2fc3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-30T20:54:04.208611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1c098f17cdf2fc3 received MsgPreVoteResp from e1c098f17cdf2fc3 at term 2"}
	{"level":"info","ts":"2024-09-30T20:54:04.208627Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1c098f17cdf2fc3 became candidate at term 3"}
	{"level":"info","ts":"2024-09-30T20:54:04.208660Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1c098f17cdf2fc3 received MsgVoteResp from e1c098f17cdf2fc3 at term 3"}
	{"level":"info","ts":"2024-09-30T20:54:04.208679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1c098f17cdf2fc3 became leader at term 3"}
	{"level":"info","ts":"2024-09-30T20:54:04.208699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e1c098f17cdf2fc3 elected leader e1c098f17cdf2fc3 at term 3"}
	{"level":"info","ts":"2024-09-30T20:54:04.210453Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e1c098f17cdf2fc3","local-member-attributes":"{Name:pause-617008 ClientURLs:[https://192.168.61.245:2379]}","request-path":"/0/members/e1c098f17cdf2fc3/attributes","cluster-id":"f0ed87b681c0ac99","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T20:54:04.210684Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T20:54:04.211110Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T20:54:04.211858Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T20:54:04.212809Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-30T20:54:04.213565Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T20:54:04.214134Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T20:54:04.214161Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-30T20:54:04.222459Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.245:2379"}
	{"level":"info","ts":"2024-09-30T20:54:13.530740Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-30T20:54:13.530783Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-617008","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.245:2380"],"advertise-client-urls":["https://192.168.61.245:2379"]}
	{"level":"warn","ts":"2024-09-30T20:54:13.530878Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T20:54:13.530990Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T20:54:13.549686Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.245:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T20:54:13.549735Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.245:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-30T20:54:13.551162Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e1c098f17cdf2fc3","current-leader-member-id":"e1c098f17cdf2fc3"}
	{"level":"info","ts":"2024-09-30T20:54:13.554509Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.61.245:2380"}
	{"level":"info","ts":"2024-09-30T20:54:13.554604Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.61.245:2380"}
	{"level":"info","ts":"2024-09-30T20:54:13.554614Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-617008","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.245:2380"],"advertise-client-urls":["https://192.168.61.245:2379"]}
	
	
	==> kernel <==
	 20:54:49 up 2 min,  0 users,  load average: 0.61, 0.31, 0.12
	Linux pause-617008 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5070854549a909ce0263bb405f59a9c881a9590d7303e44d3bcb7695008b3aa2] <==
	I0930 20:54:28.558796       1 shared_informer.go:320] Caches are synced for configmaps
	I0930 20:54:28.561266       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0930 20:54:28.561783       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0930 20:54:28.574668       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0930 20:54:28.575699       1 aggregator.go:171] initial CRD sync complete...
	I0930 20:54:28.575730       1 autoregister_controller.go:144] Starting autoregister controller
	I0930 20:54:28.575739       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0930 20:54:28.575746       1 cache.go:39] Caches are synced for autoregister controller
	I0930 20:54:28.585627       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0930 20:54:28.585753       1 policy_source.go:224] refreshing policies
	I0930 20:54:28.593535       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0930 20:54:28.593891       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0930 20:54:28.610733       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0930 20:54:28.636571       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0930 20:54:28.641186       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0930 20:54:28.641218       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0930 20:54:28.646489       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0930 20:54:29.445431       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0930 20:54:30.095929       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0930 20:54:30.111710       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0930 20:54:30.146736       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0930 20:54:30.185551       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0930 20:54:30.193932       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0930 20:54:33.624807       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0930 20:54:33.628339       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [9a8345e346ba02d50960ac01b8a2e6a59224ad5086873c2d86abd1ce3fd488e0] <==
	W0930 20:54:22.845207       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:22.869200       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:22.891175       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:22.918443       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:22.938317       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:22.955166       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.030605       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.038308       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.104801       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.116430       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.127853       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.170042       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.195491       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.234718       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.260521       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.319217       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.344015       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.351584       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.412302       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.428607       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.465443       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.482615       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.537332       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.708862       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 20:54:23.843466       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [9a3af029d6ae839e2a040b471d3e46d5839ef9283eadc5cb750c9b32e8f31bed] <==
	I0930 20:54:09.008399       1 shared_informer.go:320] Caches are synced for node
	I0930 20:54:09.008469       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0930 20:54:09.008487       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0930 20:54:09.008492       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0930 20:54:09.008497       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0930 20:54:09.008581       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-617008"
	I0930 20:54:09.010109       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0930 20:54:09.011309       1 shared_informer.go:320] Caches are synced for HPA
	I0930 20:54:09.021539       1 shared_informer.go:320] Caches are synced for job
	I0930 20:54:09.027967       1 shared_informer.go:320] Caches are synced for cronjob
	I0930 20:54:09.038283       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0930 20:54:09.051702       1 shared_informer.go:320] Caches are synced for daemon sets
	I0930 20:54:09.054096       1 shared_informer.go:320] Caches are synced for taint
	I0930 20:54:09.054279       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0930 20:54:09.054359       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-617008"
	I0930 20:54:09.054460       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0930 20:54:09.102111       1 shared_informer.go:320] Caches are synced for deployment
	I0930 20:54:09.142678       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0930 20:54:09.143032       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="103.287µs"
	I0930 20:54:09.157902       1 shared_informer.go:320] Caches are synced for disruption
	I0930 20:54:09.180875       1 shared_informer.go:320] Caches are synced for resource quota
	I0930 20:54:09.222511       1 shared_informer.go:320] Caches are synced for resource quota
	I0930 20:54:09.617230       1 shared_informer.go:320] Caches are synced for garbage collector
	I0930 20:54:09.617511       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0930 20:54:09.647921       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [ef070e92a1237334653cb1f54677b6077ac4c0716c930e4acbc64d66c2e718e3] <==
	I0930 20:54:32.014663       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0930 20:54:32.014803       1 shared_informer.go:320] Caches are synced for daemon sets
	I0930 20:54:32.015281       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-617008"
	I0930 20:54:32.017181       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0930 20:54:32.023513       1 shared_informer.go:320] Caches are synced for TTL
	I0930 20:54:32.034792       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0930 20:54:32.058120       1 shared_informer.go:320] Caches are synced for GC
	I0930 20:54:32.062905       1 shared_informer.go:320] Caches are synced for attach detach
	I0930 20:54:32.066151       1 shared_informer.go:320] Caches are synced for persistent volume
	I0930 20:54:32.068035       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0930 20:54:32.070367       1 shared_informer.go:320] Caches are synced for node
	I0930 20:54:32.070479       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0930 20:54:32.070587       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0930 20:54:32.070611       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0930 20:54:32.070683       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0930 20:54:32.070892       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-617008"
	I0930 20:54:32.113367       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0930 20:54:32.170521       1 shared_informer.go:320] Caches are synced for endpoint
	I0930 20:54:32.182999       1 shared_informer.go:320] Caches are synced for resource quota
	I0930 20:54:32.222269       1 shared_informer.go:320] Caches are synced for resource quota
	I0930 20:54:32.639454       1 shared_informer.go:320] Caches are synced for garbage collector
	I0930 20:54:32.642860       1 shared_informer.go:320] Caches are synced for garbage collector
	I0930 20:54:32.642959       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0930 20:54:33.640570       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="33.160597ms"
	I0930 20:54:33.640683       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="54.762µs"
	
	
	==> kube-proxy [6df05d8d5b78e13b99c7f3d97ae1601970fdb43ec53a5ce16a7849989275e530] <==
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 20:54:03.797606       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 20:54:05.714405       1 server.go:666] "Failed to retrieve node info" err="nodes \"pause-617008\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found]"
	I0930 20:54:06.770936       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.245"]
	E0930 20:54:06.771016       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 20:54:06.806765       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 20:54:06.806805       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 20:54:06.806831       1 server_linux.go:169] "Using iptables Proxier"
	I0930 20:54:06.809895       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 20:54:06.810720       1 server.go:483] "Version info" version="v1.31.1"
	I0930 20:54:06.810801       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:54:06.812767       1 config.go:199] "Starting service config controller"
	I0930 20:54:06.812865       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 20:54:06.813031       1 config.go:328] "Starting node config controller"
	I0930 20:54:06.813126       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 20:54:06.813472       1 config.go:105] "Starting endpoint slice config controller"
	I0930 20:54:06.813495       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 20:54:06.913871       1 shared_informer.go:320] Caches are synced for service config
	I0930 20:54:06.913886       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 20:54:06.914045       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [7118ce479d2635a4e7550c31a9d22c2f863135d554de6a46c77aa7b8b1237af6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 20:54:29.850119       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 20:54:29.861919       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.245"]
	E0930 20:54:29.862032       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 20:54:29.906548       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 20:54:29.906632       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 20:54:29.906665       1 server_linux.go:169] "Using iptables Proxier"
	I0930 20:54:29.912958       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 20:54:29.913545       1 server.go:483] "Version info" version="v1.31.1"
	I0930 20:54:29.913794       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:54:29.916293       1 config.go:199] "Starting service config controller"
	I0930 20:54:29.916342       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 20:54:29.916417       1 config.go:105] "Starting endpoint slice config controller"
	I0930 20:54:29.916440       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 20:54:29.917022       1 config.go:328] "Starting node config controller"
	I0930 20:54:29.919122       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 20:54:30.016631       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 20:54:30.016714       1 shared_informer.go:320] Caches are synced for service config
	I0930 20:54:30.019720       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6f035fcf4f27669dc97bd27ea15da5fb8a5062c9f148aed60dc3d0994ffbbe1f] <==
	E0930 20:54:05.714300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 20:54:05.713457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0930 20:54:05.714412       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 20:54:05.713516       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0930 20:54:05.714700       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 20:54:05.713577       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0930 20:54:05.714858       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 20:54:05.713638       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0930 20:54:05.717221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 20:54:05.713699       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0930 20:54:05.717288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 20:54:05.713749       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0930 20:54:05.717356       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0930 20:54:05.713793       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0930 20:54:05.717409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 20:54:05.713832       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0930 20:54:05.717462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 20:54:05.713876       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0930 20:54:05.717529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 20:54:05.713914       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0930 20:54:05.717585       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 20:54:05.713957       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0930 20:54:05.717660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0930 20:54:07.063879       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0930 20:54:13.491329       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [84c76c04fe41849783f07c21abf853b99edbd2ea733aae494635d5daf54a2ac8] <==
	I0930 20:54:26.674824       1 serving.go:386] Generated self-signed cert in-memory
	W0930 20:54:28.510759       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0930 20:54:28.510834       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0930 20:54:28.510844       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0930 20:54:28.510853       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0930 20:54:28.605439       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0930 20:54:28.605472       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 20:54:28.610515       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0930 20:54:28.615152       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0930 20:54:28.615208       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0930 20:54:28.615248       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0930 20:54:28.715543       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 20:54:25 pause-617008 kubelet[3127]: I0930 20:54:25.702590    3127 kubelet_node_status.go:72] "Attempting to register node" node="pause-617008"
	Sep 30 20:54:25 pause-617008 kubelet[3127]: E0930 20:54:25.703593    3127 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.245:8443: connect: connection refused" node="pause-617008"
	Sep 30 20:54:25 pause-617008 kubelet[3127]: I0930 20:54:25.775391    3127 scope.go:117] "RemoveContainer" containerID="6f035fcf4f27669dc97bd27ea15da5fb8a5062c9f148aed60dc3d0994ffbbe1f"
	Sep 30 20:54:25 pause-617008 kubelet[3127]: I0930 20:54:25.775674    3127 scope.go:117] "RemoveContainer" containerID="9a3af029d6ae839e2a040b471d3e46d5839ef9283eadc5cb750c9b32e8f31bed"
	Sep 30 20:54:25 pause-617008 kubelet[3127]: I0930 20:54:25.776976    3127 scope.go:117] "RemoveContainer" containerID="a84693bd8b3d59922102b13e8ce27aa22c99328714d1c62d8213e829134de075"
	Sep 30 20:54:25 pause-617008 kubelet[3127]: I0930 20:54:25.780608    3127 scope.go:117] "RemoveContainer" containerID="9a8345e346ba02d50960ac01b8a2e6a59224ad5086873c2d86abd1ce3fd488e0"
	Sep 30 20:54:25 pause-617008 kubelet[3127]: E0930 20:54:25.947429    3127 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-617008?timeout=10s\": dial tcp 192.168.61.245:8443: connect: connection refused" interval="800ms"
	Sep 30 20:54:26 pause-617008 kubelet[3127]: I0930 20:54:26.105693    3127 kubelet_node_status.go:72] "Attempting to register node" node="pause-617008"
	Sep 30 20:54:26 pause-617008 kubelet[3127]: E0930 20:54:26.106695    3127 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.245:8443: connect: connection refused" node="pause-617008"
	Sep 30 20:54:26 pause-617008 kubelet[3127]: I0930 20:54:26.908729    3127 kubelet_node_status.go:72] "Attempting to register node" node="pause-617008"
	Sep 30 20:54:28 pause-617008 kubelet[3127]: I0930 20:54:28.691274    3127 kubelet_node_status.go:111] "Node was previously registered" node="pause-617008"
	Sep 30 20:54:28 pause-617008 kubelet[3127]: I0930 20:54:28.691750    3127 kubelet_node_status.go:75] "Successfully registered node" node="pause-617008"
	Sep 30 20:54:28 pause-617008 kubelet[3127]: I0930 20:54:28.691879    3127 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 30 20:54:28 pause-617008 kubelet[3127]: I0930 20:54:28.693137    3127 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 30 20:54:29 pause-617008 kubelet[3127]: I0930 20:54:29.309462    3127 apiserver.go:52] "Watching apiserver"
	Sep 30 20:54:29 pause-617008 kubelet[3127]: I0930 20:54:29.330580    3127 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 30 20:54:29 pause-617008 kubelet[3127]: I0930 20:54:29.385236    3127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1a0b9be-01bb-4b6d-ba51-f0982b68ef99-xtables-lock\") pod \"kube-proxy-mpb8x\" (UID: \"e1a0b9be-01bb-4b6d-ba51-f0982b68ef99\") " pod="kube-system/kube-proxy-mpb8x"
	Sep 30 20:54:29 pause-617008 kubelet[3127]: I0930 20:54:29.385627    3127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1a0b9be-01bb-4b6d-ba51-f0982b68ef99-lib-modules\") pod \"kube-proxy-mpb8x\" (UID: \"e1a0b9be-01bb-4b6d-ba51-f0982b68ef99\") " pod="kube-system/kube-proxy-mpb8x"
	Sep 30 20:54:29 pause-617008 kubelet[3127]: I0930 20:54:29.614445    3127 scope.go:117] "RemoveContainer" containerID="6df05d8d5b78e13b99c7f3d97ae1601970fdb43ec53a5ce16a7849989275e530"
	Sep 30 20:54:29 pause-617008 kubelet[3127]: I0930 20:54:29.615298    3127 scope.go:117] "RemoveContainer" containerID="d296639a17992e0aebc034d97462720012f027289d5c5493e42321b684af7f96"
	Sep 30 20:54:33 pause-617008 kubelet[3127]: I0930 20:54:33.587781    3127 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 30 20:54:35 pause-617008 kubelet[3127]: E0930 20:54:35.405021    3127 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727729675403264152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:54:35 pause-617008 kubelet[3127]: E0930 20:54:35.405842    3127 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727729675403264152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:54:45 pause-617008 kubelet[3127]: E0930 20:54:45.407607    3127 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727729685407134606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 20:54:45 pause-617008 kubelet[3127]: E0930 20:54:45.407634    3127 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727729685407134606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 20:54:48.119576   58939 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19736-7672/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-617008 -n pause-617008
helpers_test.go:261: (dbg) Run:  kubectl --context pause-617008 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (76.77s)
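Note on the stderr above: "bufio.Scanner: token too long" is Go's bufio.ErrTooLong, returned when a single line in lastStart.txt exceeds the scanner's default 64 KiB token limit (bufio.MaxScanTokenSize). The snippet below is a minimal, hypothetical reader, not minikube's actual logs.go; it only sketches how that error arises and how Scanner.Buffer raises the limit.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	// readLines returns the lines of a file, tolerating lines longer than the
	// default 64 KiB bufio.Scanner limit by allowing the buffer to grow to 1 MiB.
	func readLines(path string) ([]string, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Without this call, any line longer than bufio.MaxScanTokenSize (64 KiB)
		// stops the scan and sc.Err() reports "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)

		var lines []string
		for sc.Scan() {
			lines = append(lines, sc.Text())
		}
		return lines, sc.Err()
	}

	func main() {
		lines, err := readLines("lastStart.txt") // hypothetical local copy of the log
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Printf("read %d lines\n", len(lines))
	}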

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (289.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-621406 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-621406 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m49.318681739s)

                                                
                                                
-- stdout --
	* [old-k8s-version-621406] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-621406" primary control-plane node in "old-k8s-version-621406" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 20:57:26.702394   66420 out.go:345] Setting OutFile to fd 1 ...
	I0930 20:57:26.702733   66420 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:57:26.702771   66420 out.go:358] Setting ErrFile to fd 2...
	I0930 20:57:26.702779   66420 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:57:26.703065   66420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 20:57:26.703755   66420 out.go:352] Setting JSON to false
	I0930 20:57:26.704846   66420 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5990,"bootTime":1727723857,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 20:57:26.704935   66420 start.go:139] virtualization: kvm guest
	I0930 20:57:26.790659   66420 out.go:177] * [old-k8s-version-621406] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 20:57:26.969084   66420 notify.go:220] Checking for updates...
	I0930 20:57:27.052110   66420 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 20:57:27.168831   66420 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 20:57:27.266393   66420 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:57:27.392261   66420 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:57:27.492765   66420 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 20:57:27.614690   66420 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 20:57:27.688025   66420 config.go:182] Loaded profile config "bridge-207733": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:57:27.688151   66420 config.go:182] Loaded profile config "enable-default-cni-207733": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:57:27.688270   66420 config.go:182] Loaded profile config "flannel-207733": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:57:27.688377   66420 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 20:57:27.808547   66420 out.go:177] * Using the kvm2 driver based on user configuration
	I0930 20:57:27.912737   66420 start.go:297] selected driver: kvm2
	I0930 20:57:27.912777   66420 start.go:901] validating driver "kvm2" against <nil>
	I0930 20:57:27.912805   66420 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 20:57:27.913861   66420 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 20:57:27.913953   66420 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 20:57:27.930344   66420 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 20:57:27.930396   66420 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 20:57:27.930646   66420 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 20:57:27.930680   66420 cni.go:84] Creating CNI manager for ""
	I0930 20:57:27.930733   66420 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 20:57:27.930743   66420 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 20:57:27.930816   66420 start.go:340] cluster config:
	{Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:57:27.930924   66420 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 20:57:28.058260   66420 out.go:177] * Starting "old-k8s-version-621406" primary control-plane node in "old-k8s-version-621406" cluster
	I0930 20:57:28.119191   66420 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 20:57:28.119306   66420 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0930 20:57:28.119326   66420 cache.go:56] Caching tarball of preloaded images
	I0930 20:57:28.119452   66420 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 20:57:28.119468   66420 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0930 20:57:28.119645   66420 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/config.json ...
	I0930 20:57:28.119685   66420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/config.json: {Name:mkefda673e53dc4755e39ca9fcbf671ea792e4af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:57:28.119875   66420 start.go:360] acquireMachinesLock for old-k8s-version-621406: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 20:57:43.340330   66420 start.go:364] duration metric: took 15.220406313s to acquireMachinesLock for "old-k8s-version-621406"
	I0930 20:57:43.340415   66420 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 20:57:43.340547   66420 start.go:125] createHost starting for "" (driver="kvm2")
	I0930 20:57:43.342378   66420 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 20:57:43.342596   66420 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:57:43.342652   66420 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:57:43.361063   66420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36305
	I0930 20:57:43.361476   66420 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:57:43.361969   66420 main.go:141] libmachine: Using API Version  1
	I0930 20:57:43.361991   66420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:57:43.362362   66420 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:57:43.362589   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 20:57:43.362720   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 20:57:43.362891   66420 start.go:159] libmachine.API.Create for "old-k8s-version-621406" (driver="kvm2")
	I0930 20:57:43.362927   66420 client.go:168] LocalClient.Create starting
	I0930 20:57:43.362960   66420 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem
	I0930 20:57:43.363009   66420 main.go:141] libmachine: Decoding PEM data...
	I0930 20:57:43.363029   66420 main.go:141] libmachine: Parsing certificate...
	I0930 20:57:43.363128   66420 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem
	I0930 20:57:43.363159   66420 main.go:141] libmachine: Decoding PEM data...
	I0930 20:57:43.363176   66420 main.go:141] libmachine: Parsing certificate...
	I0930 20:57:43.363214   66420 main.go:141] libmachine: Running pre-create checks...
	I0930 20:57:43.363227   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .PreCreateCheck
	I0930 20:57:43.363673   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetConfigRaw
	I0930 20:57:43.364174   66420 main.go:141] libmachine: Creating machine...
	I0930 20:57:43.364188   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .Create
	I0930 20:57:43.364396   66420 main.go:141] libmachine: (old-k8s-version-621406) Creating KVM machine...
	I0930 20:57:43.365882   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | found existing default KVM network
	I0930 20:57:43.367324   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 20:57:43.367141   66575 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:80:6e:5d} reservation:<nil>}
	I0930 20:57:43.368492   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 20:57:43.368396   66575 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:31:2d:47} reservation:<nil>}
	I0930 20:57:43.369550   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 20:57:43.369407   66575 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:d4:66:88} reservation:<nil>}
	I0930 20:57:43.370789   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 20:57:43.370622   66575 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003be5b0}
	I0930 20:57:43.370824   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | created network xml: 
	I0930 20:57:43.370859   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | <network>
	I0930 20:57:43.370893   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG |   <name>mk-old-k8s-version-621406</name>
	I0930 20:57:43.370907   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG |   <dns enable='no'/>
	I0930 20:57:43.370919   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG |   
	I0930 20:57:43.370929   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0930 20:57:43.370948   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG |     <dhcp>
	I0930 20:57:43.370961   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0930 20:57:43.370968   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG |     </dhcp>
	I0930 20:57:43.370986   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG |   </ip>
	I0930 20:57:43.370993   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG |   
	I0930 20:57:43.371000   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | </network>
	I0930 20:57:43.371007   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | 
	I0930 20:57:43.376742   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | trying to create private KVM network mk-old-k8s-version-621406 192.168.72.0/24...
	I0930 20:57:43.458160   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | private KVM network mk-old-k8s-version-621406 192.168.72.0/24 created
	I0930 20:57:43.458192   66420 main.go:141] libmachine: (old-k8s-version-621406) Setting up store path in /home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406 ...
	I0930 20:57:43.458228   66420 main.go:141] libmachine: (old-k8s-version-621406) Building disk image from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 20:57:43.458284   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 20:57:43.458161   66575 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:57:43.458409   66420 main.go:141] libmachine: (old-k8s-version-621406) Downloading /home/jenkins/minikube-integration/19736-7672/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 20:57:43.718768   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 20:57:43.718608   66575 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa...
	I0930 20:57:44.045114   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 20:57:44.044987   66575 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/old-k8s-version-621406.rawdisk...
	I0930 20:57:44.045150   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | Writing magic tar header
	I0930 20:57:44.045187   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | Writing SSH key tar header
	I0930 20:57:44.045234   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 20:57:44.045146   66575 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406 ...
	I0930 20:57:44.045269   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406
	I0930 20:57:44.045313   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines
	I0930 20:57:44.045333   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:57:44.045358   66420 main.go:141] libmachine: (old-k8s-version-621406) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406 (perms=drwx------)
	I0930 20:57:44.045375   66420 main.go:141] libmachine: (old-k8s-version-621406) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines (perms=drwxr-xr-x)
	I0930 20:57:44.045389   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672
	I0930 20:57:44.045402   66420 main.go:141] libmachine: (old-k8s-version-621406) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube (perms=drwxr-xr-x)
	I0930 20:57:44.045432   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 20:57:44.045448   66420 main.go:141] libmachine: (old-k8s-version-621406) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672 (perms=drwxrwxr-x)
	I0930 20:57:44.045462   66420 main.go:141] libmachine: (old-k8s-version-621406) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 20:57:44.045475   66420 main.go:141] libmachine: (old-k8s-version-621406) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 20:57:44.045489   66420 main.go:141] libmachine: (old-k8s-version-621406) Creating domain...
	I0930 20:57:44.045514   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | Checking permissions on dir: /home/jenkins
	I0930 20:57:44.045529   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | Checking permissions on dir: /home
	I0930 20:57:44.045536   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | Skipping /home - not owner
	I0930 20:57:44.046769   66420 main.go:141] libmachine: (old-k8s-version-621406) define libvirt domain using xml: 
	I0930 20:57:44.046787   66420 main.go:141] libmachine: (old-k8s-version-621406) <domain type='kvm'>
	I0930 20:57:44.046798   66420 main.go:141] libmachine: (old-k8s-version-621406)   <name>old-k8s-version-621406</name>
	I0930 20:57:44.046806   66420 main.go:141] libmachine: (old-k8s-version-621406)   <memory unit='MiB'>2200</memory>
	I0930 20:57:44.046817   66420 main.go:141] libmachine: (old-k8s-version-621406)   <vcpu>2</vcpu>
	I0930 20:57:44.046827   66420 main.go:141] libmachine: (old-k8s-version-621406)   <features>
	I0930 20:57:44.046839   66420 main.go:141] libmachine: (old-k8s-version-621406)     <acpi/>
	I0930 20:57:44.046859   66420 main.go:141] libmachine: (old-k8s-version-621406)     <apic/>
	I0930 20:57:44.046880   66420 main.go:141] libmachine: (old-k8s-version-621406)     <pae/>
	I0930 20:57:44.046890   66420 main.go:141] libmachine: (old-k8s-version-621406)     
	I0930 20:57:44.046898   66420 main.go:141] libmachine: (old-k8s-version-621406)   </features>
	I0930 20:57:44.046910   66420 main.go:141] libmachine: (old-k8s-version-621406)   <cpu mode='host-passthrough'>
	I0930 20:57:44.046920   66420 main.go:141] libmachine: (old-k8s-version-621406)   
	I0930 20:57:44.046926   66420 main.go:141] libmachine: (old-k8s-version-621406)   </cpu>
	I0930 20:57:44.046937   66420 main.go:141] libmachine: (old-k8s-version-621406)   <os>
	I0930 20:57:44.046944   66420 main.go:141] libmachine: (old-k8s-version-621406)     <type>hvm</type>
	I0930 20:57:44.046955   66420 main.go:141] libmachine: (old-k8s-version-621406)     <boot dev='cdrom'/>
	I0930 20:57:44.046971   66420 main.go:141] libmachine: (old-k8s-version-621406)     <boot dev='hd'/>
	I0930 20:57:44.046992   66420 main.go:141] libmachine: (old-k8s-version-621406)     <bootmenu enable='no'/>
	I0930 20:57:44.047001   66420 main.go:141] libmachine: (old-k8s-version-621406)   </os>
	I0930 20:57:44.047009   66420 main.go:141] libmachine: (old-k8s-version-621406)   <devices>
	I0930 20:57:44.047020   66420 main.go:141] libmachine: (old-k8s-version-621406)     <disk type='file' device='cdrom'>
	I0930 20:57:44.047034   66420 main.go:141] libmachine: (old-k8s-version-621406)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/boot2docker.iso'/>
	I0930 20:57:44.047045   66420 main.go:141] libmachine: (old-k8s-version-621406)       <target dev='hdc' bus='scsi'/>
	I0930 20:57:44.047053   66420 main.go:141] libmachine: (old-k8s-version-621406)       <readonly/>
	I0930 20:57:44.047065   66420 main.go:141] libmachine: (old-k8s-version-621406)     </disk>
	I0930 20:57:44.047081   66420 main.go:141] libmachine: (old-k8s-version-621406)     <disk type='file' device='disk'>
	I0930 20:57:44.047093   66420 main.go:141] libmachine: (old-k8s-version-621406)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 20:57:44.047110   66420 main.go:141] libmachine: (old-k8s-version-621406)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/old-k8s-version-621406.rawdisk'/>
	I0930 20:57:44.047120   66420 main.go:141] libmachine: (old-k8s-version-621406)       <target dev='hda' bus='virtio'/>
	I0930 20:57:44.047131   66420 main.go:141] libmachine: (old-k8s-version-621406)     </disk>
	I0930 20:57:44.047140   66420 main.go:141] libmachine: (old-k8s-version-621406)     <interface type='network'>
	I0930 20:57:44.047148   66420 main.go:141] libmachine: (old-k8s-version-621406)       <source network='mk-old-k8s-version-621406'/>
	I0930 20:57:44.047158   66420 main.go:141] libmachine: (old-k8s-version-621406)       <model type='virtio'/>
	I0930 20:57:44.047169   66420 main.go:141] libmachine: (old-k8s-version-621406)     </interface>
	I0930 20:57:44.047176   66420 main.go:141] libmachine: (old-k8s-version-621406)     <interface type='network'>
	I0930 20:57:44.047189   66420 main.go:141] libmachine: (old-k8s-version-621406)       <source network='default'/>
	I0930 20:57:44.047198   66420 main.go:141] libmachine: (old-k8s-version-621406)       <model type='virtio'/>
	I0930 20:57:44.047206   66420 main.go:141] libmachine: (old-k8s-version-621406)     </interface>
	I0930 20:57:44.047216   66420 main.go:141] libmachine: (old-k8s-version-621406)     <serial type='pty'>
	I0930 20:57:44.047225   66420 main.go:141] libmachine: (old-k8s-version-621406)       <target port='0'/>
	I0930 20:57:44.047235   66420 main.go:141] libmachine: (old-k8s-version-621406)     </serial>
	I0930 20:57:44.047243   66420 main.go:141] libmachine: (old-k8s-version-621406)     <console type='pty'>
	I0930 20:57:44.047253   66420 main.go:141] libmachine: (old-k8s-version-621406)       <target type='serial' port='0'/>
	I0930 20:57:44.047261   66420 main.go:141] libmachine: (old-k8s-version-621406)     </console>
	I0930 20:57:44.047270   66420 main.go:141] libmachine: (old-k8s-version-621406)     <rng model='virtio'>
	I0930 20:57:44.047280   66420 main.go:141] libmachine: (old-k8s-version-621406)       <backend model='random'>/dev/random</backend>
	I0930 20:57:44.047289   66420 main.go:141] libmachine: (old-k8s-version-621406)     </rng>
	I0930 20:57:44.047296   66420 main.go:141] libmachine: (old-k8s-version-621406)     
	I0930 20:57:44.047305   66420 main.go:141] libmachine: (old-k8s-version-621406)     
	I0930 20:57:44.047313   66420 main.go:141] libmachine: (old-k8s-version-621406)   </devices>
	I0930 20:57:44.047332   66420 main.go:141] libmachine: (old-k8s-version-621406) </domain>
	I0930 20:57:44.047342   66420 main.go:141] libmachine: (old-k8s-version-621406) 
	I0930 20:57:44.052629   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:e5:97:e1 in network default
	I0930 20:57:44.053339   66420 main.go:141] libmachine: (old-k8s-version-621406) Ensuring networks are active...
	I0930 20:57:44.053377   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:57:44.054258   66420 main.go:141] libmachine: (old-k8s-version-621406) Ensuring network default is active
	I0930 20:57:44.054630   66420 main.go:141] libmachine: (old-k8s-version-621406) Ensuring network mk-old-k8s-version-621406 is active
	I0930 20:57:44.055655   66420 main.go:141] libmachine: (old-k8s-version-621406) Getting domain xml...
	I0930 20:57:44.056641   66420 main.go:141] libmachine: (old-k8s-version-621406) Creating domain...
	I0930 20:57:45.483448   66420 main.go:141] libmachine: (old-k8s-version-621406) Waiting to get IP...
	I0930 20:57:45.484584   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:57:45.485184   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 20:57:45.485241   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 20:57:45.485166   66575 retry.go:31] will retry after 203.97227ms: waiting for machine to come up
	I0930 20:57:45.690983   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:57:45.691632   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 20:57:45.691661   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 20:57:45.691516   66575 retry.go:31] will retry after 266.469522ms: waiting for machine to come up
	I0930 20:57:45.960179   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:57:45.960831   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 20:57:45.960866   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 20:57:45.960766   66575 retry.go:31] will retry after 329.435504ms: waiting for machine to come up
	I0930 20:57:46.292430   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:57:46.292999   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 20:57:46.293025   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 20:57:46.292951   66575 retry.go:31] will retry after 562.767336ms: waiting for machine to come up
	I0930 20:57:46.858006   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:57:46.858601   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 20:57:46.858629   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 20:57:46.858551   66575 retry.go:31] will retry after 665.824841ms: waiting for machine to come up
	I0930 20:57:47.526231   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:57:47.526793   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 20:57:47.526837   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 20:57:47.526751   66575 retry.go:31] will retry after 687.810585ms: waiting for machine to come up
	I0930 20:57:48.216864   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:57:48.217574   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 20:57:48.217602   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 20:57:48.217545   66575 retry.go:31] will retry after 1.035330754s: waiting for machine to come up
	I0930 20:57:49.254644   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:57:49.255181   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 20:57:49.255209   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 20:57:49.255108   66575 retry.go:31] will retry after 1.475943592s: waiting for machine to come up
	I0930 20:57:50.732914   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:57:50.733481   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 20:57:50.733503   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 20:57:50.733442   66575 retry.go:31] will retry after 1.576798933s: waiting for machine to come up
	I0930 20:57:52.311873   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:57:52.312331   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 20:57:52.312374   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 20:57:52.312297   66575 retry.go:31] will retry after 1.59364027s: waiting for machine to come up
	I0930 20:57:53.907419   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:57:53.907998   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 20:57:53.908082   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 20:57:53.907981   66575 retry.go:31] will retry after 2.806107376s: waiting for machine to come up
	I0930 20:57:56.718015   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:57:56.718723   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 20:57:56.718746   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 20:57:56.718673   66575 retry.go:31] will retry after 2.926822251s: waiting for machine to come up
	I0930 20:57:59.647522   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:57:59.648083   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 20:57:59.648115   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 20:57:59.648026   66575 retry.go:31] will retry after 2.876723909s: waiting for machine to come up
	I0930 20:58:02.525772   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:02.526246   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 20:58:02.526268   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 20:58:02.526228   66575 retry.go:31] will retry after 4.893150562s: waiting for machine to come up
	I0930 20:58:07.421615   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:07.422353   66420 main.go:141] libmachine: (old-k8s-version-621406) Found IP for machine: 192.168.72.159
	I0930 20:58:07.422374   66420 main.go:141] libmachine: (old-k8s-version-621406) Reserving static IP address...
	I0930 20:58:07.422425   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has current primary IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:07.422845   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-621406", mac: "52:54:00:9b:e3:ab", ip: "192.168.72.159"} in network mk-old-k8s-version-621406
	I0930 20:58:07.510571   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | Getting to WaitForSSH function...
	I0930 20:58:07.510603   66420 main.go:141] libmachine: (old-k8s-version-621406) Reserved static IP address: 192.168.72.159
	I0930 20:58:07.510619   66420 main.go:141] libmachine: (old-k8s-version-621406) Waiting for SSH to be available...
	I0930 20:58:07.513356   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:07.513828   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 21:57:59 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9b:e3:ab}
	I0930 20:58:07.513859   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:07.514010   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | Using SSH client type: external
	I0930 20:58:07.514055   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa (-rw-------)
	I0930 20:58:07.514102   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 20:58:07.514129   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | About to run SSH command:
	I0930 20:58:07.514146   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | exit 0
	I0930 20:58:07.639850   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | SSH cmd err, output: <nil>: 
	I0930 20:58:07.640119   66420 main.go:141] libmachine: (old-k8s-version-621406) KVM machine creation complete!
	I0930 20:58:07.640505   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetConfigRaw
	I0930 20:58:07.641102   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 20:58:07.641293   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 20:58:07.641492   66420 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 20:58:07.641509   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetState
	I0930 20:58:07.643015   66420 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 20:58:07.643038   66420 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 20:58:07.643048   66420 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 20:58:07.643056   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 20:58:07.646417   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:07.647026   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 21:57:59 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 20:58:07.647059   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:07.647247   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 20:58:07.647451   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 20:58:07.647635   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 20:58:07.647845   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 20:58:07.648093   66420 main.go:141] libmachine: Using SSH client type: native
	I0930 20:58:07.648358   66420 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 20:58:07.648376   66420 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 20:58:07.755001   66420 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 20:58:07.755024   66420 main.go:141] libmachine: Detecting the provisioner...
	I0930 20:58:07.755033   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 20:58:07.757677   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:07.757993   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 21:57:59 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 20:58:07.758017   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:07.758184   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 20:58:07.758390   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 20:58:07.758536   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 20:58:07.758744   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 20:58:07.758900   66420 main.go:141] libmachine: Using SSH client type: native
	I0930 20:58:07.759083   66420 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 20:58:07.759096   66420 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 20:58:07.864050   66420 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 20:58:07.864149   66420 main.go:141] libmachine: found compatible host: buildroot
	I0930 20:58:07.864160   66420 main.go:141] libmachine: Provisioning with buildroot...
	I0930 20:58:07.864171   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 20:58:07.864385   66420 buildroot.go:166] provisioning hostname "old-k8s-version-621406"
	I0930 20:58:07.864407   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 20:58:07.864583   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 20:58:07.867390   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:07.867846   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 21:57:59 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 20:58:07.867874   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:07.868024   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 20:58:07.868168   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 20:58:07.868296   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 20:58:07.868433   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 20:58:07.868580   66420 main.go:141] libmachine: Using SSH client type: native
	I0930 20:58:07.868795   66420 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 20:58:07.868809   66420 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-621406 && echo "old-k8s-version-621406" | sudo tee /etc/hostname
	I0930 20:58:07.988806   66420 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-621406
	
	I0930 20:58:07.988855   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 20:58:07.992592   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:07.993009   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 21:57:59 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 20:58:07.993040   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:07.993254   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 20:58:07.993474   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 20:58:07.993731   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 20:58:07.993906   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 20:58:07.994100   66420 main.go:141] libmachine: Using SSH client type: native
	I0930 20:58:07.994324   66420 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 20:58:07.994343   66420 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-621406' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-621406/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-621406' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 20:58:08.110408   66420 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 20:58:08.110485   66420 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 20:58:08.110548   66420 buildroot.go:174] setting up certificates
	I0930 20:58:08.110562   66420 provision.go:84] configureAuth start
	I0930 20:58:08.110576   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 20:58:08.110866   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 20:58:08.114309   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:08.114651   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 21:57:59 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 20:58:08.114682   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:08.114827   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 20:58:08.117611   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:08.118002   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 21:57:59 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 20:58:08.118034   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:08.118188   66420 provision.go:143] copyHostCerts
	I0930 20:58:08.118271   66420 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 20:58:08.118289   66420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 20:58:08.118349   66420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 20:58:08.118462   66420 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 20:58:08.118474   66420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 20:58:08.118502   66420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 20:58:08.118568   66420 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 20:58:08.118578   66420 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 20:58:08.118604   66420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 20:58:08.118662   66420 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-621406 san=[127.0.0.1 192.168.72.159 localhost minikube old-k8s-version-621406]
	I0930 20:58:08.362958   66420 provision.go:177] copyRemoteCerts
	I0930 20:58:08.363025   66420 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 20:58:08.363055   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 20:58:08.365949   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:08.366399   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 21:57:59 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 20:58:08.366424   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:08.366640   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 20:58:08.366924   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 20:58:08.367083   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 20:58:08.367261   66420 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 20:58:08.450822   66420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 20:58:08.476380   66420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0930 20:58:08.500921   66420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 20:58:08.526008   66420 provision.go:87] duration metric: took 415.432816ms to configureAuth
	I0930 20:58:08.526037   66420 buildroot.go:189] setting minikube options for container-runtime
	I0930 20:58:08.526274   66420 config.go:182] Loaded profile config "old-k8s-version-621406": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0930 20:58:08.526360   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 20:58:08.528670   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:08.529079   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 21:57:59 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 20:58:08.529139   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:08.529297   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 20:58:08.529609   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 20:58:08.529817   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 20:58:08.529971   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 20:58:08.530158   66420 main.go:141] libmachine: Using SSH client type: native
	I0930 20:58:08.530334   66420 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 20:58:08.530350   66420 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 20:58:08.776647   66420 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 20:58:08.776677   66420 main.go:141] libmachine: Checking connection to Docker...
	I0930 20:58:08.776695   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetURL
	I0930 20:58:08.778450   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | Using libvirt version 6000000
	I0930 20:58:08.782896   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:08.783432   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 21:57:59 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 20:58:08.783460   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:08.783953   66420 main.go:141] libmachine: Docker is up and running!
	I0930 20:58:08.783982   66420 main.go:141] libmachine: Reticulating splines...
	I0930 20:58:08.783989   66420 client.go:171] duration metric: took 25.421055427s to LocalClient.Create
	I0930 20:58:08.784019   66420 start.go:167] duration metric: took 25.421131745s to libmachine.API.Create "old-k8s-version-621406"
	I0930 20:58:08.784031   66420 start.go:293] postStartSetup for "old-k8s-version-621406" (driver="kvm2")
	I0930 20:58:08.784057   66420 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 20:58:08.784073   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 20:58:08.784417   66420 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 20:58:08.784450   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 20:58:08.787432   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:08.787796   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 21:57:59 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 20:58:08.787827   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:08.788012   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 20:58:08.788188   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 20:58:08.788306   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 20:58:08.788433   66420 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 20:58:08.877054   66420 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 20:58:08.881567   66420 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 20:58:08.881640   66420 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 20:58:08.881717   66420 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 20:58:08.881816   66420 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 20:58:08.881972   66420 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 20:58:08.892916   66420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:58:08.921546   66420 start.go:296] duration metric: took 137.493743ms for postStartSetup
	I0930 20:58:08.921610   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetConfigRaw
	I0930 20:58:08.922337   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 20:58:08.925608   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:08.926042   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 21:57:59 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 20:58:08.926067   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:08.926391   66420 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/config.json ...
	I0930 20:58:08.926650   66420 start.go:128] duration metric: took 25.586089848s to createHost
	I0930 20:58:08.926688   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 20:58:08.929946   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:08.930479   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 21:57:59 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 20:58:08.930509   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:08.930687   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 20:58:08.930876   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 20:58:08.931052   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 20:58:08.931242   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 20:58:08.931451   66420 main.go:141] libmachine: Using SSH client type: native
	I0930 20:58:08.931706   66420 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 20:58:08.931725   66420 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 20:58:09.044526   66420 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727729889.026146014
	
	I0930 20:58:09.044551   66420 fix.go:216] guest clock: 1727729889.026146014
	I0930 20:58:09.044558   66420 fix.go:229] Guest: 2024-09-30 20:58:09.026146014 +0000 UTC Remote: 2024-09-30 20:58:08.92667165 +0000 UTC m=+42.270029807 (delta=99.474364ms)
	I0930 20:58:09.044605   66420 fix.go:200] guest clock delta is within tolerance: 99.474364ms
	I0930 20:58:09.044610   66420 start.go:83] releasing machines lock for "old-k8s-version-621406", held for 25.704229925s
	I0930 20:58:09.044638   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 20:58:09.044936   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 20:58:09.048257   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:09.048731   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 21:57:59 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 20:58:09.048760   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:09.048920   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 20:58:09.049503   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 20:58:09.049678   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 20:58:09.049774   66420 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 20:58:09.049814   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 20:58:09.051741   66420 ssh_runner.go:195] Run: cat /version.json
	I0930 20:58:09.051769   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 20:58:09.053072   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:09.053684   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 21:57:59 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 20:58:09.053710   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:09.053962   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 20:58:09.054148   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 20:58:09.054357   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 20:58:09.054956   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:09.054994   66420 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 20:58:09.055387   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 21:57:59 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 20:58:09.055415   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:09.055826   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 20:58:09.056049   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 20:58:09.056216   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 20:58:09.056430   66420 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 20:58:09.137807   66420 ssh_runner.go:195] Run: systemctl --version
	I0930 20:58:09.174542   66420 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 20:58:09.342868   66420 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 20:58:09.348468   66420 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 20:58:09.348540   66420 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 20:58:09.364030   66420 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 20:58:09.364060   66420 start.go:495] detecting cgroup driver to use...
	I0930 20:58:09.364151   66420 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 20:58:09.387343   66420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 20:58:09.402443   66420 docker.go:217] disabling cri-docker service (if available) ...
	I0930 20:58:09.402512   66420 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 20:58:09.417320   66420 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 20:58:09.432681   66420 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 20:58:09.556310   66420 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 20:58:09.727112   66420 docker.go:233] disabling docker service ...
	I0930 20:58:09.727190   66420 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 20:58:09.743668   66420 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 20:58:09.757903   66420 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 20:58:09.933991   66420 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 20:58:10.089067   66420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 20:58:10.103591   66420 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 20:58:10.122559   66420 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0930 20:58:10.122633   66420 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:58:10.134017   66420 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 20:58:10.134088   66420 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:58:10.145090   66420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:58:10.155743   66420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 20:58:10.167594   66420 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 20:58:10.178201   66420 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 20:58:10.190985   66420 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 20:58:10.191061   66420 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 20:58:10.206959   66420 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 20:58:10.219243   66420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:58:10.359070   66420 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 20:58:10.456866   66420 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 20:58:10.456961   66420 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 20:58:10.463235   66420 start.go:563] Will wait 60s for crictl version
	I0930 20:58:10.463306   66420 ssh_runner.go:195] Run: which crictl
	I0930 20:58:10.468117   66420 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 20:58:10.512999   66420 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 20:58:10.513097   66420 ssh_runner.go:195] Run: crio --version
	I0930 20:58:10.549965   66420 ssh_runner.go:195] Run: crio --version
	I0930 20:58:10.583757   66420 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0930 20:58:10.584938   66420 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 20:58:10.589386   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:10.589854   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 21:57:59 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 20:58:10.589882   66420 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 20:58:10.590138   66420 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0930 20:58:10.595609   66420 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 20:58:10.609649   66420 kubeadm.go:883] updating cluster {Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 20:58:10.609748   66420 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 20:58:10.609787   66420 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 20:58:10.647817   66420 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0930 20:58:10.647895   66420 ssh_runner.go:195] Run: which lz4
	I0930 20:58:10.652513   66420 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 20:58:10.657860   66420 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 20:58:10.657895   66420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0930 20:58:12.170672   66420 crio.go:462] duration metric: took 1.518199346s to copy over tarball
	I0930 20:58:12.170749   66420 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 20:58:15.147494   66420 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.976715365s)
	I0930 20:58:15.147545   66420 crio.go:469] duration metric: took 2.976823998s to extract the tarball
	I0930 20:58:15.147555   66420 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 20:58:15.196288   66420 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 20:58:15.264942   66420 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0930 20:58:15.264968   66420 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0930 20:58:15.265066   66420 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 20:58:15.265097   66420 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 20:58:15.265107   66420 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 20:58:15.265073   66420 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 20:58:15.265178   66420 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0930 20:58:15.265384   66420 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0930 20:58:15.265393   66420 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 20:58:15.265513   66420 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0930 20:58:15.268274   66420 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 20:58:15.268319   66420 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 20:58:15.268333   66420 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 20:58:15.268351   66420 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0930 20:58:15.268422   66420 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0930 20:58:15.268290   66420 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 20:58:15.268509   66420 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 20:58:15.268622   66420 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0930 20:58:15.483219   66420 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0930 20:58:15.539628   66420 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0930 20:58:15.539681   66420 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0930 20:58:15.539726   66420 ssh_runner.go:195] Run: which crictl
	I0930 20:58:15.544143   66420 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 20:58:15.586050   66420 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0930 20:58:15.586419   66420 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 20:58:15.610405   66420 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0930 20:58:15.614417   66420 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0930 20:58:15.625019   66420 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0930 20:58:15.642591   66420 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0930 20:58:15.646728   66420 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0930 20:58:15.646770   66420 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0930 20:58:15.646816   66420 ssh_runner.go:195] Run: which crictl
	I0930 20:58:15.646838   66420 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 20:58:15.648301   66420 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 20:58:15.810259   66420 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0930 20:58:15.810309   66420 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 20:58:15.810343   66420 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0930 20:58:15.810358   66420 ssh_runner.go:195] Run: which crictl
	I0930 20:58:15.810377   66420 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0930 20:58:15.810417   66420 ssh_runner.go:195] Run: which crictl
	I0930 20:58:15.810416   66420 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0930 20:58:15.810464   66420 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 20:58:15.810486   66420 ssh_runner.go:195] Run: which crictl
	I0930 20:58:15.816926   66420 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 20:58:15.816961   66420 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0930 20:58:15.816996   66420 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 20:58:15.817038   66420 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0930 20:58:15.817044   66420 ssh_runner.go:195] Run: which crictl
	I0930 20:58:15.827740   66420 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0930 20:58:15.827786   66420 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 20:58:15.827833   66420 ssh_runner.go:195] Run: which crictl
	I0930 20:58:15.827876   66420 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 20:58:15.827922   66420 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 20:58:15.827965   66420 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 20:58:15.941200   66420 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 20:58:15.941303   66420 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 20:58:15.955856   66420 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 20:58:15.955981   66420 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 20:58:15.956082   66420 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 20:58:15.956154   66420 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 20:58:16.073116   66420 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 20:58:16.073290   66420 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 20:58:16.073353   66420 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 20:58:16.110479   66420 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 20:58:16.110509   66420 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 20:58:16.110522   66420 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 20:58:16.185104   66420 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0930 20:58:16.196557   66420 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 20:58:16.209311   66420 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0930 20:58:16.267226   66420 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0930 20:58:16.267556   66420 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 20:58:16.267697   66420 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0930 20:58:16.270547   66420 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0930 20:58:16.303516   66420 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0930 20:58:16.570138   66420 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 20:58:16.716992   66420 cache_images.go:92] duration metric: took 1.452003888s to LoadCachedImages
	W0930 20:58:16.717075   66420 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0930 20:58:16.717091   66420 kubeadm.go:934] updating node { 192.168.72.159 8443 v1.20.0 crio true true} ...
	I0930 20:58:16.717209   66420 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-621406 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 20:58:16.717268   66420 ssh_runner.go:195] Run: crio config
	I0930 20:58:16.767804   66420 cni.go:84] Creating CNI manager for ""
	I0930 20:58:16.767834   66420 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 20:58:16.767846   66420 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 20:58:16.767880   66420 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.159 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-621406 NodeName:old-k8s-version-621406 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0930 20:58:16.768053   66420 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-621406"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 20:58:16.768115   66420 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0930 20:58:16.779597   66420 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 20:58:16.779682   66420 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 20:58:16.789828   66420 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0930 20:58:16.807616   66420 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 20:58:16.826825   66420 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0930 20:58:16.847884   66420 ssh_runner.go:195] Run: grep 192.168.72.159	control-plane.minikube.internal$ /etc/hosts
	I0930 20:58:16.853446   66420 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 20:58:16.870014   66420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 20:58:16.989266   66420 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 20:58:17.010475   66420 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406 for IP: 192.168.72.159
	I0930 20:58:17.010504   66420 certs.go:194] generating shared ca certs ...
	I0930 20:58:17.010589   66420 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:58:17.010775   66420 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 20:58:17.010830   66420 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 20:58:17.010843   66420 certs.go:256] generating profile certs ...
	I0930 20:58:17.010917   66420 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/client.key
	I0930 20:58:17.010942   66420 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/client.crt with IP's: []
	I0930 20:58:17.143694   66420 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/client.crt ...
	I0930 20:58:17.143724   66420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/client.crt: {Name:mkc77ea5bd4756b34b73dd68a8dc9e78588c2836 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:58:17.143902   66420 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/client.key ...
	I0930 20:58:17.143915   66420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/client.key: {Name:mkfec68bb1c1808b87ca73e53bb605480d4a7b3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:58:17.208122   66420 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.key.f3dc5056
	I0930 20:58:17.208156   66420 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.crt.f3dc5056 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.159]
	I0930 20:58:17.554762   66420 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.crt.f3dc5056 ...
	I0930 20:58:17.554795   66420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.crt.f3dc5056: {Name:mk91ff095fa06cc1eceb00f6086d006f4b87cc67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:58:17.593990   66420 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.key.f3dc5056 ...
	I0930 20:58:17.594053   66420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.key.f3dc5056: {Name:mk654d2266336a2fb513d607357eda13f25c6339 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:58:17.594229   66420 certs.go:381] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.crt.f3dc5056 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.crt
	I0930 20:58:17.594343   66420 certs.go:385] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.key.f3dc5056 -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.key
	I0930 20:58:17.594431   66420 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.key
	I0930 20:58:17.594451   66420 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.crt with IP's: []
	I0930 20:58:17.730914   66420 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.crt ...
	I0930 20:58:17.730949   66420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.crt: {Name:mk034261532c8a8a20caa66e27220dba824fb7f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:58:17.731130   66420 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.key ...
	I0930 20:58:17.731150   66420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.key: {Name:mk17a47301eb1fef57e73d92860796b82e43170e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 20:58:17.731408   66420 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 20:58:17.731458   66420 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 20:58:17.731474   66420 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 20:58:17.731510   66420 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 20:58:17.731576   66420 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 20:58:17.731615   66420 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 20:58:17.731673   66420 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 20:58:17.732447   66420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 20:58:17.759742   66420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 20:58:17.785203   66420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 20:58:17.811787   66420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 20:58:17.866589   66420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0930 20:58:17.905508   66420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 20:58:17.946039   66420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 20:58:17.976099   66420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 20:58:18.011391   66420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 20:58:18.038116   66420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 20:58:18.064205   66420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 20:58:18.091901   66420 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 20:58:18.111986   66420 ssh_runner.go:195] Run: openssl version
	I0930 20:58:18.118477   66420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 20:58:18.131485   66420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:58:18.136697   66420 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:58:18.136778   66420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 20:58:18.144680   66420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 20:58:18.156615   66420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 20:58:18.168736   66420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 20:58:18.173666   66420 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 20:58:18.173735   66420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 20:58:18.180910   66420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 20:58:18.192795   66420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 20:58:18.204133   66420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 20:58:18.208727   66420 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 20:58:18.208786   66420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 20:58:18.215797   66420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 20:58:18.227597   66420 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 20:58:18.232272   66420 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 20:58:18.232324   66420 kubeadm.go:392] StartCluster: {Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 20:58:18.232398   66420 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 20:58:18.232449   66420 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 20:58:18.277967   66420 cri.go:89] found id: ""
	I0930 20:58:18.278040   66420 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 20:58:18.291517   66420 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 20:58:18.305129   66420 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 20:58:18.323725   66420 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 20:58:18.323743   66420 kubeadm.go:157] found existing configuration files:
	
	I0930 20:58:18.323785   66420 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 20:58:18.338911   66420 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 20:58:18.338975   66420 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 20:58:18.350271   66420 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 20:58:18.362134   66420 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 20:58:18.362191   66420 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 20:58:18.376001   66420 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 20:58:18.386970   66420 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 20:58:18.387032   66420 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 20:58:18.398345   66420 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 20:58:18.409800   66420 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 20:58:18.409862   66420 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 20:58:18.423628   66420 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 20:58:18.758014   66420 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 21:00:17.144202   66420 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0930 21:00:17.144292   66420 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0930 21:00:17.145627   66420 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0930 21:00:17.145681   66420 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 21:00:17.145756   66420 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 21:00:17.145853   66420 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 21:00:17.145986   66420 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0930 21:00:17.146107   66420 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 21:00:17.147914   66420 out.go:235]   - Generating certificates and keys ...
	I0930 21:00:17.147992   66420 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 21:00:17.148045   66420 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 21:00:17.148121   66420 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0930 21:00:17.148203   66420 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0930 21:00:17.148301   66420 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0930 21:00:17.148390   66420 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0930 21:00:17.148447   66420 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0930 21:00:17.148568   66420 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-621406] and IPs [192.168.72.159 127.0.0.1 ::1]
	I0930 21:00:17.148620   66420 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0930 21:00:17.148731   66420 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-621406] and IPs [192.168.72.159 127.0.0.1 ::1]
	I0930 21:00:17.148791   66420 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0930 21:00:17.148849   66420 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0930 21:00:17.148887   66420 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0930 21:00:17.148945   66420 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 21:00:17.148996   66420 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 21:00:17.149039   66420 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 21:00:17.149088   66420 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 21:00:17.149148   66420 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 21:00:17.149228   66420 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 21:00:17.149315   66420 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 21:00:17.149347   66420 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 21:00:17.149407   66420 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 21:00:17.150856   66420 out.go:235]   - Booting up control plane ...
	I0930 21:00:17.150957   66420 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 21:00:17.151019   66420 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 21:00:17.151079   66420 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 21:00:17.151154   66420 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 21:00:17.151312   66420 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0930 21:00:17.151377   66420 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0930 21:00:17.151445   66420 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:00:17.151646   66420 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:00:17.151709   66420 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:00:17.151869   66420 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:00:17.151928   66420 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:00:17.152112   66420 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:00:17.152176   66420 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:00:17.152348   66420 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:00:17.152414   66420 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:00:17.152577   66420 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:00:17.152584   66420 kubeadm.go:310] 
	I0930 21:00:17.152630   66420 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0930 21:00:17.152664   66420 kubeadm.go:310] 		timed out waiting for the condition
	I0930 21:00:17.152670   66420 kubeadm.go:310] 
	I0930 21:00:17.152699   66420 kubeadm.go:310] 	This error is likely caused by:
	I0930 21:00:17.152728   66420 kubeadm.go:310] 		- The kubelet is not running
	I0930 21:00:17.152839   66420 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0930 21:00:17.152858   66420 kubeadm.go:310] 
	I0930 21:00:17.152946   66420 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0930 21:00:17.152977   66420 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0930 21:00:17.153004   66420 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0930 21:00:17.153010   66420 kubeadm.go:310] 
	I0930 21:00:17.153130   66420 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0930 21:00:17.153218   66420 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0930 21:00:17.153228   66420 kubeadm.go:310] 
	I0930 21:00:17.153380   66420 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0930 21:00:17.153505   66420 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0930 21:00:17.153610   66420 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0930 21:00:17.153714   66420 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0930 21:00:17.153808   66420 kubeadm.go:310] 
	W0930 21:00:17.153896   66420 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-621406] and IPs [192.168.72.159 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-621406] and IPs [192.168.72.159 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-621406] and IPs [192.168.72.159 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-621406] and IPs [192.168.72.159 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
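	[editor's note] The repeated [kubelet-check] messages above are kubeadm's wait-control-plane phase polling the kubelet's healthz endpoint on 127.0.0.1:10248 until the 4m0s deadline expires. The following is a minimal Go sketch of that probe loop, for illustration only (not minikube or kubeadm code); the URL and 4-minute deadline come from the log, while the 5-second retry interval is an assumption:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the kubeadm output above.
		const healthz = "http://localhost:10248/healthz"
		// wait-control-plane allows "up to 4m0s" per the log.
		deadline := time.Now().Add(4 * time.Minute)

		for time.Now().Before(deadline) {
			resp, err := http.Get(healthz)
			if err != nil {
				// Corresponds to the "connection refused" errors in the log:
				// the kubelet is not listening on 10248 at all.
				fmt.Printf("kubelet not healthy yet: %v\n", err)
				time.Sleep(5 * time.Second) // retry interval is an assumption
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
			fmt.Printf("unexpected status: %s\n", resp.Status)
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for the kubelet healthz endpoint")
	}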
	
	I0930 21:00:17.153944   66420 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0930 21:00:18.542236   66420 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.388264556s)
	I0930 21:00:18.542328   66420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:00:18.556045   66420 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:00:18.565496   66420 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:00:18.565518   66420 kubeadm.go:157] found existing configuration files:
	
	I0930 21:00:18.565561   66420 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:00:18.575108   66420 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:00:18.575184   66420 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:00:18.584775   66420 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:00:18.594308   66420 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:00:18.594362   66420 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:00:18.603747   66420 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:00:18.613746   66420 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:00:18.613801   66420 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:00:18.623722   66420 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:00:18.632727   66420 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:00:18.632786   66420 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:00:18.642150   66420 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 21:00:18.843099   66420 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 21:02:15.333540   66420 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0930 21:02:15.333646   66420 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0930 21:02:15.335199   66420 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0930 21:02:15.335272   66420 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 21:02:15.335390   66420 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 21:02:15.335475   66420 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 21:02:15.335608   66420 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0930 21:02:15.335703   66420 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 21:02:15.337729   66420 out.go:235]   - Generating certificates and keys ...
	I0930 21:02:15.337844   66420 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 21:02:15.337938   66420 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 21:02:15.338047   66420 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 21:02:15.338138   66420 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 21:02:15.338233   66420 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 21:02:15.338316   66420 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 21:02:15.338382   66420 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 21:02:15.338464   66420 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 21:02:15.338570   66420 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 21:02:15.338674   66420 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 21:02:15.338731   66420 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 21:02:15.338808   66420 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 21:02:15.338888   66420 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 21:02:15.338959   66420 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 21:02:15.339018   66420 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 21:02:15.339070   66420 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 21:02:15.339164   66420 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 21:02:15.339263   66420 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 21:02:15.339328   66420 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 21:02:15.339420   66420 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 21:02:15.340749   66420 out.go:235]   - Booting up control plane ...
	I0930 21:02:15.340845   66420 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 21:02:15.340922   66420 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 21:02:15.340982   66420 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 21:02:15.341054   66420 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 21:02:15.341189   66420 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0930 21:02:15.341234   66420 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0930 21:02:15.341292   66420 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:02:15.341497   66420 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:02:15.341557   66420 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:02:15.341717   66420 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:02:15.341788   66420 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:02:15.341992   66420 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:02:15.342082   66420 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:02:15.342266   66420 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:02:15.342363   66420 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:02:15.342551   66420 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:02:15.342562   66420 kubeadm.go:310] 
	I0930 21:02:15.342606   66420 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0930 21:02:15.342645   66420 kubeadm.go:310] 		timed out waiting for the condition
	I0930 21:02:15.342651   66420 kubeadm.go:310] 
	I0930 21:02:15.342679   66420 kubeadm.go:310] 	This error is likely caused by:
	I0930 21:02:15.342719   66420 kubeadm.go:310] 		- The kubelet is not running
	I0930 21:02:15.342823   66420 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0930 21:02:15.342833   66420 kubeadm.go:310] 
	I0930 21:02:15.342920   66420 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0930 21:02:15.342958   66420 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0930 21:02:15.342984   66420 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0930 21:02:15.342990   66420 kubeadm.go:310] 
	I0930 21:02:15.343084   66420 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0930 21:02:15.343157   66420 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0930 21:02:15.343163   66420 kubeadm.go:310] 
	I0930 21:02:15.343271   66420 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0930 21:02:15.343390   66420 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0930 21:02:15.343494   66420 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0930 21:02:15.343567   66420 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0930 21:02:15.343629   66420 kubeadm.go:394] duration metric: took 3m57.11130896s to StartCluster
	I0930 21:02:15.343642   66420 kubeadm.go:310] 
	I0930 21:02:15.343695   66420 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:02:15.343748   66420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:02:15.384009   66420 cri.go:89] found id: ""
	I0930 21:02:15.384136   66420 logs.go:276] 0 containers: []
	W0930 21:02:15.384159   66420 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:02:15.384168   66420 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:02:15.384243   66420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:02:15.421729   66420 cri.go:89] found id: ""
	I0930 21:02:15.421765   66420 logs.go:276] 0 containers: []
	W0930 21:02:15.421777   66420 logs.go:278] No container was found matching "etcd"
	I0930 21:02:15.421785   66420 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:02:15.421848   66420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:02:15.455697   66420 cri.go:89] found id: ""
	I0930 21:02:15.455726   66420 logs.go:276] 0 containers: []
	W0930 21:02:15.455734   66420 logs.go:278] No container was found matching "coredns"
	I0930 21:02:15.455740   66420 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:02:15.455791   66420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:02:15.489166   66420 cri.go:89] found id: ""
	I0930 21:02:15.489193   66420 logs.go:276] 0 containers: []
	W0930 21:02:15.489201   66420 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:02:15.489207   66420 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:02:15.489254   66420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:02:15.527551   66420 cri.go:89] found id: ""
	I0930 21:02:15.527582   66420 logs.go:276] 0 containers: []
	W0930 21:02:15.527593   66420 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:02:15.527601   66420 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:02:15.527660   66420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:02:15.564241   66420 cri.go:89] found id: ""
	I0930 21:02:15.564267   66420 logs.go:276] 0 containers: []
	W0930 21:02:15.564276   66420 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:02:15.564282   66420 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:02:15.564331   66420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:02:15.599361   66420 cri.go:89] found id: ""
	I0930 21:02:15.599390   66420 logs.go:276] 0 containers: []
	W0930 21:02:15.599403   66420 logs.go:278] No container was found matching "kindnet"
	I0930 21:02:15.599421   66420 logs.go:123] Gathering logs for kubelet ...
	I0930 21:02:15.599435   66420 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:02:15.650918   66420 logs.go:123] Gathering logs for dmesg ...
	I0930 21:02:15.650959   66420 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:02:15.664516   66420 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:02:15.664544   66420 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:02:15.791127   66420 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:02:15.791146   66420 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:02:15.791159   66420 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:02:15.890774   66420 logs.go:123] Gathering logs for container status ...
	I0930 21:02:15.890813   66420 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0930 21:02:15.958678   66420 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0930 21:02:15.958747   66420 out.go:270] * 
	* 
	W0930 21:02:15.958813   66420 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0930 21:02:15.958832   66420 out.go:270] * 
	* 
	W0930 21:02:15.959936   66420 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 21:02:15.963494   66420 out.go:201] 
	W0930 21:02:15.964828   66420 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0930 21:02:15.964883   66420 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0930 21:02:15.964906   66420 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0930 21:02:15.966410   66420 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-621406 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-621406 -n old-k8s-version-621406
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-621406 -n old-k8s-version-621406: exit status 6 (218.832626ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0930 21:02:16.235693   72953 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-621406" does not appear in /home/jenkins/minikube-integration/19736-7672/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-621406" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (289.60s)
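A minimal sketch of the troubleshooting steps that kubeadm and minikube themselves suggest in the output above, collected in one place; the profile name old-k8s-version-621406 comes from this run, and the commands are the ones quoted in the log (the first three run inside the VM, e.g. via `minikube ssh -p old-k8s-version-621406`, the last two on the host):

	# Inspect the kubelet, which never answered http://localhost:10248/healthz in this run
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100

	# List any control-plane containers CRI-O started (the test found none for kube-apiserver, etcd, etc.)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# On the host: retry with the kubelet cgroup driver pinned to systemd, as the log suggests,
	# and repoint kubectl if the post-mortem warns about a stale context
	minikube start -p old-k8s-version-621406 --extra-config=kubelet.cgroup-driver=systemd
	minikube update-context -p old-k8s-version-621406
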

x
+
TestStartStop/group/embed-certs/serial/Stop (139.02s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-256103 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-256103 --alsologtostderr -v=3: exit status 82 (2m0.54681748s)

-- stdout --
	* Stopping node "embed-certs-256103"  ...
	
	

-- /stdout --
** stderr ** 
	I0930 21:00:03.088743   72195 out.go:345] Setting OutFile to fd 1 ...
	I0930 21:00:03.088904   72195 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:00:03.088914   72195 out.go:358] Setting ErrFile to fd 2...
	I0930 21:00:03.088919   72195 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:00:03.089194   72195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 21:00:03.089448   72195 out.go:352] Setting JSON to false
	I0930 21:00:03.089554   72195 mustload.go:65] Loading cluster: embed-certs-256103
	I0930 21:00:03.090046   72195 config.go:182] Loaded profile config "embed-certs-256103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:00:03.090117   72195 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/config.json ...
	I0930 21:00:03.090293   72195 mustload.go:65] Loading cluster: embed-certs-256103
	I0930 21:00:03.090407   72195 config.go:182] Loaded profile config "embed-certs-256103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:00:03.090443   72195 stop.go:39] StopHost: embed-certs-256103
	I0930 21:00:03.090807   72195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:00:03.090849   72195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:00:03.106461   72195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36047
	I0930 21:00:03.107114   72195 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:00:03.107808   72195 main.go:141] libmachine: Using API Version  1
	I0930 21:00:03.107833   72195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:00:03.108285   72195 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:00:03.111544   72195 out.go:177] * Stopping node "embed-certs-256103"  ...
	I0930 21:00:03.112950   72195 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0930 21:00:03.112986   72195 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:00:03.113276   72195 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0930 21:00:03.113309   72195 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:00:03.116663   72195 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:00:03.117039   72195 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 21:59:04 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:00:03.117075   72195 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:00:03.117267   72195 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:00:03.117475   72195 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:00:03.117653   72195 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:00:03.117806   72195 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:00:03.243954   72195 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0930 21:00:03.308512   72195 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0930 21:00:03.364450   72195 main.go:141] libmachine: Stopping "embed-certs-256103"...
	I0930 21:00:03.364493   72195 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:00:03.366350   72195 main.go:141] libmachine: (embed-certs-256103) Calling .Stop
	I0930 21:00:03.370828   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 0/120
	I0930 21:00:04.372320   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 1/120
	I0930 21:00:05.374104   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 2/120
	I0930 21:00:06.375750   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 3/120
	I0930 21:00:07.377034   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 4/120
	I0930 21:00:08.379241   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 5/120
	I0930 21:00:09.381189   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 6/120
	I0930 21:00:10.382648   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 7/120
	I0930 21:00:11.385338   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 8/120
	I0930 21:00:12.386997   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 9/120
	I0930 21:00:13.388544   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 10/120
	I0930 21:00:14.390154   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 11/120
	I0930 21:00:15.391883   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 12/120
	I0930 21:00:16.393369   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 13/120
	I0930 21:00:17.394951   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 14/120
	I0930 21:00:18.397195   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 15/120
	I0930 21:00:19.398638   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 16/120
	I0930 21:00:20.400174   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 17/120
	I0930 21:00:21.401509   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 18/120
	I0930 21:00:22.403112   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 19/120
	I0930 21:00:23.404814   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 20/120
	I0930 21:00:24.406664   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 21/120
	I0930 21:00:25.408262   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 22/120
	I0930 21:00:26.410355   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 23/120
	I0930 21:00:27.412160   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 24/120
	I0930 21:00:28.414386   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 25/120
	I0930 21:00:29.416082   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 26/120
	I0930 21:00:30.417604   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 27/120
	I0930 21:00:31.418949   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 28/120
	I0930 21:00:32.420580   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 29/120
	I0930 21:00:33.423231   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 30/120
	I0930 21:00:34.424701   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 31/120
	I0930 21:00:35.426262   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 32/120
	I0930 21:00:36.427783   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 33/120
	I0930 21:00:37.429320   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 34/120
	I0930 21:00:38.431601   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 35/120
	I0930 21:00:39.433286   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 36/120
	I0930 21:00:40.435276   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 37/120
	I0930 21:00:41.436960   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 38/120
	I0930 21:00:42.438761   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 39/120
	I0930 21:00:43.441252   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 40/120
	I0930 21:00:44.442747   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 41/120
	I0930 21:00:45.444532   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 42/120
	I0930 21:00:46.446152   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 43/120
	I0930 21:00:47.447571   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 44/120
	I0930 21:00:48.449870   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 45/120
	I0930 21:00:49.451275   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 46/120
	I0930 21:00:50.453151   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 47/120
	I0930 21:00:51.455068   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 48/120
	I0930 21:00:52.456451   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 49/120
	I0930 21:00:53.458332   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 50/120
	I0930 21:00:54.459808   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 51/120
	I0930 21:00:55.461472   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 52/120
	I0930 21:00:56.462876   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 53/120
	I0930 21:00:57.464334   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 54/120
	I0930 21:00:58.466169   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 55/120
	I0930 21:00:59.468031   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 56/120
	I0930 21:01:00.469960   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 57/120
	I0930 21:01:01.472214   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 58/120
	I0930 21:01:02.473873   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 59/120
	I0930 21:01:03.476308   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 60/120
	I0930 21:01:04.477994   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 61/120
	I0930 21:01:05.479463   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 62/120
	I0930 21:01:06.480911   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 63/120
	I0930 21:01:07.482637   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 64/120
	I0930 21:01:08.484668   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 65/120
	I0930 21:01:09.486294   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 66/120
	I0930 21:01:10.487824   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 67/120
	I0930 21:01:11.489491   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 68/120
	I0930 21:01:12.491065   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 69/120
	I0930 21:01:13.492760   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 70/120
	I0930 21:01:14.494137   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 71/120
	I0930 21:01:15.496173   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 72/120
	I0930 21:01:16.497586   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 73/120
	I0930 21:01:17.499446   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 74/120
	I0930 21:01:18.501676   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 75/120
	I0930 21:01:19.503576   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 76/120
	I0930 21:01:20.505001   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 77/120
	I0930 21:01:21.506734   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 78/120
	I0930 21:01:22.508142   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 79/120
	I0930 21:01:23.509543   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 80/120
	I0930 21:01:24.511113   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 81/120
	I0930 21:01:25.512638   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 82/120
	I0930 21:01:26.514111   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 83/120
	I0930 21:01:27.515590   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 84/120
	I0930 21:01:28.517619   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 85/120
	I0930 21:01:29.519156   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 86/120
	I0930 21:01:30.520755   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 87/120
	I0930 21:01:31.522344   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 88/120
	I0930 21:01:32.524005   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 89/120
	I0930 21:01:33.526315   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 90/120
	I0930 21:01:34.528094   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 91/120
	I0930 21:01:35.529600   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 92/120
	I0930 21:01:36.531415   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 93/120
	I0930 21:01:37.533122   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 94/120
	I0930 21:01:38.535348   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 95/120
	I0930 21:01:39.536835   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 96/120
	I0930 21:01:40.538269   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 97/120
	I0930 21:01:41.540208   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 98/120
	I0930 21:01:42.542344   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 99/120
	I0930 21:01:43.545027   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 100/120
	I0930 21:01:44.546531   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 101/120
	I0930 21:01:45.548054   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 102/120
	I0930 21:01:46.549707   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 103/120
	I0930 21:01:47.551270   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 104/120
	I0930 21:01:48.553813   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 105/120
	I0930 21:01:49.555424   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 106/120
	I0930 21:01:50.557017   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 107/120
	I0930 21:01:51.558651   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 108/120
	I0930 21:01:52.560484   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 109/120
	I0930 21:01:53.562201   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 110/120
	I0930 21:01:54.564069   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 111/120
	I0930 21:01:55.565672   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 112/120
	I0930 21:01:56.567135   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 113/120
	I0930 21:01:57.568711   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 114/120
	I0930 21:01:58.571028   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 115/120
	I0930 21:01:59.572601   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 116/120
	I0930 21:02:00.574032   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 117/120
	I0930 21:02:01.576090   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 118/120
	I0930 21:02:02.578499   72195 main.go:141] libmachine: (embed-certs-256103) Waiting for machine to stop 119/120
	I0930 21:02:03.579497   72195 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0930 21:02:03.579594   72195 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0930 21:02:03.581668   72195 out.go:201] 
	W0930 21:02:03.583299   72195 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0930 21:02:03.583323   72195 out.go:270] * 
	* 
	W0930 21:02:03.586072   72195 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 21:02:03.587650   72195 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-256103 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-256103 -n embed-certs-256103
E0930 21:02:06.012483   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/custom-flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-256103 -n embed-certs-256103: exit status 3 (18.470676407s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 21:02:22.059950   72856 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.90:22: connect: no route to host
	E0930 21:02:22.059976   72856 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.90:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-256103" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.02s)
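The failure pattern above repeats for every Stop test in this group: minikube first backs up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup over SSH, asks the kvm2 driver to stop the domain, then polls the machine state roughly once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") before giving up with GUEST_STOP_TIMEOUT and exit status 82. A minimal Go sketch of that polling loop, assuming a hypothetical machineStopped callback in place of the real driver query (this is an illustration, not the actual libmachine code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// errStopTimeout mirrors the `stop err: unable to stop vm, current state "Running"`
// message reported at the end of the log above.
var errStopTimeout = errors.New(`unable to stop vm, current state "Running"`)

// waitForStop polls machineStopped once per second, up to maxAttempts times,
// and returns errStopTimeout if the guest never reports a stopped state.
// machineStopped is a hypothetical stand-in for the driver's state query.
func waitForStop(machineStopped func() bool, maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		if machineStopped() {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errStopTimeout
}

func main() {
	// Simulate a guest that never stops, as in this run, but with a short
	// attempt budget so the example finishes quickly.
	if err := waitForStop(func() bool { return false }, 5); err != nil {
		fmt.Println("stop err:", err)
	}
}

With 120 attempts at one second each, the wait alone consumes about two minutes, which is why the stop command itself exits just past the two-minute mark in each of these cases.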

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-997816 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-997816 --alsologtostderr -v=3: exit status 82 (2m0.556080701s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-997816"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 21:00:13.474564   72340 out.go:345] Setting OutFile to fd 1 ...
	I0930 21:00:13.474830   72340 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:00:13.474842   72340 out.go:358] Setting ErrFile to fd 2...
	I0930 21:00:13.474849   72340 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:00:13.475066   72340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 21:00:13.475352   72340 out.go:352] Setting JSON to false
	I0930 21:00:13.475448   72340 mustload.go:65] Loading cluster: no-preload-997816
	I0930 21:00:13.475839   72340 config.go:182] Loaded profile config "no-preload-997816": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:00:13.475928   72340 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/config.json ...
	I0930 21:00:13.476112   72340 mustload.go:65] Loading cluster: no-preload-997816
	I0930 21:00:13.476251   72340 config.go:182] Loaded profile config "no-preload-997816": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:00:13.476291   72340 stop.go:39] StopHost: no-preload-997816
	I0930 21:00:13.476686   72340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:00:13.476735   72340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:00:13.492480   72340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34689
	I0930 21:00:13.493032   72340 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:00:13.493615   72340 main.go:141] libmachine: Using API Version  1
	I0930 21:00:13.493647   72340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:00:13.494068   72340 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:00:13.496538   72340 out.go:177] * Stopping node "no-preload-997816"  ...
	I0930 21:00:13.498095   72340 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0930 21:00:13.498129   72340 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:00:13.498371   72340 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0930 21:00:13.498404   72340 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:00:13.501825   72340 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:00:13.502239   72340 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 21:58:38 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:00:13.502277   72340 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:00:13.502419   72340 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:00:13.502589   72340 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:00:13.502738   72340 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:00:13.502908   72340 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:00:13.628239   72340 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0930 21:00:13.694226   72340 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0930 21:00:13.763092   72340 main.go:141] libmachine: Stopping "no-preload-997816"...
	I0930 21:00:13.763129   72340 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:00:13.764908   72340 main.go:141] libmachine: (no-preload-997816) Calling .Stop
	I0930 21:00:13.768990   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 0/120
	I0930 21:00:14.770209   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 1/120
	I0930 21:00:15.772017   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 2/120
	I0930 21:00:16.773463   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 3/120
	I0930 21:00:17.774924   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 4/120
	I0930 21:00:18.777118   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 5/120
	I0930 21:00:19.778843   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 6/120
	I0930 21:00:20.780898   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 7/120
	I0930 21:00:21.782368   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 8/120
	I0930 21:00:22.784015   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 9/120
	I0930 21:00:23.786177   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 10/120
	I0930 21:00:24.787650   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 11/120
	I0930 21:00:25.789196   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 12/120
	I0930 21:00:26.791436   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 13/120
	I0930 21:00:27.792914   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 14/120
	I0930 21:00:28.795127   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 15/120
	I0930 21:00:29.796779   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 16/120
	I0930 21:00:30.798352   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 17/120
	I0930 21:00:31.799889   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 18/120
	I0930 21:00:32.801510   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 19/120
	I0930 21:00:33.804120   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 20/120
	I0930 21:00:34.805718   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 21/120
	I0930 21:00:35.807498   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 22/120
	I0930 21:00:36.809205   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 23/120
	I0930 21:00:37.810572   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 24/120
	I0930 21:00:38.812629   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 25/120
	I0930 21:00:39.814276   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 26/120
	I0930 21:00:40.815915   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 27/120
	I0930 21:00:41.817192   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 28/120
	I0930 21:00:42.818713   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 29/120
	I0930 21:00:43.821079   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 30/120
	I0930 21:00:44.822717   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 31/120
	I0930 21:00:45.824147   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 32/120
	I0930 21:00:46.826094   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 33/120
	I0930 21:00:47.827649   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 34/120
	I0930 21:00:48.829721   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 35/120
	I0930 21:00:49.831281   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 36/120
	I0930 21:00:50.833496   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 37/120
	I0930 21:00:51.834874   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 38/120
	I0930 21:00:52.836336   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 39/120
	I0930 21:00:53.837791   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 40/120
	I0930 21:00:54.839442   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 41/120
	I0930 21:00:55.840979   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 42/120
	I0930 21:00:56.842368   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 43/120
	I0930 21:00:57.843797   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 44/120
	I0930 21:00:58.845964   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 45/120
	I0930 21:00:59.847436   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 46/120
	I0930 21:01:00.849186   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 47/120
	I0930 21:01:01.850837   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 48/120
	I0930 21:01:02.852305   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 49/120
	I0930 21:01:03.853795   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 50/120
	I0930 21:01:04.855401   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 51/120
	I0930 21:01:05.857044   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 52/120
	I0930 21:01:06.858454   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 53/120
	I0930 21:01:07.859909   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 54/120
	I0930 21:01:08.861963   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 55/120
	I0930 21:01:09.863691   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 56/120
	I0930 21:01:10.865375   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 57/120
	I0930 21:01:11.867470   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 58/120
	I0930 21:01:12.869081   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 59/120
	I0930 21:01:13.871523   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 60/120
	I0930 21:01:14.873169   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 61/120
	I0930 21:01:15.874463   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 62/120
	I0930 21:01:16.876193   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 63/120
	I0930 21:01:17.878501   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 64/120
	I0930 21:01:18.880614   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 65/120
	I0930 21:01:19.882047   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 66/120
	I0930 21:01:20.883681   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 67/120
	I0930 21:01:21.886084   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 68/120
	I0930 21:01:22.887680   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 69/120
	I0930 21:01:23.890056   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 70/120
	I0930 21:01:24.891514   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 71/120
	I0930 21:01:25.892960   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 72/120
	I0930 21:01:26.894629   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 73/120
	I0930 21:01:27.896126   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 74/120
	I0930 21:01:28.898109   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 75/120
	I0930 21:01:29.899799   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 76/120
	I0930 21:01:30.902120   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 77/120
	I0930 21:01:31.903707   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 78/120
	I0930 21:01:32.905070   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 79/120
	I0930 21:01:33.907554   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 80/120
	I0930 21:01:34.909055   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 81/120
	I0930 21:01:35.910730   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 82/120
	I0930 21:01:36.912179   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 83/120
	I0930 21:01:37.914023   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 84/120
	I0930 21:01:38.916290   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 85/120
	I0930 21:01:39.917873   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 86/120
	I0930 21:01:40.919703   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 87/120
	I0930 21:01:41.921067   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 88/120
	I0930 21:01:42.922486   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 89/120
	I0930 21:01:43.924712   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 90/120
	I0930 21:01:44.926569   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 91/120
	I0930 21:01:45.928213   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 92/120
	I0930 21:01:46.930418   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 93/120
	I0930 21:01:47.931822   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 94/120
	I0930 21:01:48.934195   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 95/120
	I0930 21:01:49.935580   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 96/120
	I0930 21:01:50.937059   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 97/120
	I0930 21:01:51.938377   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 98/120
	I0930 21:01:52.940175   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 99/120
	I0930 21:01:53.942470   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 100/120
	I0930 21:01:54.943976   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 101/120
	I0930 21:01:55.946041   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 102/120
	I0930 21:01:56.947663   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 103/120
	I0930 21:01:57.948833   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 104/120
	I0930 21:01:58.950827   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 105/120
	I0930 21:01:59.952254   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 106/120
	I0930 21:02:00.955006   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 107/120
	I0930 21:02:01.956741   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 108/120
	I0930 21:02:02.958862   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 109/120
	I0930 21:02:03.960967   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 110/120
	I0930 21:02:04.962612   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 111/120
	I0930 21:02:05.964109   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 112/120
	I0930 21:02:06.966181   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 113/120
	I0930 21:02:07.967769   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 114/120
	I0930 21:02:08.969886   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 115/120
	I0930 21:02:09.971346   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 116/120
	I0930 21:02:10.972775   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 117/120
	I0930 21:02:11.974351   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 118/120
	I0930 21:02:12.975769   72340 main.go:141] libmachine: (no-preload-997816) Waiting for machine to stop 119/120
	I0930 21:02:13.977245   72340 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0930 21:02:13.977330   72340 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0930 21:02:13.979399   72340 out.go:201] 
	W0930 21:02:13.981320   72340 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0930 21:02:13.981339   72340 out.go:270] * 
	* 
	W0930 21:02:13.983882   72340 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 21:02:13.985493   72340 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-997816 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-997816 -n no-preload-997816
E0930 21:02:14.224283   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kindnet-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:02:15.566555   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/calico-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:02:15.857521   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/auto-207733/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-997816 -n no-preload-997816: exit status 3 (18.56893049s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 21:02:32.555858   72921 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.93:22: connect: no route to host
	E0930 21:02:32.555877   72921 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.93:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-997816" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.13s)
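The arithmetic is the same here: the stop command runs for 2m0.55s (120 one-second polls plus setup) before exiting 82, and the follow-up status probe takes another 18.57s, which together account for almost all of the 139.13s test duration. The post-mortem itself is a single status call whose exit status 3 the harness tolerates ("may be ok") before deciding the host is unreachable and skipping log retrieval. A rough sketch of that check, assuming the relative binary path seen in the log and the same workspace layout (not part of the harness itself):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState runs the same probe the harness issues after a failed stop and
// returns the trimmed {{.Host}} value. Binary path and profile name are the
// ones that appear in the log; adjust them for a different workspace.
func hostState(profile string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := hostState("no-preload-997816")
	if err != nil {
		// Exit status 3 here only means the node could not be reached over SSH
		// ("no route to host" above), which the harness treats as tolerable.
		fmt.Printf("status error: %v (may be ok)\n", err)
	}
	if state != "Running" {
		fmt.Printf("host is not running, skipping log retrieval (state=%q)\n", state)
	}
}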

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-291511 --alsologtostderr -v=3
E0930 21:01:02.540263   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kindnet-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:12.781611   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kindnet-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:33.263008   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kindnet-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:34.590423   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/calico-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:34.596798   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/calico-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:34.608134   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/calico-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:34.629545   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/calico-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:34.670963   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/calico-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:34.752442   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/calico-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:34.914321   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/calico-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:35.235609   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/calico-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:35.877519   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/calico-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:37.158771   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/calico-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:39.720446   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/calico-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:44.842414   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/calico-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:55.084444   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/calico-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:55.759556   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/custom-flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:55.766016   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/custom-flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:55.777391   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/custom-flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:55.798880   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/custom-flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:55.840287   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/custom-flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:55.921789   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/custom-flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:56.083517   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/custom-flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:56.405370   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/custom-flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:57.047066   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/custom-flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:01:58.328675   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/custom-flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:02:00.890732   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/custom-flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-291511 --alsologtostderr -v=3: exit status 82 (2m0.520843778s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-291511"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 21:00:58.827035   72605 out.go:345] Setting OutFile to fd 1 ...
	I0930 21:00:58.827285   72605 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:00:58.827294   72605 out.go:358] Setting ErrFile to fd 2...
	I0930 21:00:58.827297   72605 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:00:58.827464   72605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 21:00:58.827728   72605 out.go:352] Setting JSON to false
	I0930 21:00:58.827821   72605 mustload.go:65] Loading cluster: default-k8s-diff-port-291511
	I0930 21:00:58.828301   72605 config.go:182] Loaded profile config "default-k8s-diff-port-291511": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:00:58.828396   72605 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/config.json ...
	I0930 21:00:58.828610   72605 mustload.go:65] Loading cluster: default-k8s-diff-port-291511
	I0930 21:00:58.828738   72605 config.go:182] Loaded profile config "default-k8s-diff-port-291511": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:00:58.828763   72605 stop.go:39] StopHost: default-k8s-diff-port-291511
	I0930 21:00:58.829151   72605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:00:58.829195   72605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:00:58.845362   72605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32875
	I0930 21:00:58.845932   72605 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:00:58.846507   72605 main.go:141] libmachine: Using API Version  1
	I0930 21:00:58.846530   72605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:00:58.846954   72605 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:00:58.849367   72605 out.go:177] * Stopping node "default-k8s-diff-port-291511"  ...
	I0930 21:00:58.850917   72605 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0930 21:00:58.850945   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:00:58.851165   72605 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0930 21:00:58.851186   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:00:58.854128   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:00:58.854511   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 21:59:33 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:00:58.854541   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:00:58.854714   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:00:58.854878   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:00:58.855003   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:00:58.855119   72605 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:00:58.959914   72605 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0930 21:00:59.020484   72605 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0930 21:00:59.087849   72605 main.go:141] libmachine: Stopping "default-k8s-diff-port-291511"...
	I0930 21:00:59.087879   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:00:59.089917   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Stop
	I0930 21:00:59.093776   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 0/120
	I0930 21:01:00.095183   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 1/120
	I0930 21:01:01.096947   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 2/120
	I0930 21:01:02.098599   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 3/120
	I0930 21:01:03.100127   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 4/120
	I0930 21:01:04.102649   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 5/120
	I0930 21:01:05.104196   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 6/120
	I0930 21:01:06.105574   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 7/120
	I0930 21:01:07.107178   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 8/120
	I0930 21:01:08.108675   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 9/120
	I0930 21:01:09.110537   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 10/120
	I0930 21:01:10.112066   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 11/120
	I0930 21:01:11.113592   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 12/120
	I0930 21:01:12.115467   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 13/120
	I0930 21:01:13.117250   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 14/120
	I0930 21:01:14.119613   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 15/120
	I0930 21:01:15.121170   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 16/120
	I0930 21:01:16.122790   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 17/120
	I0930 21:01:17.124808   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 18/120
	I0930 21:01:18.126396   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 19/120
	I0930 21:01:19.128220   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 20/120
	I0930 21:01:20.129824   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 21/120
	I0930 21:01:21.131577   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 22/120
	I0930 21:01:22.133089   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 23/120
	I0930 21:01:23.134589   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 24/120
	I0930 21:01:24.136751   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 25/120
	I0930 21:01:25.138103   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 26/120
	I0930 21:01:26.139760   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 27/120
	I0930 21:01:27.141129   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 28/120
	I0930 21:01:28.142530   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 29/120
	I0930 21:01:29.144046   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 30/120
	I0930 21:01:30.145618   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 31/120
	I0930 21:01:31.147129   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 32/120
	I0930 21:01:32.149028   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 33/120
	I0930 21:01:33.150607   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 34/120
	I0930 21:01:34.152912   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 35/120
	I0930 21:01:35.154273   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 36/120
	I0930 21:01:36.156021   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 37/120
	I0930 21:01:37.157552   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 38/120
	I0930 21:01:38.158887   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 39/120
	I0930 21:01:39.161237   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 40/120
	I0930 21:01:40.162649   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 41/120
	I0930 21:01:41.164203   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 42/120
	I0930 21:01:42.165743   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 43/120
	I0930 21:01:43.167090   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 44/120
	I0930 21:01:44.169311   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 45/120
	I0930 21:01:45.170673   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 46/120
	I0930 21:01:46.172184   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 47/120
	I0930 21:01:47.174376   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 48/120
	I0930 21:01:48.175944   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 49/120
	I0930 21:01:49.178350   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 50/120
	I0930 21:01:50.179561   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 51/120
	I0930 21:01:51.181129   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 52/120
	I0930 21:01:52.182429   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 53/120
	I0930 21:01:53.183806   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 54/120
	I0930 21:01:54.186277   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 55/120
	I0930 21:01:55.187672   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 56/120
	I0930 21:01:56.188851   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 57/120
	I0930 21:01:57.190645   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 58/120
	I0930 21:01:58.192204   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 59/120
	I0930 21:01:59.193622   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 60/120
	I0930 21:02:00.194972   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 61/120
	I0930 21:02:01.196471   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 62/120
	I0930 21:02:02.198076   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 63/120
	I0930 21:02:03.199726   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 64/120
	I0930 21:02:04.201911   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 65/120
	I0930 21:02:05.203818   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 66/120
	I0930 21:02:06.205276   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 67/120
	I0930 21:02:07.206638   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 68/120
	I0930 21:02:08.208126   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 69/120
	I0930 21:02:09.210569   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 70/120
	I0930 21:02:10.212106   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 71/120
	I0930 21:02:11.214210   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 72/120
	I0930 21:02:12.216144   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 73/120
	I0930 21:02:13.217562   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 74/120
	I0930 21:02:14.220006   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 75/120
	I0930 21:02:15.221806   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 76/120
	I0930 21:02:16.223067   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 77/120
	I0930 21:02:17.224687   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 78/120
	I0930 21:02:18.226197   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 79/120
	I0930 21:02:19.227583   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 80/120
	I0930 21:02:20.229130   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 81/120
	I0930 21:02:21.230643   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 82/120
	I0930 21:02:22.231919   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 83/120
	I0930 21:02:23.233616   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 84/120
	I0930 21:02:24.235813   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 85/120
	I0930 21:02:25.238386   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 86/120
	I0930 21:02:26.239978   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 87/120
	I0930 21:02:27.241483   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 88/120
	I0930 21:02:28.243151   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 89/120
	I0930 21:02:29.244684   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 90/120
	I0930 21:02:30.246742   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 91/120
	I0930 21:02:31.248320   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 92/120
	I0930 21:02:32.249844   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 93/120
	I0930 21:02:33.251240   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 94/120
	I0930 21:02:34.253463   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 95/120
	I0930 21:02:35.255012   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 96/120
	I0930 21:02:36.256518   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 97/120
	I0930 21:02:37.258262   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 98/120
	I0930 21:02:38.259758   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 99/120
	I0930 21:02:39.262500   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 100/120
	I0930 21:02:40.264359   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 101/120
	I0930 21:02:41.265870   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 102/120
	I0930 21:02:42.267598   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 103/120
	I0930 21:02:43.268966   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 104/120
	I0930 21:02:44.271420   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 105/120
	I0930 21:02:45.273297   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 106/120
	I0930 21:02:46.274673   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 107/120
	I0930 21:02:47.276458   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 108/120
	I0930 21:02:48.278092   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 109/120
	I0930 21:02:49.279612   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 110/120
	I0930 21:02:50.281303   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 111/120
	I0930 21:02:51.282929   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 112/120
	I0930 21:02:52.284478   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 113/120
	I0930 21:02:53.286235   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 114/120
	I0930 21:02:54.288717   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 115/120
	I0930 21:02:55.290240   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 116/120
	I0930 21:02:56.291928   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 117/120
	I0930 21:02:57.293342   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 118/120
	I0930 21:02:58.294919   72605 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for machine to stop 119/120
	I0930 21:02:59.296471   72605 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0930 21:02:59.296524   72605 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0930 21:02:59.298460   72605 out.go:201] 
	W0930 21:02:59.299873   72605 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0930 21:02:59.299896   72605 out.go:270] * 
	* 
	W0930 21:02:59.302388   72605 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 21:02:59.303691   72605 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-291511 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-291511 -n default-k8s-diff-port-291511
E0930 21:03:02.093705   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/enable-default-cni-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:08.419726   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:08.426128   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:08.437521   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:08.458952   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:08.500373   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:08.581856   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:08.743506   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:09.065535   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:09.707724   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:10.989340   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:12.335038   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/enable-default-cni-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:13.551612   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:15.484244   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/bridge-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:15.490643   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/bridge-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:15.502060   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/bridge-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:15.523474   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/bridge-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:15.564911   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/bridge-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:15.646641   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/bridge-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:15.808477   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/bridge-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:16.130249   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/bridge-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:16.772354   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/bridge-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:17.697658   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/custom-flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-291511 -n default-k8s-diff-port-291511: exit status 3 (18.562935781s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 21:03:17.867906   73466 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.2:22: connect: no route to host
	E0930 21:03:17.867927   73466 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.2:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-291511" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.08s)
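The stop failure above follows a fixed pattern: libmachine polls the VM state roughly once per second for 120 attempts, then gives up with GUEST_STOP_TIMEOUT and exit status 82. Below is a minimal Go sketch of that kind of bounded poll loop; it is illustrative only, assuming a hypothetical isRunning() driver query, and is not minikube's actual implementation.

	// Bounded "wait for stop" loop, shaped like the libmachine log above:
	// 120 attempts, about one second apart, then a hard failure.
	// Sketch only; isRunning is a hypothetical stand-in for a driver state
	// query, not a real libmachine call.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func isRunning() bool { return true } // hypothetical driver state check

	func waitForStop(attempts int, interval time.Duration) error {
		for i := 0; i < attempts; i++ {
			if !isRunning() {
				return nil // VM reached the Stopped state
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(interval)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := waitForStop(120, time.Second); err != nil {
			fmt.Println("stop err:", err)
		}
	}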

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-621406 create -f testdata/busybox.yaml
E0930 21:02:16.254641   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/custom-flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-621406 create -f testdata/busybox.yaml: exit status 1 (43.866957ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-621406" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-621406 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-621406 -n old-k8s-version-621406
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-621406 -n old-k8s-version-621406: exit status 6 (211.893969ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 21:02:16.490469   72993 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-621406" does not appear in /home/jenkins/minikube-integration/19736-7672/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-621406" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-621406 -n old-k8s-version-621406
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-621406 -n old-k8s-version-621406: exit status 6 (216.924362ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 21:02:16.709575   73023 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-621406" does not appear in /home/jenkins/minikube-integration/19736-7672/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-621406" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)
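The DeployApp failure is purely a kubeconfig problem: the old-k8s-version-621406 context is missing, so every kubectl call exits immediately with "context ... does not exist". A small Go sketch of guarding kubectl invocations by first listing contexts with `kubectl config get-contexts -o name` follows; contextExists and the hard-coded context name are illustrative assumptions, not part of the test suite.

	// Check whether a kubeconfig context exists before running kubectl
	// against it. Assumes kubectl is on PATH; contextExists is a
	// hypothetical helper, not part of the minikube test code.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func contextExists(name string) (bool, error) {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		for _, ctx := range strings.Fields(string(out)) {
			if ctx == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := contextExists("old-k8s-version-621406")
		fmt.Println("context present:", ok, "err:", err)
	}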

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (79.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-621406 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0930 21:02:18.381684   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-621406 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m19.237374069s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-621406 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-621406 describe deploy/metrics-server -n kube-system
E0930 21:03:35.978581   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/bridge-207733/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-621406 describe deploy/metrics-server -n kube-system: exit status 1 (48.409615ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-621406" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-621406 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-621406 -n old-k8s-version-621406
E0930 21:03:36.146270   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kindnet-207733/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-621406 -n old-k8s-version-621406: exit status 6 (216.451492ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 21:03:36.210692   73768 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-621406" does not appear in /home/jenkins/minikube-integration/19736-7672/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-621406" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (79.50s)
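The addon enable fails inside the guest because kubectl apply cannot reach the apiserver ("The connection to the server localhost:8443 was refused"), i.e. the control plane never came back after the failed stop. A hedged sketch of probing the apiserver port before applying manifests is below; the address comes from the error message, while waitForAPIServer and the timeout values are assumptions for illustration.

	// Wait until a TCP connection to the apiserver endpoint succeeds before
	// applying addon manifests. Sketch only; the 30s/2s timeouts are
	// arbitrary and waitForAPIServer is a hypothetical helper.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func waitForAPIServer(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil // port is accepting connections
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("apiserver at %s not reachable within %s", addr, timeout)
	}

	func main() {
		fmt.Println(waitForAPIServer("localhost:8443", 30*time.Second))
	}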

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-256103 -n embed-certs-256103
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-256103 -n embed-certs-256103: exit status 3 (3.167833974s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 21:02:25.227922   73098 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.90:22: connect: no route to host
	E0930 21:02:25.227940   73098 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.90:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-256103 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-256103 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154389792s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.90:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-256103 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-256103 -n embed-certs-256103
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-256103 -n embed-certs-256103: exit status 3 (3.06137484s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 21:02:34.443964   73181 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.90:22: connect: no route to host
	E0930 21:02:34.443995   73181 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.90:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-256103" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
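The post-mortem helpers keep re-running the same probe, `out/minikube-linux-amd64 status --format={{.Host}} -p <profile> -n <profile>`, and interpret the non-zero exit code (3 here, 6 when the kubeconfig entry is missing) together with the printed host state. A small Go sketch of driving that command and separating the printed state from the exit code follows; the binary path and profile name are copied from the log and will differ outside this CI workspace, and hostStatus itself is only an illustrative helper.

	// Run the status probe used by the test helpers and report both the
	// printed host state and the process exit code. Sketch only; hostStatus
	// is a hypothetical helper, not part of helpers_test.go.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func hostStatus(profile string) (state string, exitCode int, err error) {
		cmd := exec.Command("out/minikube-linux-amd64",
			"status", "--format={{.Host}}", "-p", profile, "-n", profile)
		out, err := cmd.Output()
		state = strings.TrimSpace(string(out))
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return state, ee.ExitCode(), nil // e.g. 3 ("Error") or 6 (stale kubeconfig)
		}
		return state, 0, err
	}

	func main() {
		fmt.Println(hostStatus("embed-certs-256103"))
	}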

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-997816 -n no-preload-997816
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-997816 -n no-preload-997816: exit status 3 (3.167897976s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 21:02:35.723899   73211 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.93:22: connect: no route to host
	E0930 21:02:35.723922   73211 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.93:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-997816 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0930 21:02:36.736246   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/custom-flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-997816 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15387644s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.93:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-997816 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-997816 -n no-preload-997816
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-997816 -n no-preload-997816: exit status 3 (3.062355903s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 21:02:44.940018   73329 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.93:22: connect: no route to host
	E0930 21:02:44.940038   73329 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.93:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-997816" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-291511 -n default-k8s-diff-port-291511
E0930 21:03:18.053716   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/bridge-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:18.672901   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:20.615493   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/bridge-207733/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-291511 -n default-k8s-diff-port-291511: exit status 3 (3.16801044s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 21:03:21.036021   73582 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.2:22: connect: no route to host
	E0930 21:03:21.036050   73582 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.2:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-291511 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0930 21:03:25.737203   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/bridge-207733/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-291511 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154525265s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.2:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-291511 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-291511 -n default-k8s-diff-port-291511
E0930 21:03:28.914310   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:28.935891   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-291511 -n default-k8s-diff-port-291511: exit status 3 (3.061462971s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 21:03:30.252012   73661 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.2:22: connect: no route to host
	E0930 21:03:30.252036   73661 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.2:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-291511" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
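The E0930 ... cert_rotation.go:171 lines interleaved throughout this run are a separate, low-impact symptom: the shared kubeconfig still references client.crt files of profiles (flannel-207733, bridge-207733, functional-750630, ...) whose .minikube/profiles directories have already been deleted, so client-go's certificate reload keeps failing. A hedged sketch of flagging such stale entries with client-go's clientcmd loader is below; the kubeconfig path is an example and this snippet is not something the test suite actually runs.

	// List kubeconfig users whose client-certificate file no longer exists,
	// the condition behind the repeated cert_rotation errors above.
	// Sketch only; requires k8s.io/client-go and a real kubeconfig path.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		path := os.ExpandEnv("$HOME/.kube/config") // example path
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			panic(err)
		}
		for name, auth := range cfg.AuthInfos {
			if auth.ClientCertificate == "" {
				continue // cert embedded or not used
			}
			if _, err := os.Stat(auth.ClientCertificate); err != nil {
				fmt.Printf("user %q: client cert missing: %v\n", name, err)
			}
		}
	}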

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (756.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-621406 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0930 21:03:49.396613   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:03:56.459859   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/bridge-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:04:13.779372   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/enable-default-cni-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:04:18.450733   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/calico-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:04:30.358135   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:04:31.997913   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/auto-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:04:37.421984   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/bridge-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:04:39.619257   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/custom-flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:04:59.699848   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/auto-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:05:35.701216   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/enable-default-cni-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:05:52.280131   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:05:52.286641   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kindnet-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:05:55.310860   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:05:59.343420   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/bridge-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:06:19.987700   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kindnet-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:06:34.591464   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/calico-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:06:55.759866   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/custom-flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:07:02.292463   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/calico-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:07:23.460895   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/custom-flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:07:51.838993   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/enable-default-cni-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:08:08.419704   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:08:15.484616   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/bridge-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:08:19.542877   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/enable-default-cni-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:08:28.936245   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:08:36.121846   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:08:43.185470   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/bridge-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:09:31.997393   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/auto-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:09:52.005558   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:10:52.287297   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kindnet-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:10:55.311481   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:11:34.590543   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/calico-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:11:55.759488   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/custom-flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-621406 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m32.963322447s)

                                                
                                                
-- stdout --
	* [old-k8s-version-621406] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-621406" primary control-plane node in "old-k8s-version-621406" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-621406" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 21:03:42.750102   73900 out.go:345] Setting OutFile to fd 1 ...
	I0930 21:03:42.750367   73900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:03:42.750377   73900 out.go:358] Setting ErrFile to fd 2...
	I0930 21:03:42.750383   73900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:03:42.750578   73900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 21:03:42.751109   73900 out.go:352] Setting JSON to false
	I0930 21:03:42.752040   73900 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6366,"bootTime":1727723857,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 21:03:42.752140   73900 start.go:139] virtualization: kvm guest
	I0930 21:03:42.754146   73900 out.go:177] * [old-k8s-version-621406] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 21:03:42.755446   73900 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 21:03:42.755456   73900 notify.go:220] Checking for updates...
	I0930 21:03:42.758261   73900 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 21:03:42.759566   73900 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:03:42.760907   73900 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 21:03:42.762342   73900 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 21:03:42.763561   73900 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 21:03:42.765356   73900 config.go:182] Loaded profile config "old-k8s-version-621406": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0930 21:03:42.765773   73900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:03:42.765822   73900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:03:42.780605   73900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45071
	I0930 21:03:42.781022   73900 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:03:42.781550   73900 main.go:141] libmachine: Using API Version  1
	I0930 21:03:42.781583   73900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:03:42.781912   73900 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:03:42.782160   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:03:42.784603   73900 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0930 21:03:42.785760   73900 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 21:03:42.786115   73900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:03:42.786156   73900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:03:42.800937   73900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37359
	I0930 21:03:42.801409   73900 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:03:42.801882   73900 main.go:141] libmachine: Using API Version  1
	I0930 21:03:42.801905   73900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:03:42.802216   73900 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:03:42.802397   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:03:42.838423   73900 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 21:03:42.839832   73900 start.go:297] selected driver: kvm2
	I0930 21:03:42.839847   73900 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:03:42.839953   73900 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 21:03:42.840605   73900 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 21:03:42.840667   73900 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 21:03:42.856119   73900 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 21:03:42.856550   73900 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:03:42.856580   73900 cni.go:84] Creating CNI manager for ""
	I0930 21:03:42.856630   73900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:03:42.856665   73900 start.go:340] cluster config:
	{Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:03:42.856778   73900 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 21:03:42.858732   73900 out.go:177] * Starting "old-k8s-version-621406" primary control-plane node in "old-k8s-version-621406" cluster
	I0930 21:03:42.859876   73900 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 21:03:42.859912   73900 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0930 21:03:42.859929   73900 cache.go:56] Caching tarball of preloaded images
	I0930 21:03:42.860020   73900 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 21:03:42.860031   73900 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0930 21:03:42.860153   73900 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/config.json ...
	I0930 21:03:42.860340   73900 start.go:360] acquireMachinesLock for old-k8s-version-621406: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 21:07:49.889033   73900 start.go:364] duration metric: took 4m7.028659379s to acquireMachinesLock for "old-k8s-version-621406"
	I0930 21:07:49.889104   73900 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:07:49.889111   73900 fix.go:54] fixHost starting: 
	I0930 21:07:49.889542   73900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:49.889600   73900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:49.906767   73900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43385
	I0930 21:07:49.907283   73900 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:49.907856   73900 main.go:141] libmachine: Using API Version  1
	I0930 21:07:49.907889   73900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:49.908203   73900 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:49.908397   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:07:49.908542   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetState
	I0930 21:07:49.910270   73900 fix.go:112] recreateIfNeeded on old-k8s-version-621406: state=Stopped err=<nil>
	I0930 21:07:49.910306   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	W0930 21:07:49.910441   73900 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:07:49.912646   73900 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-621406" ...
	I0930 21:07:49.914747   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .Start
	I0930 21:07:49.914948   73900 main.go:141] libmachine: (old-k8s-version-621406) Ensuring networks are active...
	I0930 21:07:49.915796   73900 main.go:141] libmachine: (old-k8s-version-621406) Ensuring network default is active
	I0930 21:07:49.916225   73900 main.go:141] libmachine: (old-k8s-version-621406) Ensuring network mk-old-k8s-version-621406 is active
	I0930 21:07:49.916890   73900 main.go:141] libmachine: (old-k8s-version-621406) Getting domain xml...
	I0930 21:07:49.917688   73900 main.go:141] libmachine: (old-k8s-version-621406) Creating domain...
	I0930 21:07:51.277867   73900 main.go:141] libmachine: (old-k8s-version-621406) Waiting to get IP...
	I0930 21:07:51.279001   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:51.279451   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:51.279552   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:51.279437   74917 retry.go:31] will retry after 307.582619ms: waiting for machine to come up
	I0930 21:07:51.589030   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:51.589414   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:51.589445   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:51.589368   74917 retry.go:31] will retry after 370.683214ms: waiting for machine to come up
	I0930 21:07:51.961914   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:51.962474   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:51.962511   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:51.962415   74917 retry.go:31] will retry after 428.703419ms: waiting for machine to come up
	I0930 21:07:52.393154   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:52.393682   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:52.393750   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:52.393673   74917 retry.go:31] will retry after 514.254023ms: waiting for machine to come up
	I0930 21:07:52.909622   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:52.910169   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:52.910202   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:52.910132   74917 retry.go:31] will retry after 605.019848ms: waiting for machine to come up
	I0930 21:07:53.517276   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:53.517911   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:53.517943   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:53.517858   74917 retry.go:31] will retry after 856.018614ms: waiting for machine to come up
	I0930 21:07:54.376343   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:54.376838   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:54.376862   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:54.376794   74917 retry.go:31] will retry after 740.749778ms: waiting for machine to come up
	I0930 21:07:55.119090   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:55.119631   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:55.119660   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:55.119583   74917 retry.go:31] will retry after 1.444139076s: waiting for machine to come up
	I0930 21:07:56.566261   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:56.566744   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:56.566771   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:56.566695   74917 retry.go:31] will retry after 1.681362023s: waiting for machine to come up
	I0930 21:07:58.250468   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:58.251041   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:58.251062   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:58.250979   74917 retry.go:31] will retry after 2.260492343s: waiting for machine to come up
	I0930 21:08:00.513613   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:00.514129   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:08:00.514194   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:08:00.514117   74917 retry.go:31] will retry after 2.449694064s: waiting for machine to come up
	I0930 21:08:02.965767   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:02.966135   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:08:02.966157   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:08:02.966086   74917 retry.go:31] will retry after 2.951226221s: waiting for machine to come up
	I0930 21:08:05.919389   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:05.919894   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:08:05.919937   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:08:05.919827   74917 retry.go:31] will retry after 2.747969391s: waiting for machine to come up
	I0930 21:08:08.671179   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.671686   73900 main.go:141] libmachine: (old-k8s-version-621406) Found IP for machine: 192.168.72.159
	I0930 21:08:08.671711   73900 main.go:141] libmachine: (old-k8s-version-621406) Reserving static IP address...
	I0930 21:08:08.671729   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has current primary IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.672178   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "old-k8s-version-621406", mac: "52:54:00:9b:e3:ab", ip: "192.168.72.159"} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.672220   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | skip adding static IP to network mk-old-k8s-version-621406 - found existing host DHCP lease matching {name: "old-k8s-version-621406", mac: "52:54:00:9b:e3:ab", ip: "192.168.72.159"}
	I0930 21:08:08.672231   73900 main.go:141] libmachine: (old-k8s-version-621406) Reserved static IP address: 192.168.72.159
	I0930 21:08:08.672246   73900 main.go:141] libmachine: (old-k8s-version-621406) Waiting for SSH to be available...
	I0930 21:08:08.672254   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | Getting to WaitForSSH function...
	I0930 21:08:08.674566   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.674931   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.674969   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.675128   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | Using SSH client type: external
	I0930 21:08:08.675170   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa (-rw-------)
	I0930 21:08:08.675212   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:08:08.675229   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | About to run SSH command:
	I0930 21:08:08.675244   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | exit 0
	I0930 21:08:08.799368   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | SSH cmd err, output: <nil>: 
	I0930 21:08:08.799751   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetConfigRaw
	I0930 21:08:08.800421   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:08.803151   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.803596   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.803620   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.803922   73900 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/config.json ...
	I0930 21:08:08.804195   73900 machine.go:93] provisionDockerMachine start ...
	I0930 21:08:08.804246   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:08.804502   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:08.806822   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.807240   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.807284   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.807521   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:08.807735   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.807890   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.808077   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:08.808239   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:08.808480   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:08.808493   73900 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:08:08.912058   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:08:08.912135   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 21:08:08.912407   73900 buildroot.go:166] provisioning hostname "old-k8s-version-621406"
	I0930 21:08:08.912432   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 21:08:08.912662   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:08.915366   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.915722   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.915750   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.915892   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:08.916107   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.916330   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.916492   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:08.916673   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:08.916932   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:08.916957   73900 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-621406 && echo "old-k8s-version-621406" | sudo tee /etc/hostname
	I0930 21:08:09.034260   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-621406
	
	I0930 21:08:09.034296   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.037149   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.037509   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.037538   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.037799   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.037986   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.038163   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.038327   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.038473   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:09.038695   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:09.038714   73900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-621406' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-621406/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-621406' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:08:09.152190   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:08:09.152228   73900 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:08:09.152255   73900 buildroot.go:174] setting up certificates
	I0930 21:08:09.152275   73900 provision.go:84] configureAuth start
	I0930 21:08:09.152288   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 21:08:09.152577   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:09.155203   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.155589   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.155620   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.155783   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.157964   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.158362   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.158392   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.158520   73900 provision.go:143] copyHostCerts
	I0930 21:08:09.158592   73900 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:08:09.158605   73900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:08:09.158704   73900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:08:09.158851   73900 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:08:09.158864   73900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:08:09.158895   73900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:08:09.158970   73900 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:08:09.158977   73900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:08:09.158996   73900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:08:09.159054   73900 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-621406 san=[127.0.0.1 192.168.72.159 localhost minikube old-k8s-version-621406]
	I0930 21:08:09.301267   73900 provision.go:177] copyRemoteCerts
	I0930 21:08:09.301322   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:08:09.301349   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.304344   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.304766   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.304796   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.304998   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.305187   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.305321   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.305439   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:09.390851   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0930 21:08:09.415712   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 21:08:09.439567   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:08:09.463427   73900 provision.go:87] duration metric: took 311.139024ms to configureAuth
	I0930 21:08:09.463459   73900 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:08:09.463713   73900 config.go:182] Loaded profile config "old-k8s-version-621406": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0930 21:08:09.463809   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.466757   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.467129   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.467160   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.467326   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.467513   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.467694   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.467843   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.468004   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:09.468175   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:09.468190   73900 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:08:09.684657   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:08:09.684684   73900 machine.go:96] duration metric: took 880.473418ms to provisionDockerMachine
	I0930 21:08:09.684698   73900 start.go:293] postStartSetup for "old-k8s-version-621406" (driver="kvm2")
	I0930 21:08:09.684709   73900 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:08:09.684730   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.685075   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:08:09.685114   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.688051   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.688517   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.688542   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.688725   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.688928   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.689070   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.689265   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:09.770572   73900 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:08:09.775149   73900 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:08:09.775181   73900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:08:09.775268   73900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:08:09.775364   73900 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:08:09.775453   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:08:09.784753   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:09.807989   73900 start.go:296] duration metric: took 123.276522ms for postStartSetup
	I0930 21:08:09.808033   73900 fix.go:56] duration metric: took 19.918922935s for fixHost
	I0930 21:08:09.808053   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.811242   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.811656   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.811692   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.811852   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.812064   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.812239   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.812380   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.812522   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:09.812704   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:09.812719   73900 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:08:09.916349   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730489.889323893
	
	I0930 21:08:09.916376   73900 fix.go:216] guest clock: 1727730489.889323893
	I0930 21:08:09.916384   73900 fix.go:229] Guest: 2024-09-30 21:08:09.889323893 +0000 UTC Remote: 2024-09-30 21:08:09.808037625 +0000 UTC m=+267.093327666 (delta=81.286268ms)
	I0930 21:08:09.916403   73900 fix.go:200] guest clock delta is within tolerance: 81.286268ms
	I0930 21:08:09.916408   73900 start.go:83] releasing machines lock for "old-k8s-version-621406", held for 20.027328296s
	I0930 21:08:09.916440   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.916766   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:09.919729   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.920070   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.920105   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.920238   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.920831   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.921050   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.921182   73900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:08:09.921235   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.921328   73900 ssh_runner.go:195] Run: cat /version.json
	I0930 21:08:09.921351   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.924258   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.924650   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.924695   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.924722   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.924805   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.924986   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.925170   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.925176   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.925206   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.925341   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:09.925405   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.925534   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.925698   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.925829   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:10.043500   73900 ssh_runner.go:195] Run: systemctl --version
	I0930 21:08:10.051029   73900 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:08:10.199844   73900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:08:10.206433   73900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:08:10.206519   73900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:08:10.223346   73900 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 21:08:10.223375   73900 start.go:495] detecting cgroup driver to use...
	I0930 21:08:10.223449   73900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:08:10.241056   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:08:10.257197   73900 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:08:10.257261   73900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:08:10.271847   73900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:08:10.287465   73900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:08:10.419248   73900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:08:10.583440   73900 docker.go:233] disabling docker service ...
	I0930 21:08:10.583518   73900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:08:10.599561   73900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:08:10.613321   73900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:08:10.763071   73900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:08:10.891222   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:08:10.906985   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:08:10.927838   73900 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0930 21:08:10.927911   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.940002   73900 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:08:10.940084   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.953143   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.965922   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
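	For reference, a minimal sketch of the cri-o drop-in that the three sed commands above leave behind in /etc/crio/crio.conf.d/02-crio.conf. Only the three key/value pairs are taken from the commands shown in this log; the TOML section headers are an assumption about the stock minikube layout, not something visible in this output:
	
		[crio.image]
		# rewritten by the pause_image sed above
		pause_image = "registry.k8s.io/pause:3.2"
	
		[crio.runtime]
		# cgroup driver forced to cgroupfs; conmon_cgroup re-added right after it
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"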
	I0930 21:08:10.985782   73900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:08:11.001825   73900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:08:11.015777   73900 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:08:11.015835   73900 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:08:11.034821   73900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 21:08:11.049855   73900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:11.203755   73900 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 21:08:11.312949   73900 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:08:11.313060   73900 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:08:11.319280   73900 start.go:563] Will wait 60s for crictl version
	I0930 21:08:11.319355   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:11.323826   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:08:11.374934   73900 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 21:08:11.375023   73900 ssh_runner.go:195] Run: crio --version
	I0930 21:08:11.415466   73900 ssh_runner.go:195] Run: crio --version
	I0930 21:08:11.449622   73900 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0930 21:08:11.450773   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:11.454019   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:11.454504   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:11.454534   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:11.454807   73900 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0930 21:08:11.459034   73900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:08:11.473162   73900 kubeadm.go:883] updating cluster {Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:08:11.473294   73900 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 21:08:11.473367   73900 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:11.518200   73900 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0930 21:08:11.518275   73900 ssh_runner.go:195] Run: which lz4
	I0930 21:08:11.522442   73900 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 21:08:11.526704   73900 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 21:08:11.526752   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0930 21:08:13.134916   73900 crio.go:462] duration metric: took 1.612498859s to copy over tarball
	I0930 21:08:13.135038   73900 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 21:08:16.170053   73900 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.034985922s)
	I0930 21:08:16.170080   73900 crio.go:469] duration metric: took 3.035125251s to extract the tarball
	I0930 21:08:16.170088   73900 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 21:08:16.213559   73900 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:16.249853   73900 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0930 21:08:16.249876   73900 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0930 21:08:16.249943   73900 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:16.249970   73900 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.249987   73900 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.250030   73900 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0930 21:08:16.250031   73900 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.250047   73900 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.250049   73900 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.250083   73900 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.251750   73900 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0930 21:08:16.251771   73900 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.251768   73900 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:16.251750   73900 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.251832   73900 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.251854   73900 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.251891   73900 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.252031   73900 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.456847   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.468006   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0930 21:08:16.516253   73900 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0930 21:08:16.516294   73900 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.516336   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.524699   73900 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0930 21:08:16.524743   73900 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0930 21:08:16.524787   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.525738   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.529669   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 21:08:16.561946   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.569090   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.570589   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.571007   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.581971   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.587609   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.630323   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 21:08:16.711058   73900 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0930 21:08:16.711124   73900 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.711190   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.749473   73900 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0930 21:08:16.749521   73900 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.749585   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.769974   73900 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0930 21:08:16.770016   73900 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.770050   73900 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0930 21:08:16.770075   73900 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0930 21:08:16.770087   73900 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.770104   73900 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.770142   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.770160   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.770064   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.770144   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.788241   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.788292   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 21:08:16.788294   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.788339   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.847727   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0930 21:08:16.847798   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.847894   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.938964   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.939000   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.939053   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0930 21:08:16.939090   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.965556   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.965620   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 21:08:17.020497   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:17.074893   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:17.074950   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:17.090437   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 21:08:17.090489   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0930 21:08:17.090437   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:17.174117   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0930 21:08:17.174183   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0930 21:08:17.185553   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0930 21:08:17.185619   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0930 21:08:17.506064   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:17.650598   73900 cache_images.go:92] duration metric: took 1.400704992s to LoadCachedImages
	W0930 21:08:17.650695   73900 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
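The cache misses above come from stat'ing a tarball under ~/.minikube/cache/images/amd64 that was never written, so the load is skipped and the images are pulled later instead. A minimal Go sketch of that stat-before-load check, assuming the on-disk naming inferred from the paths logged above (cachedImagePath is a hypothetical helper, not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachedImagePath maps an image reference to a cache-tarball path.
// Assumption from the logged paths: the tag separator ":" becomes "_"
// (e.g. registry.k8s.io/kube-proxy:v1.20.0 -> registry.k8s.io/kube-proxy_v1.20.0).
func cachedImagePath(cacheRoot, image string) string {
	if i := strings.LastIndex(image, ":"); i >= 0 {
		image = image[:i] + "_" + image[i+1:]
	}
	return filepath.Join(cacheRoot, filepath.FromSlash(image))
}

func main() {
	root := filepath.Join(os.Getenv("HOME"), ".minikube/cache/images/amd64")
	p := cachedImagePath(root, "registry.k8s.io/kube-proxy:v1.20.0")
	if _, err := os.Stat(p); err != nil {
		// Mirrors the "no such file or directory" warning above.
		fmt.Println("cache miss, load would be skipped:", err)
		return
	}
	fmt.Println("cached tarball present:", p)
}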
	I0930 21:08:17.650710   73900 kubeadm.go:934] updating node { 192.168.72.159 8443 v1.20.0 crio true true} ...
	I0930 21:08:17.650834   73900 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-621406 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
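The kubelet drop-in above is rendered from the node's runtime socket, hostname override and IP. A minimal sketch of rendering such a unit with text/template, assuming illustrative template text and field names rather than minikube's actual templates:

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	// Fill the template with the values seen in the log above.
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubeletPath": "/var/lib/minikube/binaries/v1.20.0/kubelet",
		"NodeName":    "old-k8s-version-621406",
		"NodeIP":      "192.168.72.159",
	})
}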
	I0930 21:08:17.650922   73900 ssh_runner.go:195] Run: crio config
	I0930 21:08:17.710096   73900 cni.go:84] Creating CNI manager for ""
	I0930 21:08:17.710124   73900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:17.710139   73900 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:08:17.710164   73900 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.159 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-621406 NodeName:old-k8s-version-621406 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0930 21:08:17.710349   73900 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-621406"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
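The kubeadm document above is written out to /var/tmp/minikube/kubeadm.yaml.new before the init phases run. A small sketch of reading a couple of ClusterConfiguration fields back with gopkg.in/yaml.v3, under the assumption that the module is available and with a trimmed, illustrative document:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// clusterConfig models only the fields this sketch cares about.
type clusterConfig struct {
	KubernetesVersion string `yaml:"kubernetesVersion"`
	Networking        struct {
		PodSubnet     string `yaml:"podSubnet"`
		ServiceSubnet string `yaml:"serviceSubnet"`
	} `yaml:"networking"`
}

const doc = `
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
`

func main() {
	var cc clusterConfig
	if err := yaml.Unmarshal([]byte(doc), &cc); err != nil {
		panic(err)
	}
	fmt.Println(cc.KubernetesVersion, cc.Networking.PodSubnet, cc.Networking.ServiceSubnet)
}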
	I0930 21:08:17.710425   73900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0930 21:08:17.721028   73900 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:08:17.721111   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:08:17.731462   73900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0930 21:08:17.749715   73900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:08:17.767565   73900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0930 21:08:17.786411   73900 ssh_runner.go:195] Run: grep 192.168.72.159	control-plane.minikube.internal$ /etc/hosts
	I0930 21:08:17.790338   73900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:08:17.803957   73900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:17.948898   73900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:08:17.969102   73900 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406 for IP: 192.168.72.159
	I0930 21:08:17.969133   73900 certs.go:194] generating shared ca certs ...
	I0930 21:08:17.969150   73900 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:17.969338   73900 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:08:17.969387   73900 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:08:17.969400   73900 certs.go:256] generating profile certs ...
	I0930 21:08:17.969543   73900 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/client.key
	I0930 21:08:17.969621   73900 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.key.f3dc5056
	I0930 21:08:17.969674   73900 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.key
	I0930 21:08:17.969833   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:08:17.969875   73900 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:08:17.969886   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:08:17.969926   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:08:17.969961   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:08:17.969999   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:08:17.970055   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:17.970794   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:08:18.007954   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:08:18.041538   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:08:18.077886   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:08:18.118644   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0930 21:08:18.151418   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 21:08:18.199572   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:08:18.235795   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 21:08:18.272729   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:08:18.298727   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:08:18.324074   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:08:18.351209   73900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:08:18.372245   73900 ssh_runner.go:195] Run: openssl version
	I0930 21:08:18.380047   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:08:18.395332   73900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:08:18.401407   73900 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:08:18.401479   73900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:08:18.407744   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 21:08:18.422801   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:08:18.437946   73900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:08:18.443864   73900 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:08:18.443938   73900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:08:18.451554   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:08:18.466856   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:08:18.479324   73900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:18.484321   73900 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:18.484383   73900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:18.490341   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 21:08:18.503117   73900 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:08:18.507986   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:08:18.514974   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:08:18.522140   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:08:18.529366   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:08:18.536056   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:08:18.542787   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
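The run of openssl "-checkend 86400" calls above verifies that each control-plane certificate remains valid for at least another 24 hours. The same check can be done in-process with crypto/x509; a minimal sketch, where certExpiresWithin is a hypothetical helper and the path is simply the first one from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certExpiresWithin reports whether the PEM certificate at path expires
// within the given window (24h mirrors "-checkend 86400").
func certExpiresWithin(path string, window time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(expiring, err)
}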
	I0930 21:08:18.550311   73900 kubeadm.go:392] StartCluster: {Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:08:18.550431   73900 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:08:18.550498   73900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:18.593041   73900 cri.go:89] found id: ""
	I0930 21:08:18.593116   73900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:08:18.603410   73900 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:08:18.603432   73900 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:08:18.603479   73900 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:08:18.614635   73900 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:08:18.615758   73900 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-621406" does not appear in /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:08:18.616488   73900 kubeconfig.go:62] /home/jenkins/minikube-integration/19736-7672/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-621406" cluster setting kubeconfig missing "old-k8s-version-621406" context setting]
	I0930 21:08:18.617394   73900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:18.644144   73900 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:08:18.655764   73900 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.159
	I0930 21:08:18.655806   73900 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:08:18.655819   73900 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:08:18.655877   73900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:18.699283   73900 cri.go:89] found id: ""
	I0930 21:08:18.699376   73900 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:08:18.715248   73900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:08:18.724905   73900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:08:18.724945   73900 kubeadm.go:157] found existing configuration files:
	
	I0930 21:08:18.724990   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:08:18.735611   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:08:18.735682   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:08:18.745604   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:08:18.755199   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:08:18.755261   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:08:18.765450   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:08:18.775187   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:08:18.775268   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:08:18.788080   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:08:18.800668   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:08:18.800727   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:08:18.814084   73900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:08:18.823785   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:18.961698   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.495418   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.713653   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.812667   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.921314   73900 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:08:19.921414   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:20.422349   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:20.922222   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:21.422364   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:21.921493   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:22.421640   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:22.922418   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:23.421851   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:23.921502   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:24.422346   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:24.922000   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:25.422290   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:25.922213   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:26.422100   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:26.922239   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:27.421729   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:27.922374   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:28.421993   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:28.921870   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:29.421786   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:29.921804   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:30.421482   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:30.921969   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:31.422241   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:31.922148   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:32.421504   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:32.921516   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:33.421576   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:33.922082   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:34.421599   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:34.922178   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:35.422199   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:35.922061   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:36.421860   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:36.921513   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:37.422162   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:37.921497   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.422360   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.922305   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.422480   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.922279   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.422089   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.922021   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:41.421727   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:41.921519   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:42.422193   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:42.922495   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:43.422250   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:43.922413   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:44.421962   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:44.921682   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:45.422144   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:45.922206   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:46.422020   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:46.921960   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:47.422296   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:47.921903   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:48.422535   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:48.921484   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:49.421909   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:49.922117   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:50.421606   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:50.921728   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:51.421600   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:51.921716   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:52.421873   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:52.922106   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:53.421968   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:53.921496   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:54.421866   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:54.921995   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:55.421476   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:55.922106   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:56.421660   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:56.922489   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:57.422291   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:57.921737   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:58.421968   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:58.922007   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:59.422173   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:59.921803   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:00.421596   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:00.922123   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:01.422186   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:01.921898   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:02.421894   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:02.922329   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:03.421922   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:03.922360   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:04.421875   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:04.922544   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:05.421939   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:05.921693   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:06.422056   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:06.921627   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:07.422125   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:07.921687   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:08.421694   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:08.922234   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:09.421817   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:09.921704   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:10.422030   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:10.921597   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:11.421700   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:11.922301   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:12.421567   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:12.922171   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:13.422423   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:13.921941   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:14.422494   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:14.922454   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:15.421776   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:15.922567   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:16.421713   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:16.922449   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:17.421644   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:17.922098   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:18.421993   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:18.922084   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:19.421717   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
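The block of pgrep calls above is a roughly 500ms poll that gives up after about a minute without ever seeing a kube-apiserver process, at which point log gathering starts below. A plain Go sketch of that poll-with-timeout pattern, assuming the same pgrep expression (waitForAPIServer is a hypothetical helper, not minikube's api_server.go code):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer re-runs pgrep every interval until it succeeds or the
// context deadline expires. pgrep exits 0 only when a matching process exists.
func waitForAPIServer(ctx context.Context, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver process never appeared: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	fmt.Println(waitForAPIServer(ctx, 500*time.Millisecond))
}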
	I0930 21:09:19.922095   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:19.922178   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:19.962975   73900 cri.go:89] found id: ""
	I0930 21:09:19.963002   73900 logs.go:276] 0 containers: []
	W0930 21:09:19.963014   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:19.963020   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:19.963073   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:19.999741   73900 cri.go:89] found id: ""
	I0930 21:09:19.999769   73900 logs.go:276] 0 containers: []
	W0930 21:09:19.999777   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:19.999782   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:19.999840   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:20.035818   73900 cri.go:89] found id: ""
	I0930 21:09:20.035844   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.035856   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:20.035863   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:20.035924   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:20.072005   73900 cri.go:89] found id: ""
	I0930 21:09:20.072032   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.072042   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:20.072048   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:20.072110   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:20.108229   73900 cri.go:89] found id: ""
	I0930 21:09:20.108258   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.108314   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:20.108325   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:20.108383   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:20.141331   73900 cri.go:89] found id: ""
	I0930 21:09:20.141388   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.141398   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:20.141406   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:20.141466   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:20.175133   73900 cri.go:89] found id: ""
	I0930 21:09:20.175161   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.175169   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:20.175175   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:20.175223   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:20.210529   73900 cri.go:89] found id: ""
	I0930 21:09:20.210566   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.210578   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:20.210594   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:20.210608   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:20.261055   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:20.261095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:20.274212   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:20.274239   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:20.406215   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:20.406246   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:20.406282   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:20.481758   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:20.481794   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:23.019687   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:23.033394   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:23.033450   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:23.078558   73900 cri.go:89] found id: ""
	I0930 21:09:23.078592   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.078604   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:23.078611   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:23.078673   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:23.117833   73900 cri.go:89] found id: ""
	I0930 21:09:23.117860   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.117868   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:23.117875   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:23.117931   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:23.157299   73900 cri.go:89] found id: ""
	I0930 21:09:23.157337   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.157359   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:23.157367   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:23.157438   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:23.196545   73900 cri.go:89] found id: ""
	I0930 21:09:23.196570   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.196579   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:23.196586   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:23.196644   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:23.229359   73900 cri.go:89] found id: ""
	I0930 21:09:23.229390   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.229401   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:23.229409   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:23.229471   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:23.264847   73900 cri.go:89] found id: ""
	I0930 21:09:23.264881   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.264893   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:23.264900   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:23.264962   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:23.298657   73900 cri.go:89] found id: ""
	I0930 21:09:23.298687   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.298695   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:23.298701   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:23.298750   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:23.333787   73900 cri.go:89] found id: ""
	I0930 21:09:23.333816   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.333826   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:23.333836   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:23.333851   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:23.386311   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:23.386347   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:23.400096   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:23.400129   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:23.481724   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:23.481748   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:23.481780   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:23.561080   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:23.561119   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:26.122460   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:26.136409   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:26.136495   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:26.170785   73900 cri.go:89] found id: ""
	I0930 21:09:26.170818   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.170832   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:26.170866   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:26.170945   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:26.205211   73900 cri.go:89] found id: ""
	I0930 21:09:26.205265   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.205275   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:26.205281   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:26.205335   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:26.239242   73900 cri.go:89] found id: ""
	I0930 21:09:26.239276   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.239285   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:26.239291   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:26.239337   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:26.272908   73900 cri.go:89] found id: ""
	I0930 21:09:26.272932   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.272940   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:26.272946   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:26.272993   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:26.311599   73900 cri.go:89] found id: ""
	I0930 21:09:26.311625   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.311632   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:26.311639   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:26.311684   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:26.345719   73900 cri.go:89] found id: ""
	I0930 21:09:26.345746   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.345754   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:26.345760   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:26.345816   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:26.383513   73900 cri.go:89] found id: ""
	I0930 21:09:26.383562   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.383572   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:26.383578   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:26.383637   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:26.418533   73900 cri.go:89] found id: ""
	I0930 21:09:26.418565   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.418574   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:26.418584   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:26.418594   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:26.456635   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:26.456660   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:26.507639   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:26.507686   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:26.521069   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:26.521095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:26.594745   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:26.594768   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:26.594781   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:29.180142   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:29.194730   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:29.194785   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:29.234054   73900 cri.go:89] found id: ""
	I0930 21:09:29.234094   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.234103   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:29.234109   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:29.234156   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:29.280869   73900 cri.go:89] found id: ""
	I0930 21:09:29.280896   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.280907   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:29.280914   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:29.280988   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:29.348376   73900 cri.go:89] found id: ""
	I0930 21:09:29.348406   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.348417   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:29.348424   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:29.348491   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:29.404218   73900 cri.go:89] found id: ""
	I0930 21:09:29.404251   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.404261   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:29.404268   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:29.404344   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:29.449029   73900 cri.go:89] found id: ""
	I0930 21:09:29.449053   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.449061   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:29.449066   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:29.449127   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:29.484917   73900 cri.go:89] found id: ""
	I0930 21:09:29.484939   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.484948   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:29.484954   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:29.485002   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:29.517150   73900 cri.go:89] found id: ""
	I0930 21:09:29.517177   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.517185   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:29.517191   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:29.517259   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:29.550410   73900 cri.go:89] found id: ""
	I0930 21:09:29.550443   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.550452   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:29.550461   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:29.550472   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:29.601757   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:29.601803   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:29.616266   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:29.616299   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:29.686206   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:29.686228   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:29.686240   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:29.761765   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:29.761810   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:32.299199   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:32.315047   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:32.315125   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:32.349784   73900 cri.go:89] found id: ""
	I0930 21:09:32.349810   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.349819   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:32.349824   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:32.349871   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:32.385887   73900 cri.go:89] found id: ""
	I0930 21:09:32.385916   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.385927   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:32.385935   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:32.385994   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:32.421746   73900 cri.go:89] found id: ""
	I0930 21:09:32.421776   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.421789   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:32.421796   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:32.421856   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:32.459361   73900 cri.go:89] found id: ""
	I0930 21:09:32.459391   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.459404   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:32.459411   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:32.459470   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:32.495919   73900 cri.go:89] found id: ""
	I0930 21:09:32.495947   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.495960   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:32.495966   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:32.496025   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:32.533626   73900 cri.go:89] found id: ""
	I0930 21:09:32.533652   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.533663   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:32.533670   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:32.533729   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:32.567577   73900 cri.go:89] found id: ""
	I0930 21:09:32.567610   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.567623   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:32.567630   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:32.567687   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:32.604949   73900 cri.go:89] found id: ""
	I0930 21:09:32.604981   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.604991   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:32.605001   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:32.605014   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:32.656781   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:32.656822   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:32.670116   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:32.670144   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:32.736712   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:32.736736   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:32.736751   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:32.813502   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:32.813556   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:35.354372   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:35.369226   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:35.369303   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:35.408374   73900 cri.go:89] found id: ""
	I0930 21:09:35.408402   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.408414   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:35.408421   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:35.408481   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:35.442390   73900 cri.go:89] found id: ""
	I0930 21:09:35.442432   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.442440   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:35.442445   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:35.442524   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:35.479624   73900 cri.go:89] found id: ""
	I0930 21:09:35.479651   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.479659   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:35.479664   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:35.479711   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:35.518580   73900 cri.go:89] found id: ""
	I0930 21:09:35.518609   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.518617   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:35.518623   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:35.518675   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:35.553547   73900 cri.go:89] found id: ""
	I0930 21:09:35.553582   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.553590   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:35.553604   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:35.553669   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:35.596444   73900 cri.go:89] found id: ""
	I0930 21:09:35.596476   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.596487   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:35.596495   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:35.596583   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:35.634232   73900 cri.go:89] found id: ""
	I0930 21:09:35.634259   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.634268   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:35.634274   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:35.634322   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:35.669637   73900 cri.go:89] found id: ""
	I0930 21:09:35.669672   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.669683   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:35.669694   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:35.669706   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:35.719433   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:35.719469   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:35.733383   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:35.733415   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:35.811860   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:35.811887   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:35.811913   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:35.896206   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:35.896272   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:38.435999   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:38.450091   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:38.450152   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:38.489127   73900 cri.go:89] found id: ""
	I0930 21:09:38.489153   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.489161   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:38.489166   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:38.489221   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:38.520760   73900 cri.go:89] found id: ""
	I0930 21:09:38.520783   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.520792   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:38.520798   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:38.520847   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:38.556279   73900 cri.go:89] found id: ""
	I0930 21:09:38.556306   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.556315   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:38.556319   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:38.556379   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:38.590804   73900 cri.go:89] found id: ""
	I0930 21:09:38.590827   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.590834   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:38.590840   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:38.590906   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:38.624765   73900 cri.go:89] found id: ""
	I0930 21:09:38.624792   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.624800   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:38.624805   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:38.624857   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:38.660587   73900 cri.go:89] found id: ""
	I0930 21:09:38.660614   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.660625   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:38.660635   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:38.660702   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:38.693314   73900 cri.go:89] found id: ""
	I0930 21:09:38.693352   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.693362   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:38.693371   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:38.693441   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:38.729163   73900 cri.go:89] found id: ""
	I0930 21:09:38.729197   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.729212   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:38.729223   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:38.729235   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:38.780787   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:38.780828   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:38.794983   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:38.795009   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:38.861886   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:38.861911   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:38.861926   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:38.936958   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:38.936994   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:41.479891   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:41.493041   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:41.493106   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:41.528855   73900 cri.go:89] found id: ""
	I0930 21:09:41.528889   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.528900   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:41.528906   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:41.528967   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:41.565193   73900 cri.go:89] found id: ""
	I0930 21:09:41.565216   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.565224   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:41.565230   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:41.565289   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:41.599503   73900 cri.go:89] found id: ""
	I0930 21:09:41.599538   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.599547   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:41.599553   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:41.599611   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:41.636623   73900 cri.go:89] found id: ""
	I0930 21:09:41.636651   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.636663   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:41.636671   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:41.636728   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:41.671727   73900 cri.go:89] found id: ""
	I0930 21:09:41.671753   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.671760   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:41.671765   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:41.671819   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:41.705499   73900 cri.go:89] found id: ""
	I0930 21:09:41.705533   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.705543   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:41.705549   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:41.705602   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:41.738262   73900 cri.go:89] found id: ""
	I0930 21:09:41.738285   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.738292   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:41.738297   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:41.738351   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:41.774232   73900 cri.go:89] found id: ""
	I0930 21:09:41.774261   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.774269   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:41.774277   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:41.774288   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:41.826060   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:41.826093   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:41.839308   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:41.839335   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:41.908599   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:41.908626   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:41.908640   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:41.986337   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:41.986375   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:44.527015   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:44.539973   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:44.540036   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:44.575985   73900 cri.go:89] found id: ""
	I0930 21:09:44.576012   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.576021   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:44.576027   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:44.576076   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:44.612693   73900 cri.go:89] found id: ""
	I0930 21:09:44.612724   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.612736   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:44.612743   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:44.612809   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:44.646515   73900 cri.go:89] found id: ""
	I0930 21:09:44.646544   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.646555   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:44.646562   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:44.646623   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:44.679980   73900 cri.go:89] found id: ""
	I0930 21:09:44.680011   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.680022   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:44.680030   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:44.680089   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:44.714078   73900 cri.go:89] found id: ""
	I0930 21:09:44.714117   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.714128   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:44.714135   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:44.714193   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:44.748491   73900 cri.go:89] found id: ""
	I0930 21:09:44.748521   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.748531   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:44.748539   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:44.748618   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:44.780902   73900 cri.go:89] found id: ""
	I0930 21:09:44.780936   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.780947   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:44.780955   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:44.781013   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:44.817944   73900 cri.go:89] found id: ""
	I0930 21:09:44.817999   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.818011   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:44.818022   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:44.818038   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:44.873896   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:44.873926   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:44.887829   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:44.887858   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:44.957562   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:44.957584   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:44.957598   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:45.037892   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:45.037934   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:47.583013   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:47.595799   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:47.595870   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:47.630348   73900 cri.go:89] found id: ""
	I0930 21:09:47.630377   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.630385   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:47.630391   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:47.630444   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:47.663416   73900 cri.go:89] found id: ""
	I0930 21:09:47.663440   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.663448   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:47.663454   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:47.663500   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:47.700145   73900 cri.go:89] found id: ""
	I0930 21:09:47.700174   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.700184   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:47.700192   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:47.700253   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:47.732539   73900 cri.go:89] found id: ""
	I0930 21:09:47.732567   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.732577   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:47.732583   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:47.732637   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:47.764470   73900 cri.go:89] found id: ""
	I0930 21:09:47.764493   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.764501   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:47.764507   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:47.764553   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:47.802365   73900 cri.go:89] found id: ""
	I0930 21:09:47.802393   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.802403   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:47.802411   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:47.802468   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:47.836504   73900 cri.go:89] found id: ""
	I0930 21:09:47.836531   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.836542   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:47.836549   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:47.836611   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:47.870315   73900 cri.go:89] found id: ""
	I0930 21:09:47.870338   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.870351   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:47.870359   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:47.870370   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:47.919974   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:47.920011   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:47.934157   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:47.934190   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:48.003046   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:48.003072   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:48.003085   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:48.084947   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:48.084985   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:50.624791   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:50.638118   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:50.638196   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:50.672448   73900 cri.go:89] found id: ""
	I0930 21:09:50.672479   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.672488   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:50.672503   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:50.672557   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:50.706057   73900 cri.go:89] found id: ""
	I0930 21:09:50.706080   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.706088   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:50.706093   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:50.706142   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:50.738101   73900 cri.go:89] found id: ""
	I0930 21:09:50.738126   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.738134   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:50.738140   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:50.738207   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:50.772483   73900 cri.go:89] found id: ""
	I0930 21:09:50.772508   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.772516   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:50.772522   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:50.772581   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:50.805169   73900 cri.go:89] found id: ""
	I0930 21:09:50.805200   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.805211   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:50.805220   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:50.805276   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:50.842144   73900 cri.go:89] found id: ""
	I0930 21:09:50.842168   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.842176   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:50.842182   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:50.842236   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:50.875512   73900 cri.go:89] found id: ""
	I0930 21:09:50.875563   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.875575   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:50.875582   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:50.875643   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:50.909549   73900 cri.go:89] found id: ""
	I0930 21:09:50.909580   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.909591   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:50.909599   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:50.909610   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:50.962064   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:50.962098   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:50.976979   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:50.977012   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:51.053784   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:51.053815   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:51.053833   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:51.130939   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:51.130975   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:53.667675   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:53.680381   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:53.680449   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:53.712759   73900 cri.go:89] found id: ""
	I0930 21:09:53.712791   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.712800   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:53.712807   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:53.712871   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:53.748958   73900 cri.go:89] found id: ""
	I0930 21:09:53.748990   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.749002   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:53.749009   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:53.749078   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:53.783243   73900 cri.go:89] found id: ""
	I0930 21:09:53.783272   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.783282   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:53.783289   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:53.783382   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:53.823848   73900 cri.go:89] found id: ""
	I0930 21:09:53.823875   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.823883   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:53.823890   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:53.823941   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:53.865607   73900 cri.go:89] found id: ""
	I0930 21:09:53.865635   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.865643   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:53.865648   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:53.865693   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:53.900888   73900 cri.go:89] found id: ""
	I0930 21:09:53.900912   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.900920   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:53.900926   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:53.900985   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:53.933688   73900 cri.go:89] found id: ""
	I0930 21:09:53.933717   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.933728   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:53.933736   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:53.933798   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:53.968702   73900 cri.go:89] found id: ""
	I0930 21:09:53.968731   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.968740   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:53.968749   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:53.968760   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:54.021588   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:54.021626   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:54.036681   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:54.036719   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:54.112189   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:54.112209   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:54.112223   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:54.185028   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:54.185085   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:56.725146   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:56.739358   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:56.739421   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:56.779278   73900 cri.go:89] found id: ""
	I0930 21:09:56.779313   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.779322   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:56.779329   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:56.779377   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:56.815972   73900 cri.go:89] found id: ""
	I0930 21:09:56.816000   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.816011   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:56.816018   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:56.816084   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:56.849425   73900 cri.go:89] found id: ""
	I0930 21:09:56.849458   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.849471   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:56.849478   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:56.849542   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:56.885483   73900 cri.go:89] found id: ""
	I0930 21:09:56.885510   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.885520   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:56.885527   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:56.885586   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:56.917832   73900 cri.go:89] found id: ""
	I0930 21:09:56.917862   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.917872   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:56.917879   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:56.917932   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:56.951613   73900 cri.go:89] found id: ""
	I0930 21:09:56.951643   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.951654   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:56.951664   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:56.951726   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:56.987577   73900 cri.go:89] found id: ""
	I0930 21:09:56.987608   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.987620   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:56.987628   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:56.987691   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:57.024871   73900 cri.go:89] found id: ""
	I0930 21:09:57.024903   73900 logs.go:276] 0 containers: []
	W0930 21:09:57.024912   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:57.024920   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:57.024935   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:57.038279   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:57.038309   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:57.111955   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:57.111985   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:57.111998   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:57.193719   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:57.193755   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:57.230058   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:57.230085   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:59.780762   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:59.794210   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:59.794277   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:59.828258   73900 cri.go:89] found id: ""
	I0930 21:09:59.828287   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.828298   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:59.828306   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:59.828369   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:59.868295   73900 cri.go:89] found id: ""
	I0930 21:09:59.868331   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.868353   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:59.868363   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:59.868437   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:59.900298   73900 cri.go:89] found id: ""
	I0930 21:09:59.900326   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.900337   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:59.900343   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:59.900403   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:59.934081   73900 cri.go:89] found id: ""
	I0930 21:09:59.934108   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.934120   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:59.934127   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:59.934183   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:59.970564   73900 cri.go:89] found id: ""
	I0930 21:09:59.970592   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.970600   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:59.970605   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:59.970652   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:00.006215   73900 cri.go:89] found id: ""
	I0930 21:10:00.006249   73900 logs.go:276] 0 containers: []
	W0930 21:10:00.006259   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:00.006270   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:00.006348   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:00.040106   73900 cri.go:89] found id: ""
	I0930 21:10:00.040135   73900 logs.go:276] 0 containers: []
	W0930 21:10:00.040144   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:00.040150   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:00.040202   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:00.079310   73900 cri.go:89] found id: ""
	I0930 21:10:00.079345   73900 logs.go:276] 0 containers: []
	W0930 21:10:00.079354   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:00.079365   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:00.079378   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:00.161243   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:00.161284   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:00.198911   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:00.198941   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:00.247697   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:00.247735   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:00.260905   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:00.260933   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:00.332502   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:02.833204   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:02.846807   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:02.846893   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:02.882386   73900 cri.go:89] found id: ""
	I0930 21:10:02.882420   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.882431   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:02.882439   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:02.882504   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:02.918589   73900 cri.go:89] found id: ""
	I0930 21:10:02.918617   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.918633   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:02.918642   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:02.918722   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:02.952758   73900 cri.go:89] found id: ""
	I0930 21:10:02.952789   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.952799   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:02.952806   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:02.952871   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:02.991406   73900 cri.go:89] found id: ""
	I0930 21:10:02.991439   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.991448   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:02.991454   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:02.991511   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:03.030075   73900 cri.go:89] found id: ""
	I0930 21:10:03.030104   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.030112   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:03.030121   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:03.030172   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:03.063630   73900 cri.go:89] found id: ""
	I0930 21:10:03.063654   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.063662   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:03.063668   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:03.063718   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:03.098607   73900 cri.go:89] found id: ""
	I0930 21:10:03.098636   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.098644   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:03.098649   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:03.098702   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:03.133161   73900 cri.go:89] found id: ""
	I0930 21:10:03.133189   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.133198   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:03.133206   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:03.133217   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:03.211046   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:03.211083   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:03.252585   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:03.252615   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:03.307019   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:03.307049   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:03.320781   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:03.320811   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:03.408645   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:05.909638   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:05.922674   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:05.922744   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:05.955264   73900 cri.go:89] found id: ""
	I0930 21:10:05.955305   73900 logs.go:276] 0 containers: []
	W0930 21:10:05.955318   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:05.955326   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:05.955378   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:05.991055   73900 cri.go:89] found id: ""
	I0930 21:10:05.991100   73900 logs.go:276] 0 containers: []
	W0930 21:10:05.991122   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:05.991130   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:05.991194   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:06.025725   73900 cri.go:89] found id: ""
	I0930 21:10:06.025755   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.025766   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:06.025773   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:06.025832   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:06.067700   73900 cri.go:89] found id: ""
	I0930 21:10:06.067726   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.067736   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:06.067743   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:06.067801   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:06.102729   73900 cri.go:89] found id: ""
	I0930 21:10:06.102760   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.102771   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:06.102784   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:06.102845   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:06.137120   73900 cri.go:89] found id: ""
	I0930 21:10:06.137148   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.137159   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:06.137164   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:06.137215   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:06.169985   73900 cri.go:89] found id: ""
	I0930 21:10:06.170014   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.170023   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:06.170029   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:06.170082   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:06.206928   73900 cri.go:89] found id: ""
	I0930 21:10:06.206951   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.206959   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:06.206967   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:06.206977   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:06.258835   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:06.258870   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:06.273527   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:06.273556   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:06.351335   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:06.351359   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:06.351373   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:06.423412   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:06.423450   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:08.968986   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:08.984075   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:08.984139   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:09.016815   73900 cri.go:89] found id: ""
	I0930 21:10:09.016847   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.016858   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:09.016864   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:09.016928   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:09.051603   73900 cri.go:89] found id: ""
	I0930 21:10:09.051626   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.051633   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:09.051639   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:09.051693   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:09.088820   73900 cri.go:89] found id: ""
	I0930 21:10:09.088856   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.088870   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:09.088884   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:09.088949   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:09.124032   73900 cri.go:89] found id: ""
	I0930 21:10:09.124064   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.124076   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:09.124083   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:09.124140   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:09.177129   73900 cri.go:89] found id: ""
	I0930 21:10:09.177161   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.177172   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:09.177178   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:09.177228   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:09.211490   73900 cri.go:89] found id: ""
	I0930 21:10:09.211513   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.211521   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:09.211540   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:09.211605   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:09.252187   73900 cri.go:89] found id: ""
	I0930 21:10:09.252211   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.252221   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:09.252229   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:09.252289   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:09.286970   73900 cri.go:89] found id: ""
	I0930 21:10:09.287004   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.287012   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:09.287020   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:09.287031   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:09.369387   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:09.369410   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:09.369422   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:09.450685   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:09.450733   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:09.491302   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:09.491331   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:09.540183   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:09.540219   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:12.054793   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:12.068635   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:12.068717   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:12.103118   73900 cri.go:89] found id: ""
	I0930 21:10:12.103140   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.103149   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:12.103154   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:12.103219   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:12.137992   73900 cri.go:89] found id: ""
	I0930 21:10:12.138020   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.138031   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:12.138040   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:12.138103   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:12.175559   73900 cri.go:89] found id: ""
	I0930 21:10:12.175591   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.175609   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:12.175616   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:12.175678   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:12.209630   73900 cri.go:89] found id: ""
	I0930 21:10:12.209655   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.209666   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:12.209672   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:12.209735   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:12.245844   73900 cri.go:89] found id: ""
	I0930 21:10:12.245879   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.245891   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:12.245901   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:12.245961   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:12.280385   73900 cri.go:89] found id: ""
	I0930 21:10:12.280412   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.280420   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:12.280426   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:12.280484   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:12.315424   73900 cri.go:89] found id: ""
	I0930 21:10:12.315453   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.315463   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:12.315473   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:12.315566   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:12.349223   73900 cri.go:89] found id: ""
	I0930 21:10:12.349251   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.349270   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:12.349279   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:12.349291   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:12.362360   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:12.362397   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:12.432060   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:12.432084   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:12.432101   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:12.506059   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:12.506096   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:12.541319   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:12.541348   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:15.098852   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:15.111919   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:15.112001   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:15.149174   73900 cri.go:89] found id: ""
	I0930 21:10:15.149206   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.149216   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:15.149223   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:15.149286   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:15.187283   73900 cri.go:89] found id: ""
	I0930 21:10:15.187316   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.187326   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:15.187333   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:15.187392   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:15.223896   73900 cri.go:89] found id: ""
	I0930 21:10:15.223922   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.223933   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:15.223940   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:15.224000   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:15.260530   73900 cri.go:89] found id: ""
	I0930 21:10:15.260559   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.260567   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:15.260573   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:15.260634   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:15.296319   73900 cri.go:89] found id: ""
	I0930 21:10:15.296346   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.296357   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:15.296363   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:15.296425   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:15.333785   73900 cri.go:89] found id: ""
	I0930 21:10:15.333830   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.333843   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:15.333856   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:15.333932   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:15.368235   73900 cri.go:89] found id: ""
	I0930 21:10:15.368268   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.368280   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:15.368288   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:15.368354   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:15.408155   73900 cri.go:89] found id: ""
	I0930 21:10:15.408184   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.408192   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:15.408200   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:15.408210   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:15.462018   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:15.462058   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:15.477345   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:15.477376   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:15.558398   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:15.558423   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:15.558442   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:15.662269   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:15.662311   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:18.199477   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:18.213235   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:18.213320   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:18.250379   73900 cri.go:89] found id: ""
	I0930 21:10:18.250409   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.250418   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:18.250424   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:18.250515   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:18.283381   73900 cri.go:89] found id: ""
	I0930 21:10:18.283407   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.283416   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:18.283422   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:18.283482   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:18.321601   73900 cri.go:89] found id: ""
	I0930 21:10:18.321635   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.321646   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:18.321659   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:18.321720   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:18.354210   73900 cri.go:89] found id: ""
	I0930 21:10:18.354242   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.354254   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:18.354262   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:18.354330   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:18.391982   73900 cri.go:89] found id: ""
	I0930 21:10:18.392019   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.392029   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:18.392035   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:18.392150   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:18.428826   73900 cri.go:89] found id: ""
	I0930 21:10:18.428851   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.428862   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:18.428870   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:18.428927   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:18.465841   73900 cri.go:89] found id: ""
	I0930 21:10:18.465868   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.465878   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:18.465887   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:18.465934   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:18.502747   73900 cri.go:89] found id: ""
	I0930 21:10:18.502775   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.502783   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:18.502793   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:18.502807   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:18.558025   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:18.558064   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:18.572356   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:18.572383   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:18.642994   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:18.643020   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:18.643033   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:18.722804   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:18.722845   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:21.262790   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:21.276427   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:21.276510   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:21.323245   73900 cri.go:89] found id: ""
	I0930 21:10:21.323274   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.323284   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:21.323291   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:21.323377   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:21.381684   73900 cri.go:89] found id: ""
	I0930 21:10:21.381725   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.381736   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:21.381744   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:21.381813   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:21.428818   73900 cri.go:89] found id: ""
	I0930 21:10:21.428841   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.428849   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:21.428854   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:21.428901   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:21.462906   73900 cri.go:89] found id: ""
	I0930 21:10:21.462935   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.462944   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:21.462949   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:21.462995   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:21.502417   73900 cri.go:89] found id: ""
	I0930 21:10:21.502452   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.502464   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:21.502471   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:21.502535   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:21.540004   73900 cri.go:89] found id: ""
	I0930 21:10:21.540037   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.540048   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:21.540056   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:21.540105   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:21.574898   73900 cri.go:89] found id: ""
	I0930 21:10:21.574929   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.574937   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:21.574942   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:21.574999   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:21.609438   73900 cri.go:89] found id: ""
	I0930 21:10:21.609465   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.609473   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:21.609496   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:21.609524   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:21.646651   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:21.646679   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:21.702406   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:21.702451   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:21.716226   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:21.716260   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:21.790089   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:21.790115   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:21.790128   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:24.368291   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:24.381517   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:24.381588   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:24.416535   73900 cri.go:89] found id: ""
	I0930 21:10:24.416559   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.416570   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:24.416577   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:24.416635   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:24.454444   73900 cri.go:89] found id: ""
	I0930 21:10:24.454472   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.454480   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:24.454485   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:24.454537   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:24.492334   73900 cri.go:89] found id: ""
	I0930 21:10:24.492359   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.492367   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:24.492373   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:24.492419   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:24.527590   73900 cri.go:89] found id: ""
	I0930 21:10:24.527622   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.527633   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:24.527642   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:24.527708   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:24.564819   73900 cri.go:89] found id: ""
	I0930 21:10:24.564844   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.564853   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:24.564858   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:24.564915   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:24.599367   73900 cri.go:89] found id: ""
	I0930 21:10:24.599390   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.599398   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:24.599403   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:24.599450   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:24.636738   73900 cri.go:89] found id: ""
	I0930 21:10:24.636767   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.636778   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:24.636785   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:24.636845   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:24.669607   73900 cri.go:89] found id: ""
	I0930 21:10:24.669640   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.669651   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:24.669663   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:24.669680   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:24.722662   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:24.722696   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:24.736150   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:24.736179   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:24.812022   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:24.812053   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:24.812069   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:24.891291   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:24.891330   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:27.430595   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:27.443990   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:27.444054   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:27.480204   73900 cri.go:89] found id: ""
	I0930 21:10:27.480230   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.480237   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:27.480243   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:27.480297   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:27.516959   73900 cri.go:89] found id: ""
	I0930 21:10:27.516982   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.516989   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:27.516995   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:27.517041   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:27.549717   73900 cri.go:89] found id: ""
	I0930 21:10:27.549745   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.549758   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:27.549769   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:27.549821   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:27.584512   73900 cri.go:89] found id: ""
	I0930 21:10:27.584539   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.584549   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:27.584560   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:27.584619   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:27.623551   73900 cri.go:89] found id: ""
	I0930 21:10:27.623586   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.623603   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:27.623612   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:27.623679   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:27.662453   73900 cri.go:89] found id: ""
	I0930 21:10:27.662478   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.662486   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:27.662493   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:27.662554   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:27.695665   73900 cri.go:89] found id: ""
	I0930 21:10:27.695693   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.695701   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:27.695707   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:27.695765   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:27.729090   73900 cri.go:89] found id: ""
	I0930 21:10:27.729129   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.729137   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:27.729146   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:27.729155   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:27.816186   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:27.816230   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:27.854451   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:27.854485   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:27.905674   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:27.905709   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:27.918889   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:27.918917   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:27.989739   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:30.490514   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:30.502735   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:30.502810   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:30.535874   73900 cri.go:89] found id: ""
	I0930 21:10:30.535902   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.535914   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:30.535922   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:30.535989   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:30.570603   73900 cri.go:89] found id: ""
	I0930 21:10:30.570627   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.570634   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:30.570643   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:30.570689   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:30.605225   73900 cri.go:89] found id: ""
	I0930 21:10:30.605255   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.605266   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:30.605273   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:30.605333   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:30.640810   73900 cri.go:89] found id: ""
	I0930 21:10:30.640839   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.640849   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:30.640857   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:30.640914   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:30.673101   73900 cri.go:89] found id: ""
	I0930 21:10:30.673129   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.673137   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:30.673142   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:30.673189   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:30.704332   73900 cri.go:89] found id: ""
	I0930 21:10:30.704356   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.704366   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:30.704373   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:30.704440   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:30.738463   73900 cri.go:89] found id: ""
	I0930 21:10:30.738494   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.738506   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:30.738516   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:30.738579   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:30.772115   73900 cri.go:89] found id: ""
	I0930 21:10:30.772153   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.772164   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:30.772175   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:30.772193   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:30.850683   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:30.850707   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:30.850720   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:30.930674   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:30.930718   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:30.975781   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:30.975819   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:31.030566   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:31.030613   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:33.544354   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:33.557613   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:33.557692   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:33.594372   73900 cri.go:89] found id: ""
	I0930 21:10:33.594394   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.594401   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:33.594406   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:33.594455   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:33.632026   73900 cri.go:89] found id: ""
	I0930 21:10:33.632048   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.632056   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:33.632061   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:33.632113   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:33.666168   73900 cri.go:89] found id: ""
	I0930 21:10:33.666201   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.666213   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:33.666219   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:33.666269   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:33.697772   73900 cri.go:89] found id: ""
	I0930 21:10:33.697801   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.697810   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:33.697816   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:33.697864   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:33.732821   73900 cri.go:89] found id: ""
	I0930 21:10:33.732851   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.732862   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:33.732869   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:33.732952   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:33.770646   73900 cri.go:89] found id: ""
	I0930 21:10:33.770682   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.770693   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:33.770701   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:33.770756   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:33.804803   73900 cri.go:89] found id: ""
	I0930 21:10:33.804831   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.804842   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:33.804848   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:33.804921   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:33.838455   73900 cri.go:89] found id: ""
	I0930 21:10:33.838484   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.838495   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:33.838505   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:33.838523   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:33.879785   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:33.879812   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:33.934586   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:33.934623   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:33.948250   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:33.948293   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:34.023021   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:34.023054   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:34.023069   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:36.604173   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:36.616668   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:36.616735   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:36.650716   73900 cri.go:89] found id: ""
	I0930 21:10:36.650748   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.650757   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:36.650767   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:36.650833   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:36.685705   73900 cri.go:89] found id: ""
	I0930 21:10:36.685739   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.685751   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:36.685758   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:36.685819   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:36.719895   73900 cri.go:89] found id: ""
	I0930 21:10:36.719922   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.719932   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:36.719939   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:36.720006   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:36.753123   73900 cri.go:89] found id: ""
	I0930 21:10:36.753148   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.753159   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:36.753166   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:36.753231   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:36.790023   73900 cri.go:89] found id: ""
	I0930 21:10:36.790054   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.790066   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:36.790073   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:36.790135   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:36.825280   73900 cri.go:89] found id: ""
	I0930 21:10:36.825314   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.825324   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:36.825343   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:36.825411   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:36.859028   73900 cri.go:89] found id: ""
	I0930 21:10:36.859053   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.859060   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:36.859066   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:36.859125   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:36.894952   73900 cri.go:89] found id: ""
	I0930 21:10:36.894980   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.894988   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:36.894996   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:36.895010   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:36.968214   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:36.968241   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:36.968256   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:37.047866   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:37.047903   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:37.088671   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:37.088705   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:37.144014   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:37.144058   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:39.657874   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:39.671042   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:39.671100   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:39.706210   73900 cri.go:89] found id: ""
	I0930 21:10:39.706235   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.706243   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:39.706248   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:39.706295   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:39.743194   73900 cri.go:89] found id: ""
	I0930 21:10:39.743218   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.743226   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:39.743232   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:39.743280   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:39.780681   73900 cri.go:89] found id: ""
	I0930 21:10:39.780707   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.780715   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:39.780720   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:39.780774   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:39.815841   73900 cri.go:89] found id: ""
	I0930 21:10:39.815865   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.815874   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:39.815879   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:39.815933   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:39.849497   73900 cri.go:89] found id: ""
	I0930 21:10:39.849523   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.849534   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:39.849541   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:39.849603   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:39.883476   73900 cri.go:89] found id: ""
	I0930 21:10:39.883507   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.883519   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:39.883562   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:39.883633   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:39.918300   73900 cri.go:89] found id: ""
	I0930 21:10:39.918329   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.918338   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:39.918343   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:39.918392   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:39.955751   73900 cri.go:89] found id: ""
	I0930 21:10:39.955780   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.955788   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:39.955795   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:39.955807   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:40.010994   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:40.011035   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:40.025992   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:40.026022   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:40.097709   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:40.097731   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:40.097748   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:40.176790   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:40.176824   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:42.713838   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:42.729806   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:42.729885   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:42.765449   73900 cri.go:89] found id: ""
	I0930 21:10:42.765483   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.765491   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:42.765498   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:42.765555   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:42.802556   73900 cri.go:89] found id: ""
	I0930 21:10:42.802584   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.802604   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:42.802612   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:42.802693   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:42.836537   73900 cri.go:89] found id: ""
	I0930 21:10:42.836568   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.836585   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:42.836598   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:42.836662   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:42.870475   73900 cri.go:89] found id: ""
	I0930 21:10:42.870503   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.870511   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:42.870526   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:42.870589   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:42.907061   73900 cri.go:89] found id: ""
	I0930 21:10:42.907090   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.907098   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:42.907103   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:42.907153   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:42.941607   73900 cri.go:89] found id: ""
	I0930 21:10:42.941632   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.941640   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:42.941646   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:42.941701   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:42.977073   73900 cri.go:89] found id: ""
	I0930 21:10:42.977097   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.977105   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:42.977111   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:42.977159   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:43.010838   73900 cri.go:89] found id: ""
	I0930 21:10:43.010859   73900 logs.go:276] 0 containers: []
	W0930 21:10:43.010867   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:43.010875   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:43.010886   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:43.061264   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:43.061299   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:43.075917   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:43.075950   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:43.137088   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:43.137111   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:43.137126   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:43.219393   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:43.219440   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:45.761752   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:45.775864   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:45.775942   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:45.810693   73900 cri.go:89] found id: ""
	I0930 21:10:45.810724   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.810734   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:45.810740   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:45.810797   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:45.848360   73900 cri.go:89] found id: ""
	I0930 21:10:45.848399   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.848410   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:45.848418   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:45.848475   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:45.885504   73900 cri.go:89] found id: ""
	I0930 21:10:45.885550   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.885560   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:45.885565   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:45.885616   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:45.919747   73900 cri.go:89] found id: ""
	I0930 21:10:45.919776   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.919784   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:45.919789   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:45.919843   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:45.953787   73900 cri.go:89] found id: ""
	I0930 21:10:45.953820   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.953831   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:45.953839   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:45.953893   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:45.990145   73900 cri.go:89] found id: ""
	I0930 21:10:45.990174   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.990184   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:45.990192   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:45.990253   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:46.023359   73900 cri.go:89] found id: ""
	I0930 21:10:46.023383   73900 logs.go:276] 0 containers: []
	W0930 21:10:46.023391   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:46.023396   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:46.023447   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:46.057460   73900 cri.go:89] found id: ""
	I0930 21:10:46.057493   73900 logs.go:276] 0 containers: []
	W0930 21:10:46.057504   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:46.057514   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:46.057533   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:46.097082   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:46.097109   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:46.147921   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:46.147960   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:46.161204   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:46.161232   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:46.224308   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:46.224336   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:46.224351   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:48.805668   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:48.818569   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:48.818663   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:48.856783   73900 cri.go:89] found id: ""
	I0930 21:10:48.856815   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.856827   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:48.856834   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:48.856896   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:48.889185   73900 cri.go:89] found id: ""
	I0930 21:10:48.889217   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.889229   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:48.889236   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:48.889306   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:48.922013   73900 cri.go:89] found id: ""
	I0930 21:10:48.922041   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.922050   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:48.922055   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:48.922107   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:48.956818   73900 cri.go:89] found id: ""
	I0930 21:10:48.956848   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.956858   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:48.956866   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:48.956929   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:48.994942   73900 cri.go:89] found id: ""
	I0930 21:10:48.994975   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.994985   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:48.994991   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:48.995052   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:49.031448   73900 cri.go:89] found id: ""
	I0930 21:10:49.031479   73900 logs.go:276] 0 containers: []
	W0930 21:10:49.031491   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:49.031500   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:49.031583   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:49.066570   73900 cri.go:89] found id: ""
	I0930 21:10:49.066600   73900 logs.go:276] 0 containers: []
	W0930 21:10:49.066608   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:49.066613   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:49.066658   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:49.100952   73900 cri.go:89] found id: ""
	I0930 21:10:49.100981   73900 logs.go:276] 0 containers: []
	W0930 21:10:49.100992   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:49.101000   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:49.101010   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:49.176423   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:49.176458   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:49.212358   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:49.212387   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:49.263177   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:49.263227   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:49.275940   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:49.275969   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:49.346915   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:51.847761   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:51.860571   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:51.860646   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:51.894863   73900 cri.go:89] found id: ""
	I0930 21:10:51.894896   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.894906   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:51.894914   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:51.894978   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:51.927977   73900 cri.go:89] found id: ""
	I0930 21:10:51.928007   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.928018   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:51.928025   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:51.928083   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:51.962894   73900 cri.go:89] found id: ""
	I0930 21:10:51.962924   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.962933   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:51.962940   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:51.962999   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:51.998453   73900 cri.go:89] found id: ""
	I0930 21:10:51.998482   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.998493   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:51.998500   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:51.998562   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:52.033039   73900 cri.go:89] found id: ""
	I0930 21:10:52.033066   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.033075   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:52.033080   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:52.033139   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:52.067222   73900 cri.go:89] found id: ""
	I0930 21:10:52.067254   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.067267   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:52.067274   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:52.067341   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:52.102414   73900 cri.go:89] found id: ""
	I0930 21:10:52.102439   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.102448   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:52.102453   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:52.102498   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:52.135175   73900 cri.go:89] found id: ""
	I0930 21:10:52.135204   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.135214   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:52.135225   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:52.135239   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:52.185736   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:52.185779   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:52.198756   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:52.198792   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:52.264816   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:52.264847   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:52.264859   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:52.347189   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:52.347229   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:54.887502   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:54.900067   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:54.900153   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:54.939214   73900 cri.go:89] found id: ""
	I0930 21:10:54.939241   73900 logs.go:276] 0 containers: []
	W0930 21:10:54.939249   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:54.939259   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:54.939313   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:54.973451   73900 cri.go:89] found id: ""
	I0930 21:10:54.973475   73900 logs.go:276] 0 containers: []
	W0930 21:10:54.973483   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:54.973488   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:54.973541   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:55.007815   73900 cri.go:89] found id: ""
	I0930 21:10:55.007841   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.007850   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:55.007855   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:55.007914   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:55.040861   73900 cri.go:89] found id: ""
	I0930 21:10:55.040891   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.040899   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:55.040905   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:55.040957   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:55.076053   73900 cri.go:89] found id: ""
	I0930 21:10:55.076086   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.076098   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:55.076111   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:55.076172   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:55.108768   73900 cri.go:89] found id: ""
	I0930 21:10:55.108797   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.108807   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:55.108814   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:55.108879   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:55.155283   73900 cri.go:89] found id: ""
	I0930 21:10:55.155316   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.155331   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:55.155338   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:55.155398   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:55.189370   73900 cri.go:89] found id: ""
	I0930 21:10:55.189399   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.189408   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:55.189416   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:55.189432   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:55.243067   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:55.243101   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:55.257021   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:55.257051   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:55.329381   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:55.329408   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:55.329423   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:55.405691   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:55.405762   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:57.957380   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:57.971160   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:57.971245   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:58.004401   73900 cri.go:89] found id: ""
	I0930 21:10:58.004446   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.004457   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:58.004465   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:58.004524   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:58.038954   73900 cri.go:89] found id: ""
	I0930 21:10:58.038978   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.038986   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:58.038991   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:58.039036   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:58.072801   73900 cri.go:89] found id: ""
	I0930 21:10:58.072830   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.072842   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:58.072849   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:58.072909   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:58.104908   73900 cri.go:89] found id: ""
	I0930 21:10:58.104936   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.104946   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:58.104953   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:58.105014   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:58.139693   73900 cri.go:89] found id: ""
	I0930 21:10:58.139725   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.139735   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:58.139741   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:58.139795   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:58.174149   73900 cri.go:89] found id: ""
	I0930 21:10:58.174180   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.174192   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:58.174199   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:58.174275   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:58.206067   73900 cri.go:89] found id: ""
	I0930 21:10:58.206094   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.206105   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:58.206112   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:58.206167   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:58.240613   73900 cri.go:89] found id: ""
	I0930 21:10:58.240645   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.240653   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:58.240661   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:58.240674   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:58.306061   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:58.306086   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:58.306100   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:58.386030   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:58.386073   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:58.425526   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:58.425562   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:58.483364   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:58.483409   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:00.998086   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:01.011934   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:01.012015   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:01.047923   73900 cri.go:89] found id: ""
	I0930 21:11:01.047951   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.047960   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:01.047966   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:01.048024   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:01.082126   73900 cri.go:89] found id: ""
	I0930 21:11:01.082159   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.082170   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:01.082176   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:01.082224   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:01.117746   73900 cri.go:89] found id: ""
	I0930 21:11:01.117775   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.117787   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:01.117794   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:01.117853   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:01.153034   73900 cri.go:89] found id: ""
	I0930 21:11:01.153059   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.153067   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:01.153072   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:01.153128   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:01.188102   73900 cri.go:89] found id: ""
	I0930 21:11:01.188125   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.188133   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:01.188139   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:01.188193   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:01.222120   73900 cri.go:89] found id: ""
	I0930 21:11:01.222147   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.222155   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:01.222161   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:01.222215   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:01.258899   73900 cri.go:89] found id: ""
	I0930 21:11:01.258929   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.258941   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:01.258949   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:01.259008   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:01.295473   73900 cri.go:89] found id: ""
	I0930 21:11:01.295504   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.295512   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:01.295521   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:01.295551   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:01.349134   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:01.349181   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:01.363113   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:01.363147   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:01.436589   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:01.436609   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:01.436622   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:01.516384   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:01.516420   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:04.075114   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:04.089300   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:04.089375   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:04.124385   73900 cri.go:89] found id: ""
	I0930 21:11:04.124411   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.124419   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:04.124425   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:04.124491   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:04.158326   73900 cri.go:89] found id: ""
	I0930 21:11:04.158359   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.158367   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:04.158372   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:04.158419   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:04.193477   73900 cri.go:89] found id: ""
	I0930 21:11:04.193507   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.193516   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:04.193521   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:04.193577   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:04.231697   73900 cri.go:89] found id: ""
	I0930 21:11:04.231723   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.231731   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:04.231737   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:04.231805   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:04.265879   73900 cri.go:89] found id: ""
	I0930 21:11:04.265903   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.265910   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:04.265915   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:04.265960   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:04.301382   73900 cri.go:89] found id: ""
	I0930 21:11:04.301421   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.301432   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:04.301440   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:04.301505   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:04.337496   73900 cri.go:89] found id: ""
	I0930 21:11:04.337521   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.337529   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:04.337534   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:04.337584   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:04.372631   73900 cri.go:89] found id: ""
	I0930 21:11:04.372665   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.372677   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:04.372700   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:04.372715   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:04.385279   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:04.385311   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:04.456700   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:04.456721   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:04.456732   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:04.537892   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:04.537933   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:04.574919   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:04.574947   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:07.128733   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:07.142625   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:07.142687   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:07.177450   73900 cri.go:89] found id: ""
	I0930 21:11:07.177475   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.177483   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:07.177488   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:07.177536   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:07.210158   73900 cri.go:89] found id: ""
	I0930 21:11:07.210184   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.210192   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:07.210197   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:07.210256   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:07.242623   73900 cri.go:89] found id: ""
	I0930 21:11:07.242648   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.242656   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:07.242661   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:07.242705   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:07.277779   73900 cri.go:89] found id: ""
	I0930 21:11:07.277810   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.277821   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:07.277827   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:07.277881   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:07.316232   73900 cri.go:89] found id: ""
	I0930 21:11:07.316257   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.316263   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:07.316269   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:07.316326   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:07.360277   73900 cri.go:89] found id: ""
	I0930 21:11:07.360311   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.360322   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:07.360329   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:07.360391   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:07.412146   73900 cri.go:89] found id: ""
	I0930 21:11:07.412171   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.412181   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:07.412187   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:07.412247   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:07.447179   73900 cri.go:89] found id: ""
	I0930 21:11:07.447209   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.447217   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:07.447225   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:07.447235   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:07.496304   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:07.496340   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:07.510332   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:07.510373   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:07.581335   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:07.581375   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:07.581393   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:07.664522   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:07.664558   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:10.201145   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:10.213605   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:10.213663   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:10.247875   73900 cri.go:89] found id: ""
	I0930 21:11:10.247904   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.247913   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:10.247918   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:10.247966   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:10.280855   73900 cri.go:89] found id: ""
	I0930 21:11:10.280889   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.280900   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:10.280907   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:10.280967   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:10.315638   73900 cri.go:89] found id: ""
	I0930 21:11:10.315661   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.315669   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:10.315675   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:10.315722   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:10.357059   73900 cri.go:89] found id: ""
	I0930 21:11:10.357086   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.357094   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:10.357100   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:10.357154   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:10.389969   73900 cri.go:89] found id: ""
	I0930 21:11:10.389997   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.390004   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:10.390009   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:10.390060   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:10.424424   73900 cri.go:89] found id: ""
	I0930 21:11:10.424454   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.424463   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:10.424469   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:10.424533   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:10.457608   73900 cri.go:89] found id: ""
	I0930 21:11:10.457638   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.457650   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:10.457657   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:10.457712   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:10.490215   73900 cri.go:89] found id: ""
	I0930 21:11:10.490244   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.490253   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:10.490263   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:10.490278   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:10.554787   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:10.554814   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:10.554829   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:10.632428   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:10.632464   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:10.671018   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:10.671054   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:10.721187   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:10.721228   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:13.234687   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:13.250680   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:13.250778   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:13.312468   73900 cri.go:89] found id: ""
	I0930 21:11:13.312499   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.312509   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:13.312516   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:13.312578   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:13.367051   73900 cri.go:89] found id: ""
	I0930 21:11:13.367073   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.367084   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:13.367091   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:13.367149   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:13.403019   73900 cri.go:89] found id: ""
	I0930 21:11:13.403055   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.403066   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:13.403074   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:13.403135   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:13.436942   73900 cri.go:89] found id: ""
	I0930 21:11:13.436967   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.436975   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:13.436981   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:13.437047   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:13.470491   73900 cri.go:89] found id: ""
	I0930 21:11:13.470515   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.470523   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:13.470528   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:13.470619   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:13.504078   73900 cri.go:89] found id: ""
	I0930 21:11:13.504112   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.504121   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:13.504127   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:13.504201   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:13.536245   73900 cri.go:89] found id: ""
	I0930 21:11:13.536271   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.536292   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:13.536297   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:13.536357   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:13.570794   73900 cri.go:89] found id: ""
	I0930 21:11:13.570817   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.570827   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:13.570836   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:13.570850   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:13.647919   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:13.647941   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:13.647956   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:13.726113   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:13.726150   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:13.767916   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:13.767942   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:13.826362   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:13.826402   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:16.341252   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:16.354259   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:16.354344   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:16.388627   73900 cri.go:89] found id: ""
	I0930 21:11:16.388650   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.388658   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:16.388663   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:16.388714   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:16.424848   73900 cri.go:89] found id: ""
	I0930 21:11:16.424871   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.424878   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:16.424883   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:16.424941   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:16.460604   73900 cri.go:89] found id: ""
	I0930 21:11:16.460626   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.460635   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:16.460640   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:16.460688   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:16.495908   73900 cri.go:89] found id: ""
	I0930 21:11:16.495932   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.495940   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:16.495946   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:16.496000   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:16.531758   73900 cri.go:89] found id: ""
	I0930 21:11:16.531782   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.531790   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:16.531796   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:16.531853   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:16.566756   73900 cri.go:89] found id: ""
	I0930 21:11:16.566782   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.566792   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:16.566799   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:16.566864   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:16.601978   73900 cri.go:89] found id: ""
	I0930 21:11:16.602005   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.602012   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:16.602022   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:16.602081   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:16.636009   73900 cri.go:89] found id: ""
	I0930 21:11:16.636044   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.636056   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:16.636066   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:16.636079   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:16.688750   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:16.688786   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:16.702364   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:16.702404   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:16.767119   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:16.767175   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:16.767188   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:16.842052   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:16.842095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:19.380570   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:19.394687   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:19.394816   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:19.427087   73900 cri.go:89] found id: ""
	I0930 21:11:19.427116   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.427124   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:19.427129   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:19.427178   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:19.461074   73900 cri.go:89] found id: ""
	I0930 21:11:19.461098   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.461108   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:19.461122   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:19.461183   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:19.494850   73900 cri.go:89] found id: ""
	I0930 21:11:19.494872   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.494880   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:19.494885   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:19.494943   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:19.533448   73900 cri.go:89] found id: ""
	I0930 21:11:19.533480   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.533493   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:19.533500   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:19.533562   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:19.569250   73900 cri.go:89] found id: ""
	I0930 21:11:19.569280   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.569291   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:19.569298   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:19.569383   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:19.603182   73900 cri.go:89] found id: ""
	I0930 21:11:19.603206   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.603213   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:19.603219   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:19.603268   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:19.637411   73900 cri.go:89] found id: ""
	I0930 21:11:19.637433   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.637441   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:19.637447   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:19.637500   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:19.672789   73900 cri.go:89] found id: ""
	I0930 21:11:19.672821   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.672831   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:19.672841   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:19.672854   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:19.755002   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:19.755039   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:19.796499   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:19.796536   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:19.847235   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:19.847272   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:19.861007   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:19.861032   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:19.931214   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:22.431506   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:22.446129   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:22.446199   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:22.484093   73900 cri.go:89] found id: ""
	I0930 21:11:22.484119   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.484126   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:22.484132   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:22.484183   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:22.516949   73900 cri.go:89] found id: ""
	I0930 21:11:22.516986   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.516994   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:22.517001   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:22.517056   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:22.550848   73900 cri.go:89] found id: ""
	I0930 21:11:22.550883   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.550898   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:22.550906   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:22.550966   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:22.586459   73900 cri.go:89] found id: ""
	I0930 21:11:22.586490   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.586498   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:22.586505   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:22.586627   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:22.620538   73900 cri.go:89] found id: ""
	I0930 21:11:22.620566   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.620578   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:22.620586   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:22.620651   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:22.658256   73900 cri.go:89] found id: ""
	I0930 21:11:22.658279   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.658287   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:22.658292   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:22.658352   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:22.690316   73900 cri.go:89] found id: ""
	I0930 21:11:22.690349   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.690365   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:22.690371   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:22.690431   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:22.724234   73900 cri.go:89] found id: ""
	I0930 21:11:22.724264   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.724275   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:22.724285   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:22.724299   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:22.777460   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:22.777503   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:22.790850   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:22.790879   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:22.866058   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:22.866079   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:22.866095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:22.947447   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:22.947488   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:25.486733   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:25.499906   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:25.499976   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:25.533819   73900 cri.go:89] found id: ""
	I0930 21:11:25.533842   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.533850   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:25.533857   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:25.533906   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:25.568037   73900 cri.go:89] found id: ""
	I0930 21:11:25.568059   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.568066   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:25.568071   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:25.568129   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:25.601784   73900 cri.go:89] found id: ""
	I0930 21:11:25.601811   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.601819   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:25.601824   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:25.601876   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:25.638048   73900 cri.go:89] found id: ""
	I0930 21:11:25.638070   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.638078   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:25.638084   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:25.638140   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:25.669946   73900 cri.go:89] found id: ""
	I0930 21:11:25.669968   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.669976   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:25.669981   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:25.670028   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:25.701928   73900 cri.go:89] found id: ""
	I0930 21:11:25.701953   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.701961   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:25.701967   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:25.702025   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:25.744295   73900 cri.go:89] found id: ""
	I0930 21:11:25.744327   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.744335   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:25.744341   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:25.744398   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:25.780175   73900 cri.go:89] found id: ""
	I0930 21:11:25.780205   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.780213   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:25.780221   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:25.780232   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:25.828774   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:25.828812   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:25.842624   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:25.842649   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:25.916408   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:25.916451   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:25.916469   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:25.997896   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:25.997932   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:28.540994   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:28.553841   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:28.553904   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:28.588718   73900 cri.go:89] found id: ""
	I0930 21:11:28.588745   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.588754   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:28.588763   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:28.588809   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:28.636210   73900 cri.go:89] found id: ""
	I0930 21:11:28.636237   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.636245   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:28.636250   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:28.636312   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:28.668714   73900 cri.go:89] found id: ""
	I0930 21:11:28.668743   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.668751   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:28.668757   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:28.668804   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:28.700413   73900 cri.go:89] found id: ""
	I0930 21:11:28.700449   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.700462   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:28.700469   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:28.700522   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:28.733409   73900 cri.go:89] found id: ""
	I0930 21:11:28.733433   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.733441   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:28.733446   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:28.733494   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:28.766917   73900 cri.go:89] found id: ""
	I0930 21:11:28.766957   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.766970   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:28.766979   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:28.767046   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:28.801759   73900 cri.go:89] found id: ""
	I0930 21:11:28.801788   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.801798   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:28.801805   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:28.801851   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:28.840724   73900 cri.go:89] found id: ""
	I0930 21:11:28.840761   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.840770   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:28.840790   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:28.840805   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:28.854426   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:28.854465   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:28.926650   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:28.926675   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:28.926690   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:29.005513   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:29.005569   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:29.047077   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:29.047102   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:31.603193   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:31.615563   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:31.615631   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:31.647656   73900 cri.go:89] found id: ""
	I0930 21:11:31.647685   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.647693   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:31.647699   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:31.647748   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:31.680004   73900 cri.go:89] found id: ""
	I0930 21:11:31.680037   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.680048   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:31.680056   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:31.680120   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:31.712562   73900 cri.go:89] found id: ""
	I0930 21:11:31.712588   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.712596   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:31.712602   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:31.712650   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:31.747692   73900 cri.go:89] found id: ""
	I0930 21:11:31.747724   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.747732   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:31.747738   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:31.747803   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:31.781441   73900 cri.go:89] found id: ""
	I0930 21:11:31.781464   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.781472   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:31.781478   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:31.781532   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:31.822227   73900 cri.go:89] found id: ""
	I0930 21:11:31.822252   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.822259   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:31.822265   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:31.822322   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:31.856531   73900 cri.go:89] found id: ""
	I0930 21:11:31.856555   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.856563   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:31.856568   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:31.856631   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:31.894562   73900 cri.go:89] found id: ""
	I0930 21:11:31.894585   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.894593   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:31.894602   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:31.894618   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:31.946233   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:31.946271   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:31.960713   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:31.960744   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:32.036479   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:32.036497   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:32.036509   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:32.111442   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:32.111477   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:34.651545   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:34.664058   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:34.664121   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:34.697506   73900 cri.go:89] found id: ""
	I0930 21:11:34.697530   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.697539   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:34.697545   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:34.697599   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:34.730297   73900 cri.go:89] found id: ""
	I0930 21:11:34.730326   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.730334   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:34.730339   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:34.730390   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:34.762251   73900 cri.go:89] found id: ""
	I0930 21:11:34.762278   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.762286   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:34.762291   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:34.762358   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:34.803028   73900 cri.go:89] found id: ""
	I0930 21:11:34.803058   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.803068   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:34.803074   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:34.803122   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:34.840063   73900 cri.go:89] found id: ""
	I0930 21:11:34.840097   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.840110   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:34.840118   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:34.840192   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:34.878641   73900 cri.go:89] found id: ""
	I0930 21:11:34.878675   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.878686   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:34.878693   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:34.878745   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:34.910799   73900 cri.go:89] found id: ""
	I0930 21:11:34.910823   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.910830   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:34.910837   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:34.910899   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:34.947748   73900 cri.go:89] found id: ""
	I0930 21:11:34.947782   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.947795   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:34.947806   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:34.947821   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:35.026490   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:35.026514   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:35.026529   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:35.115504   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:35.115559   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:35.158629   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:35.158659   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:35.211011   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:35.211052   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:37.726260   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:37.739137   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:37.739222   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:37.779980   73900 cri.go:89] found id: ""
	I0930 21:11:37.780009   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.780018   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:37.780024   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:37.780076   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:37.813936   73900 cri.go:89] found id: ""
	I0930 21:11:37.813961   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.813969   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:37.813975   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:37.814021   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:37.851150   73900 cri.go:89] found id: ""
	I0930 21:11:37.851176   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.851186   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:37.851193   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:37.851256   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:37.891855   73900 cri.go:89] found id: ""
	I0930 21:11:37.891881   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.891889   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:37.891894   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:37.891943   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:37.929234   73900 cri.go:89] found id: ""
	I0930 21:11:37.929269   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.929281   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:37.929288   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:37.929359   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:37.962350   73900 cri.go:89] found id: ""
	I0930 21:11:37.962378   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.962386   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:37.962391   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:37.962441   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:37.996727   73900 cri.go:89] found id: ""
	I0930 21:11:37.996752   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.996760   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:37.996765   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:37.996819   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:38.029959   73900 cri.go:89] found id: ""
	I0930 21:11:38.029991   73900 logs.go:276] 0 containers: []
	W0930 21:11:38.029999   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:38.030008   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:38.030019   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:38.079836   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:38.079875   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:38.093208   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:38.093236   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:38.168839   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:38.168862   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:38.168873   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:38.244747   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:38.244783   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:40.788841   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:40.802419   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:40.802491   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:40.837138   73900 cri.go:89] found id: ""
	I0930 21:11:40.837175   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.837186   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:40.837193   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:40.837255   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:40.870947   73900 cri.go:89] found id: ""
	I0930 21:11:40.870977   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.870987   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:40.870993   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:40.871040   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:40.905004   73900 cri.go:89] found id: ""
	I0930 21:11:40.905033   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.905046   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:40.905053   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:40.905104   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:40.936909   73900 cri.go:89] found id: ""
	I0930 21:11:40.936937   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.936945   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:40.936952   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:40.937015   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:40.972601   73900 cri.go:89] found id: ""
	I0930 21:11:40.972630   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.972641   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:40.972646   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:40.972704   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:41.007539   73900 cri.go:89] found id: ""
	I0930 21:11:41.007583   73900 logs.go:276] 0 containers: []
	W0930 21:11:41.007594   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:41.007602   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:41.007661   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:41.042049   73900 cri.go:89] found id: ""
	I0930 21:11:41.042075   73900 logs.go:276] 0 containers: []
	W0930 21:11:41.042084   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:41.042091   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:41.042153   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:41.075313   73900 cri.go:89] found id: ""
	I0930 21:11:41.075398   73900 logs.go:276] 0 containers: []
	W0930 21:11:41.075414   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:41.075424   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:41.075440   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:41.128683   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:41.128726   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:41.142533   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:41.142560   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:41.210149   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:41.210176   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:41.210191   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:41.286547   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:41.286590   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:43.828902   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:43.842047   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:43.842127   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:43.876147   73900 cri.go:89] found id: ""
	I0930 21:11:43.876177   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.876187   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:43.876194   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:43.876287   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:43.916351   73900 cri.go:89] found id: ""
	I0930 21:11:43.916383   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.916394   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:43.916404   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:43.916457   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:43.948853   73900 cri.go:89] found id: ""
	I0930 21:11:43.948883   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.948894   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:43.948900   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:43.948967   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:43.983525   73900 cri.go:89] found id: ""
	I0930 21:11:43.983577   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.983589   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:43.983597   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:43.983656   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:44.021560   73900 cri.go:89] found id: ""
	I0930 21:11:44.021594   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.021606   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:44.021614   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:44.021684   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:44.057307   73900 cri.go:89] found id: ""
	I0930 21:11:44.057342   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.057353   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:44.057361   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:44.057418   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:44.091120   73900 cri.go:89] found id: ""
	I0930 21:11:44.091145   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.091155   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:44.091162   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:44.091223   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:44.125781   73900 cri.go:89] found id: ""
	I0930 21:11:44.125808   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.125817   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:44.125827   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:44.125842   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:44.138699   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:44.138726   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:44.208976   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:44.209009   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:44.209026   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:44.285552   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:44.285593   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:44.323412   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:44.323449   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:46.875210   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:46.888532   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:46.888596   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:46.921260   73900 cri.go:89] found id: ""
	I0930 21:11:46.921285   73900 logs.go:276] 0 containers: []
	W0930 21:11:46.921293   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:46.921299   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:46.921357   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:46.954645   73900 cri.go:89] found id: ""
	I0930 21:11:46.954675   73900 logs.go:276] 0 containers: []
	W0930 21:11:46.954683   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:46.954688   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:46.954749   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:46.988424   73900 cri.go:89] found id: ""
	I0930 21:11:46.988457   73900 logs.go:276] 0 containers: []
	W0930 21:11:46.988468   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:46.988475   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:46.988535   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:47.022635   73900 cri.go:89] found id: ""
	I0930 21:11:47.022664   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.022675   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:47.022682   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:47.022744   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:47.056497   73900 cri.go:89] found id: ""
	I0930 21:11:47.056523   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.056530   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:47.056536   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:47.056595   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:47.094983   73900 cri.go:89] found id: ""
	I0930 21:11:47.095011   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.095021   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:47.095028   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:47.095097   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:47.147567   73900 cri.go:89] found id: ""
	I0930 21:11:47.147595   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.147606   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:47.147613   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:47.147692   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:47.184878   73900 cri.go:89] found id: ""
	I0930 21:11:47.184908   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.184919   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:47.184930   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:47.184943   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:47.258581   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:47.258615   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:47.303068   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:47.303100   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:47.358749   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:47.358789   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:47.372492   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:47.372531   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:47.443984   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:49.944644   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:49.958045   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:49.958124   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:49.993053   73900 cri.go:89] found id: ""
	I0930 21:11:49.993088   73900 logs.go:276] 0 containers: []
	W0930 21:11:49.993100   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:49.993107   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:49.993168   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:50.026171   73900 cri.go:89] found id: ""
	I0930 21:11:50.026197   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.026205   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:50.026210   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:50.026269   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:50.060462   73900 cri.go:89] found id: ""
	I0930 21:11:50.060492   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.060502   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:50.060509   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:50.060567   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:50.095385   73900 cri.go:89] found id: ""
	I0930 21:11:50.095414   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.095425   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:50.095432   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:50.095507   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:50.127275   73900 cri.go:89] found id: ""
	I0930 21:11:50.127300   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.127308   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:50.127318   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:50.127378   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:50.159810   73900 cri.go:89] found id: ""
	I0930 21:11:50.159836   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.159845   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:50.159850   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:50.159906   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:50.191651   73900 cri.go:89] found id: ""
	I0930 21:11:50.191684   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.191695   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:50.191702   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:50.191774   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:50.225772   73900 cri.go:89] found id: ""
	I0930 21:11:50.225799   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.225809   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:50.225819   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:50.225837   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:50.310189   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:50.310223   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:50.348934   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:50.348965   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:50.400666   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:50.400703   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:50.415810   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:50.415843   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:50.483773   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:52.984701   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:52.997669   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:52.997745   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:53.034012   73900 cri.go:89] found id: ""
	I0930 21:11:53.034044   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.034055   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:53.034063   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:53.034121   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:53.068192   73900 cri.go:89] found id: ""
	I0930 21:11:53.068215   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.068222   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:53.068228   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:53.068285   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:53.104683   73900 cri.go:89] found id: ""
	I0930 21:11:53.104710   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.104719   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:53.104724   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:53.104778   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:53.138713   73900 cri.go:89] found id: ""
	I0930 21:11:53.138745   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.138753   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:53.138759   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:53.138814   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:53.173955   73900 cri.go:89] found id: ""
	I0930 21:11:53.173982   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.173994   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:53.174001   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:53.174060   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:53.205942   73900 cri.go:89] found id: ""
	I0930 21:11:53.205970   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.205980   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:53.205987   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:53.206052   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:53.241739   73900 cri.go:89] found id: ""
	I0930 21:11:53.241767   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.241776   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:53.241782   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:53.241832   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:53.275328   73900 cri.go:89] found id: ""
	I0930 21:11:53.275363   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.275372   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:53.275381   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:53.275397   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:53.313732   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:53.313761   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:53.364974   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:53.365011   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:53.377970   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:53.377999   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:53.445341   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:53.445370   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:53.445388   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:56.025958   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:56.038367   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:56.038434   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:56.074721   73900 cri.go:89] found id: ""
	I0930 21:11:56.074756   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.074767   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:56.074781   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:56.074846   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:56.111491   73900 cri.go:89] found id: ""
	I0930 21:11:56.111525   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.111550   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:56.111572   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:56.111626   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:56.145660   73900 cri.go:89] found id: ""
	I0930 21:11:56.145690   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.145701   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:56.145708   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:56.145769   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:56.180865   73900 cri.go:89] found id: ""
	I0930 21:11:56.180891   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.180901   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:56.180908   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:56.180971   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:56.213681   73900 cri.go:89] found id: ""
	I0930 21:11:56.213707   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.213716   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:56.213721   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:56.213772   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:56.246683   73900 cri.go:89] found id: ""
	I0930 21:11:56.246711   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.246719   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:56.246724   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:56.246774   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:56.279651   73900 cri.go:89] found id: ""
	I0930 21:11:56.279679   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.279687   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:56.279692   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:56.279746   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:56.316701   73900 cri.go:89] found id: ""
	I0930 21:11:56.316727   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.316735   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:56.316743   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:56.316753   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:56.329879   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:56.329905   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:56.399919   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:56.399949   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:56.399964   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:56.480200   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:56.480237   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:56.517755   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:56.517782   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:59.070677   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:59.085884   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:59.085956   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:59.119580   73900 cri.go:89] found id: ""
	I0930 21:11:59.119606   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.119615   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:59.119621   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:59.119667   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:59.152087   73900 cri.go:89] found id: ""
	I0930 21:11:59.152111   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.152120   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:59.152127   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:59.152172   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:59.186177   73900 cri.go:89] found id: ""
	I0930 21:11:59.186205   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.186213   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:59.186220   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:59.186276   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:59.218800   73900 cri.go:89] found id: ""
	I0930 21:11:59.218821   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.218829   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:59.218835   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:59.218893   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:59.254335   73900 cri.go:89] found id: ""
	I0930 21:11:59.254361   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.254372   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:59.254378   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:59.254432   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:59.292406   73900 cri.go:89] found id: ""
	I0930 21:11:59.292441   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.292453   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:59.292460   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:59.292522   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:59.333352   73900 cri.go:89] found id: ""
	I0930 21:11:59.333388   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.333399   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:59.333406   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:59.333481   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:59.377031   73900 cri.go:89] found id: ""
	I0930 21:11:59.377056   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.377064   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:59.377072   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:59.377084   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:59.392626   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:59.392655   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:59.473714   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:59.473741   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:59.473754   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:59.548895   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:59.548931   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:59.589007   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:59.589039   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:02.139243   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:02.152335   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:02.152415   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:02.186942   73900 cri.go:89] found id: ""
	I0930 21:12:02.186980   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.186991   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:02.186999   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:02.187061   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:02.219738   73900 cri.go:89] found id: ""
	I0930 21:12:02.219759   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.219768   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:02.219773   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:02.219820   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:02.253667   73900 cri.go:89] found id: ""
	I0930 21:12:02.253698   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.253707   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:02.253712   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:02.253760   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:02.290078   73900 cri.go:89] found id: ""
	I0930 21:12:02.290105   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.290115   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:02.290122   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:02.290182   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:02.326408   73900 cri.go:89] found id: ""
	I0930 21:12:02.326436   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.326448   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:02.326455   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:02.326509   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:02.360608   73900 cri.go:89] found id: ""
	I0930 21:12:02.360641   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.360649   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:02.360655   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:02.360714   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:02.396140   73900 cri.go:89] found id: ""
	I0930 21:12:02.396166   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.396176   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:02.396182   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:02.396236   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:02.429905   73900 cri.go:89] found id: ""
	I0930 21:12:02.429947   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.429958   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:02.429968   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:02.429986   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:02.506600   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:02.506645   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:02.549325   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:02.549354   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:02.603614   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:02.603659   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:02.618832   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:02.618859   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:02.692491   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:05.193131   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:05.206133   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:05.206192   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:05.238403   73900 cri.go:89] found id: ""
	I0930 21:12:05.238431   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.238439   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:05.238447   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:05.238523   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:05.271261   73900 cri.go:89] found id: ""
	I0930 21:12:05.271290   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.271303   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:05.271310   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:05.271378   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:05.307718   73900 cri.go:89] found id: ""
	I0930 21:12:05.307749   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.307760   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:05.307767   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:05.307832   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:05.341336   73900 cri.go:89] found id: ""
	I0930 21:12:05.341379   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.341390   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:05.341398   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:05.341461   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:05.374998   73900 cri.go:89] found id: ""
	I0930 21:12:05.375024   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.375032   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:05.375037   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:05.375085   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:05.410133   73900 cri.go:89] found id: ""
	I0930 21:12:05.410163   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.410174   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:05.410182   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:05.410248   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:05.446197   73900 cri.go:89] found id: ""
	I0930 21:12:05.446227   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.446238   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:05.446246   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:05.446305   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:05.480638   73900 cri.go:89] found id: ""
	I0930 21:12:05.480667   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.480683   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:05.480691   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:05.480702   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:05.532473   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:05.532512   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:05.547068   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:05.547096   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:05.621444   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:05.621472   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:05.621487   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:05.707712   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:05.707767   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:08.248038   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:08.261409   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:08.261485   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:08.305564   73900 cri.go:89] found id: ""
	I0930 21:12:08.305591   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.305601   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:08.305610   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:08.305669   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:08.347816   73900 cri.go:89] found id: ""
	I0930 21:12:08.347844   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.347852   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:08.347858   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:08.347927   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:08.381662   73900 cri.go:89] found id: ""
	I0930 21:12:08.381695   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.381705   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:08.381712   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:08.381829   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:08.427366   73900 cri.go:89] found id: ""
	I0930 21:12:08.427396   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.427406   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:08.427413   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:08.427476   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:08.463419   73900 cri.go:89] found id: ""
	I0930 21:12:08.463443   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.463451   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:08.463457   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:08.463508   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:08.496999   73900 cri.go:89] found id: ""
	I0930 21:12:08.497023   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.497033   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:08.497040   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:08.497098   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:08.530410   73900 cri.go:89] found id: ""
	I0930 21:12:08.530434   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.530442   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:08.530447   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:08.530495   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:08.563191   73900 cri.go:89] found id: ""
	I0930 21:12:08.563224   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.563235   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:08.563244   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:08.563258   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:08.640305   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:08.640341   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:08.676404   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:08.676431   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:08.729676   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:08.729736   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:08.743282   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:08.743310   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:08.811334   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:11.311643   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:11.329153   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:11.329229   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:11.369804   73900 cri.go:89] found id: ""
	I0930 21:12:11.369829   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.369838   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:11.369843   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:11.369896   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:11.408530   73900 cri.go:89] found id: ""
	I0930 21:12:11.408558   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.408569   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:11.408580   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:11.408663   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:11.446123   73900 cri.go:89] found id: ""
	I0930 21:12:11.446147   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.446155   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:11.446160   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:11.446206   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:11.484019   73900 cri.go:89] found id: ""
	I0930 21:12:11.484044   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.484052   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:11.484057   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:11.484118   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:11.521934   73900 cri.go:89] found id: ""
	I0930 21:12:11.521961   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.521971   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:11.521979   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:11.522042   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:11.561253   73900 cri.go:89] found id: ""
	I0930 21:12:11.561283   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.561293   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:11.561299   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:11.561352   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:11.602610   73900 cri.go:89] found id: ""
	I0930 21:12:11.602637   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.602648   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:11.602655   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:11.602760   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:11.637146   73900 cri.go:89] found id: ""
	I0930 21:12:11.637174   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.637185   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:11.637194   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:11.637208   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:11.707627   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:11.707651   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:11.707668   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:11.786047   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:11.786091   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:11.827128   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:11.827157   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:11.885504   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:11.885542   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:14.400848   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:14.413794   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:14.413882   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:14.449799   73900 cri.go:89] found id: ""
	I0930 21:12:14.449830   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.449841   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:14.449849   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:14.449902   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:14.486301   73900 cri.go:89] found id: ""
	I0930 21:12:14.486330   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.486357   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:14.486365   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:14.486427   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:14.520451   73900 cri.go:89] found id: ""
	I0930 21:12:14.520479   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.520487   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:14.520497   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:14.520558   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:14.554056   73900 cri.go:89] found id: ""
	I0930 21:12:14.554095   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.554107   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:14.554114   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:14.554178   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:14.594054   73900 cri.go:89] found id: ""
	I0930 21:12:14.594080   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.594088   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:14.594094   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:14.594142   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:14.630225   73900 cri.go:89] found id: ""
	I0930 21:12:14.630255   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.630278   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:14.630284   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:14.630335   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:14.663006   73900 cri.go:89] found id: ""
	I0930 21:12:14.663043   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.663054   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:14.663061   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:14.663119   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:14.699815   73900 cri.go:89] found id: ""
	I0930 21:12:14.699845   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.699858   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:14.699870   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:14.699886   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:14.751465   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:14.751509   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:14.766401   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:14.766432   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:14.832979   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:14.833002   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:14.833016   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:14.918011   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:14.918051   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:17.458886   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:17.471833   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:17.471918   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:17.505109   73900 cri.go:89] found id: ""
	I0930 21:12:17.505135   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.505145   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:17.505151   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:17.505213   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:17.538091   73900 cri.go:89] found id: ""
	I0930 21:12:17.538118   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.538129   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:17.538136   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:17.538308   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:17.571668   73900 cri.go:89] found id: ""
	I0930 21:12:17.571694   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.571705   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:17.571712   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:17.571770   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:17.607391   73900 cri.go:89] found id: ""
	I0930 21:12:17.607431   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.607442   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:17.607452   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:17.607519   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:17.643271   73900 cri.go:89] found id: ""
	I0930 21:12:17.643297   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.643305   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:17.643313   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:17.643382   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:17.676653   73900 cri.go:89] found id: ""
	I0930 21:12:17.676687   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.676698   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:17.676708   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:17.676772   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:17.709570   73900 cri.go:89] found id: ""
	I0930 21:12:17.709602   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.709610   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:17.709615   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:17.709671   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:17.747857   73900 cri.go:89] found id: ""
	I0930 21:12:17.747883   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.747891   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:17.747902   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:17.747915   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:17.824584   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:17.824623   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:17.862613   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:17.862643   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:17.915954   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:17.915992   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:17.929824   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:17.929853   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:17.999697   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
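The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs that appear every few seconds here are a polling loop: minikube keeps checking whether an apiserver process has come up while it waits out the control-plane restart. A rough, hypothetical sketch of such a loop is shown below; only the pgrep pattern comes from the log, while the poll interval and timeout are assumptions and the real retry logic lives in minikube's kubeadm.go.

// waitapiserver.go: illustrative polling loop for the kube-apiserver process.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		// pgrep exits 0 only if a matching kube-apiserver process exists.
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // assumed poll interval
	}
	fmt.Println("timed out waiting for kube-apiserver to appear")
}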
	I0930 21:12:20.500449   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:20.514042   73900 kubeadm.go:597] duration metric: took 4m1.91059878s to restartPrimaryControlPlane
	W0930 21:12:20.514119   73900 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>

	I0930 21:12:20.514158   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0930 21:12:21.675376   73900 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.161176988s)
	I0930 21:12:21.675465   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:12:21.689467   73900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:12:21.698504   73900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:12:21.708418   73900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:12:21.708437   73900 kubeadm.go:157] found existing configuration files:
	
	I0930 21:12:21.708483   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:12:21.716960   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:12:21.717019   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:12:21.727610   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:12:21.736212   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:12:21.736275   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:12:21.745512   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:12:21.754299   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:12:21.754366   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:12:21.763724   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:12:21.772521   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:12:21.772595   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
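Each grep/rm pair above applies the same stale-config rule: a kubeconfig file under /etc/kubernetes is kept only if it references https://control-plane.minikube.internal:8443, and is otherwise removed before kubeadm init runs (here every grep fails because the files do not exist at all). A standalone sketch of that rule follows; the file list and endpoint are taken from the log, and the helper itself is hypothetical rather than minikube's implementation.

// cleanconf.go: sketch of the stale kubeconfig cleanup shown above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		// grep exits non-zero when the file is missing or the endpoint is absent;
		// in both cases the file is treated as stale and removed.
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("%s does not reference %s, removing\n", conf, endpoint)
			_ = exec.Command("sudo", "rm", "-f", conf).Run()
		}
	}
}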
	I0930 21:12:21.782980   73900 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 21:12:21.850463   73900 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0930 21:12:21.850558   73900 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 21:12:21.991521   73900 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 21:12:21.991706   73900 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 21:12:21.991849   73900 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0930 21:12:22.174876   73900 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 21:12:22.177037   73900 out.go:235]   - Generating certificates and keys ...
	I0930 21:12:22.177155   73900 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 21:12:22.177253   73900 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 21:12:22.177379   73900 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 21:12:22.178789   73900 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 21:12:22.178860   73900 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 21:12:22.178907   73900 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 21:12:22.178961   73900 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 21:12:22.179017   73900 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 21:12:22.179139   73900 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 21:12:22.179247   73900 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 21:12:22.179310   73900 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 21:12:22.179398   73900 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 21:12:22.253256   73900 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 21:12:22.661237   73900 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 21:12:22.947987   73900 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 21:12:23.170995   73900 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 21:12:23.184583   73900 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 21:12:23.185770   73900 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 21:12:23.185813   73900 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 21:12:23.334769   73900 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 21:12:23.336604   73900 out.go:235]   - Booting up control plane ...
	I0930 21:12:23.336747   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 21:12:23.345737   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 21:12:23.346784   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 21:12:23.347559   73900 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 21:12:23.351009   73900 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0930 21:13:03.351822   73900 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0930 21:13:03.352632   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:03.352833   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:13:08.353230   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:08.353429   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:13:18.354150   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:18.354468   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:13:38.355123   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:38.355330   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:14:18.357098   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:14:18.357396   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:14:18.357419   73900 kubeadm.go:310] 
	I0930 21:14:18.357473   73900 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0930 21:14:18.357541   73900 kubeadm.go:310] 		timed out waiting for the condition
	I0930 21:14:18.357554   73900 kubeadm.go:310] 
	I0930 21:14:18.357609   73900 kubeadm.go:310] 	This error is likely caused by:
	I0930 21:14:18.357659   73900 kubeadm.go:310] 		- The kubelet is not running
	I0930 21:14:18.357801   73900 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0930 21:14:18.357817   73900 kubeadm.go:310] 
	I0930 21:14:18.357964   73900 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0930 21:14:18.357996   73900 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0930 21:14:18.358028   73900 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0930 21:14:18.358039   73900 kubeadm.go:310] 
	I0930 21:14:18.358174   73900 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0930 21:14:18.358318   73900 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0930 21:14:18.358331   73900 kubeadm.go:310] 
	I0930 21:14:18.358510   73900 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0930 21:14:18.358646   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0930 21:14:18.358764   73900 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0930 21:14:18.358866   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0930 21:14:18.358882   73900 kubeadm.go:310] 
	I0930 21:14:18.359454   73900 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 21:14:18.359595   73900 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0930 21:14:18.359681   73900 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
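Every [kubelet-check] line in the attempt above is the result of kubeadm probing the kubelet's health endpoint, which the output itself describes as equivalent to `curl -sSL http://localhost:10248/healthz`. A minimal sketch of that probe is shown below; the port and URL are from the log, while the timeout value is an assumption. In this run the probe would keep failing with "connection refused", matching the repeated kubelet-check failures.

// kubeletcheck.go: sketch of the kubelet healthz probe reported above.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second} // assumed timeout
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// This is the "connection refused" case seen repeatedly in the log:
		// the kubelet is not listening on its healthz port at all.
		fmt.Println("kubelet healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %s %s\n", resp.Status, string(body))
}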
	W0930 21:14:18.359797   73900 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0930 21:14:18.359841   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0930 21:14:18.820244   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:14:18.834938   73900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:14:18.844779   73900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:14:18.844803   73900 kubeadm.go:157] found existing configuration files:
	
	I0930 21:14:18.844856   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:14:18.853738   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:14:18.853811   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:14:18.863366   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:14:18.872108   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:14:18.872164   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:14:18.881818   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:14:18.890916   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:14:18.890969   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:14:18.900075   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:14:18.908449   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:14:18.908520   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:14:18.917163   73900 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 21:14:18.983181   73900 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0930 21:14:18.983233   73900 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 21:14:19.121356   73900 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 21:14:19.121545   73900 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 21:14:19.121674   73900 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0930 21:14:19.306639   73900 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 21:14:19.309593   73900 out.go:235]   - Generating certificates and keys ...
	I0930 21:14:19.309683   73900 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 21:14:19.309748   73900 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 21:14:19.309870   73900 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 21:14:19.309957   73900 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 21:14:19.310040   73900 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 21:14:19.310119   73900 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 21:14:19.310209   73900 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 21:14:19.310292   73900 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 21:14:19.310404   73900 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 21:14:19.310511   73900 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 21:14:19.310567   73900 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 21:14:19.310654   73900 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 21:14:19.453872   73900 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 21:14:19.621232   73900 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 21:14:19.797694   73900 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 21:14:19.886897   73900 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 21:14:19.909016   73900 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 21:14:19.910536   73900 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 21:14:19.910617   73900 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 21:14:20.052878   73900 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 21:14:20.054739   73900 out.go:235]   - Booting up control plane ...
	I0930 21:14:20.054881   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 21:14:20.068419   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 21:14:20.068512   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 21:14:20.068697   73900 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 21:14:20.072015   73900 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0930 21:15:00.073988   73900 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0930 21:15:00.074795   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:00.075068   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:15:05.075810   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:05.076061   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:15:15.076695   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:15.076928   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:15:35.077652   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:35.077862   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:16:15.076816   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:16:15.077063   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:16:15.077082   73900 kubeadm.go:310] 
	I0930 21:16:15.077136   73900 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0930 21:16:15.077188   73900 kubeadm.go:310] 		timed out waiting for the condition
	I0930 21:16:15.077198   73900 kubeadm.go:310] 
	I0930 21:16:15.077246   73900 kubeadm.go:310] 	This error is likely caused by:
	I0930 21:16:15.077298   73900 kubeadm.go:310] 		- The kubelet is not running
	I0930 21:16:15.077425   73900 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0930 21:16:15.077442   73900 kubeadm.go:310] 
	I0930 21:16:15.077605   73900 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0930 21:16:15.077651   73900 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0930 21:16:15.077710   73900 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0930 21:16:15.077718   73900 kubeadm.go:310] 
	I0930 21:16:15.077851   73900 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0930 21:16:15.077997   73900 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0930 21:16:15.078013   73900 kubeadm.go:310] 
	I0930 21:16:15.078143   73900 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0930 21:16:15.078229   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0930 21:16:15.078309   73900 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0930 21:16:15.078419   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0930 21:16:15.078431   73900 kubeadm.go:310] 
	I0930 21:16:15.079235   73900 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 21:16:15.079365   73900 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0930 21:16:15.079442   73900 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0930 21:16:15.079572   73900 kubeadm.go:394] duration metric: took 7m56.529269567s to StartCluster
	I0930 21:16:15.079639   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:16:15.079713   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:16:15.122057   73900 cri.go:89] found id: ""
	I0930 21:16:15.122086   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.122098   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:16:15.122105   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:16:15.122166   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:16:15.156244   73900 cri.go:89] found id: ""
	I0930 21:16:15.156278   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.156289   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:16:15.156297   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:16:15.156357   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:16:15.188952   73900 cri.go:89] found id: ""
	I0930 21:16:15.188977   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.188989   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:16:15.188996   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:16:15.189058   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:16:15.219400   73900 cri.go:89] found id: ""
	I0930 21:16:15.219427   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.219435   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:16:15.219441   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:16:15.219501   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:16:15.252049   73900 cri.go:89] found id: ""
	I0930 21:16:15.252078   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.252086   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:16:15.252093   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:16:15.252150   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:16:15.286560   73900 cri.go:89] found id: ""
	I0930 21:16:15.286594   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.286605   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:16:15.286614   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:16:15.286679   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:16:15.319140   73900 cri.go:89] found id: ""
	I0930 21:16:15.319178   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.319187   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:16:15.319192   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:16:15.319245   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:16:15.351299   73900 cri.go:89] found id: ""
	I0930 21:16:15.351322   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.351330   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
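The block above enumerates each expected component with `sudo crictl ps -a --quiet --name=<component>` and finds no container for any of them. A compact sketch of that enumeration follows; the component list is copied from the log, while the helper program itself is hypothetical and only illustrates the pattern, not minikube's cri.go code.

// listcomponents.go: sketch of the per-component crictl enumeration above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}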
	I0930 21:16:15.351339   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:16:15.351350   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:16:15.402837   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:16:15.402882   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:16:15.417111   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:16:15.417140   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:16:15.492593   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:16:15.492614   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:16:15.492627   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:16:15.621646   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:16:15.621681   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0930 21:16:15.660480   73900 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
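The kubeadm error above recommends listing kube containers directly against the CRI-O socket. The small sketch below runs exactly that pipeline via bash -c so the shell pipes work; the socket path and pipeline come from the error text, and wrapping it in a Go program is only for illustration.

// troubleshoot.go: sketch running the crictl pipeline suggested in the error.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := `crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause`
	out, err := exec.Command("sudo", "/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		// grep exits non-zero when nothing matches, which is the failure mode
		// here: no kube-* containers were ever created by the runtime.
		fmt.Println("no kube containers found (or command failed):", err)
	}
	fmt.Print(string(out))
}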
	W0930 21:16:15.660528   73900 out.go:270] * 
	W0930 21:16:15.660580   73900 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0930 21:16:15.660595   73900 out.go:270] * 
	* 
	W0930 21:16:15.661387   73900 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 21:16:15.665510   73900 out.go:201] 
	W0930 21:16:15.667332   73900 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0930 21:16:15.667373   73900 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0930 21:16:15.667390   73900 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0930 21:16:15.668812   73900 out.go:201] 

                                                
                                                
** /stderr **
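The kubeadm output above already names the triage steps (kubelet status, kubelet journal, crictl container listing). A minimal sketch of how those could be run by hand against this profile from the test host, assuming the old-k8s-version-621406 VM is still reachable over SSH via the kvm2 driver; these commands were not part of the recorded run:

	# check whether the kubelet service is running inside the minikube VM
	minikube -p old-k8s-version-621406 ssh -- sudo systemctl status kubelet --no-pager
	# tail the kubelet journal for the actual startup error
	minikube -p old-k8s-version-621406 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
	# list control-plane containers in cri-o, as suggested by kubeadm
	minikube -p old-k8s-version-621406 ssh -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"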
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-621406 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
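If the cgroup-driver mismatch suggested in the minikube output above is the culprit, its own advice could be retried manually with the same start arguments plus the extra kubelet config (a hypothetical re-run for illustration, not part of this recorded test):

	out/minikube-linux-amd64 start -p old-k8s-version-621406 --memory=2200 \
	  --alsologtostderr --wait=true --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd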
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-621406 -n old-k8s-version-621406
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-621406 -n old-k8s-version-621406: exit status 2 (227.538839ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-621406 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-621406 logs -n 25: (1.595342409s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-207733 sudo                                 | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo                                 | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo                                 | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo find                            | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo crio                            | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-207733                                      | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-741890 | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | disable-driver-mounts-741890                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 21:00 UTC |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-256103            | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-997816             | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-997816                                   | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-291511  | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-621406        | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-256103                 | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC | 30 Sep 24 21:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-997816                  | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-997816                                   | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC | 30 Sep 24 21:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-291511       | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:12 UTC |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-621406                              | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:03 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-621406             | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-621406                              | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 21:03:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 21:03:42.750102   73900 out.go:345] Setting OutFile to fd 1 ...
	I0930 21:03:42.750367   73900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:03:42.750377   73900 out.go:358] Setting ErrFile to fd 2...
	I0930 21:03:42.750383   73900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:03:42.750578   73900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 21:03:42.751109   73900 out.go:352] Setting JSON to false
	I0930 21:03:42.752040   73900 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6366,"bootTime":1727723857,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 21:03:42.752140   73900 start.go:139] virtualization: kvm guest
	I0930 21:03:42.754146   73900 out.go:177] * [old-k8s-version-621406] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 21:03:42.755446   73900 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 21:03:42.755456   73900 notify.go:220] Checking for updates...
	I0930 21:03:42.758261   73900 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 21:03:42.759566   73900 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:03:42.760907   73900 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 21:03:42.762342   73900 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 21:03:42.763561   73900 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 21:03:42.765356   73900 config.go:182] Loaded profile config "old-k8s-version-621406": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0930 21:03:42.765773   73900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:03:42.765822   73900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:03:42.780605   73900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45071
	I0930 21:03:42.781022   73900 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:03:42.781550   73900 main.go:141] libmachine: Using API Version  1
	I0930 21:03:42.781583   73900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:03:42.781912   73900 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:03:42.782160   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:03:42.784603   73900 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0930 21:03:42.785760   73900 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 21:03:42.786115   73900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:03:42.786156   73900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:03:42.800937   73900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37359
	I0930 21:03:42.801409   73900 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:03:42.801882   73900 main.go:141] libmachine: Using API Version  1
	I0930 21:03:42.801905   73900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:03:42.802216   73900 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:03:42.802397   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:03:42.838423   73900 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 21:03:42.839832   73900 start.go:297] selected driver: kvm2
	I0930 21:03:42.839847   73900 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:03:42.839953   73900 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 21:03:42.840605   73900 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 21:03:42.840667   73900 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 21:03:42.856119   73900 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 21:03:42.856550   73900 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:03:42.856580   73900 cni.go:84] Creating CNI manager for ""
	I0930 21:03:42.856630   73900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:03:42.856665   73900 start.go:340] cluster config:
	{Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:03:42.856778   73900 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 21:03:42.858732   73900 out.go:177] * Starting "old-k8s-version-621406" primary control-plane node in "old-k8s-version-621406" cluster
	I0930 21:03:42.859876   73900 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 21:03:42.859912   73900 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0930 21:03:42.859929   73900 cache.go:56] Caching tarball of preloaded images
	I0930 21:03:42.860020   73900 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 21:03:42.860031   73900 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0930 21:03:42.860153   73900 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/config.json ...
	I0930 21:03:42.860340   73900 start.go:360] acquireMachinesLock for old-k8s-version-621406: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 21:03:44.619810   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:03:47.691872   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:03:53.771838   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:03:56.843848   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:02.923822   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:05.995871   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:12.075814   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:15.147854   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:21.227790   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:24.299842   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:30.379801   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:33.451787   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:39.531808   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:42.603838   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:48.683904   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:51.755939   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:57.835834   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:00.907789   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:06.987875   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:10.059892   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:16.139832   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:19.211908   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:25.291812   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:28.363915   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:34.443827   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:37.515928   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:43.595824   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:46.667934   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:52.747851   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:55.819883   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:01.899789   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:04.971946   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:11.051812   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:14.123833   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:20.203805   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:23.275875   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:29.355806   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:32.427931   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:38.507837   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:41.579909   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:47.659786   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:50.731827   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:56.811833   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:59.883878   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:07:05.963833   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:07:09.035828   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:07:12.040058   73375 start.go:364] duration metric: took 4m26.951572628s to acquireMachinesLock for "no-preload-997816"
	I0930 21:07:12.040115   73375 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:07:12.040126   73375 fix.go:54] fixHost starting: 
	I0930 21:07:12.040448   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:12.040485   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:12.057054   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37473
	I0930 21:07:12.057624   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:12.058143   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:12.058173   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:12.058523   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:12.058739   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:12.058873   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:12.060479   73375 fix.go:112] recreateIfNeeded on no-preload-997816: state=Stopped err=<nil>
	I0930 21:07:12.060499   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	W0930 21:07:12.060640   73375 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:07:12.062653   73375 out.go:177] * Restarting existing kvm2 VM for "no-preload-997816" ...
	I0930 21:07:12.037683   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:07:12.037732   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:07:12.038031   73256 buildroot.go:166] provisioning hostname "embed-certs-256103"
	I0930 21:07:12.038055   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:07:12.038234   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:07:12.039910   73256 machine.go:96] duration metric: took 4m37.42208497s to provisionDockerMachine
	I0930 21:07:12.039954   73256 fix.go:56] duration metric: took 4m37.444804798s for fixHost
	I0930 21:07:12.039962   73256 start.go:83] releasing machines lock for "embed-certs-256103", held for 4m37.444833727s
	W0930 21:07:12.039989   73256 start.go:714] error starting host: provision: host is not running
	W0930 21:07:12.040104   73256 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0930 21:07:12.040116   73256 start.go:729] Will try again in 5 seconds ...
	I0930 21:07:12.063941   73375 main.go:141] libmachine: (no-preload-997816) Calling .Start
	I0930 21:07:12.064167   73375 main.go:141] libmachine: (no-preload-997816) Ensuring networks are active...
	I0930 21:07:12.065080   73375 main.go:141] libmachine: (no-preload-997816) Ensuring network default is active
	I0930 21:07:12.065489   73375 main.go:141] libmachine: (no-preload-997816) Ensuring network mk-no-preload-997816 is active
	I0930 21:07:12.065993   73375 main.go:141] libmachine: (no-preload-997816) Getting domain xml...
	I0930 21:07:12.066923   73375 main.go:141] libmachine: (no-preload-997816) Creating domain...
	I0930 21:07:13.297091   73375 main.go:141] libmachine: (no-preload-997816) Waiting to get IP...
	I0930 21:07:13.297965   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:13.298386   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:13.298473   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:13.298370   74631 retry.go:31] will retry after 312.032565ms: waiting for machine to come up
	I0930 21:07:13.612088   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:13.612583   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:13.612607   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:13.612519   74631 retry.go:31] will retry after 292.985742ms: waiting for machine to come up
	I0930 21:07:13.907355   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:13.907794   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:13.907817   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:13.907754   74631 retry.go:31] will retry after 451.618632ms: waiting for machine to come up
	I0930 21:07:14.361536   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:14.361990   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:14.362054   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:14.361947   74631 retry.go:31] will retry after 599.246635ms: waiting for machine to come up
	I0930 21:07:14.962861   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:14.963341   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:14.963369   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:14.963294   74631 retry.go:31] will retry after 748.726096ms: waiting for machine to come up
	I0930 21:07:17.040758   73256 start.go:360] acquireMachinesLock for embed-certs-256103: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 21:07:15.713258   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:15.713576   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:15.713601   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:15.713525   74631 retry.go:31] will retry after 907.199669ms: waiting for machine to come up
	I0930 21:07:16.622784   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:16.623275   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:16.623307   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:16.623211   74631 retry.go:31] will retry after 744.978665ms: waiting for machine to come up
	I0930 21:07:17.369735   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:17.370206   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:17.370231   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:17.370154   74631 retry.go:31] will retry after 1.238609703s: waiting for machine to come up
	I0930 21:07:18.610618   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:18.610967   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:18.610989   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:18.610928   74631 retry.go:31] will retry after 1.354775356s: waiting for machine to come up
	I0930 21:07:19.967473   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:19.967892   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:19.967916   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:19.967851   74631 retry.go:31] will retry after 2.26449082s: waiting for machine to come up
	I0930 21:07:22.234066   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:22.234514   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:22.234536   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:22.234474   74631 retry.go:31] will retry after 2.728158374s: waiting for machine to come up
	I0930 21:07:24.966375   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:24.966759   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:24.966782   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:24.966724   74631 retry.go:31] will retry after 3.119117729s: waiting for machine to come up
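The repeated "will retry after …: waiting for machine to come up" lines are minikube's retry helper polling libvirt until the domain's DHCP lease exposes an IP address, with the delay growing between attempts. A minimal sketch of that wait-with-backoff pattern, assuming a hypothetical lookupIP helper and illustrative delays rather than minikube's actual retry.go API:

package main

import (
	"errors"
	"fmt"
	"time"
)

// errNoIP stands in for the "unable to find current IP address" condition in the log.
var errNoIP = errors.New("no IP address yet")

// lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases
// for the domain's MAC address (52:54:00:cb:3d:73 in the log above).
func lookupIP() (string, error) {
	return "", errNoIP // pretend the lease has not appeared yet
}

// waitForIP retries lookupIP with a growing delay until it succeeds or the
// deadline passes, mirroring the "will retry after Xms" lines in the log.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the back-off between attempts
	}
	return "", fmt.Errorf("machine did not get an IP within %v", timeout)
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}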
	I0930 21:07:29.336238   73707 start.go:364] duration metric: took 3m58.92874513s to acquireMachinesLock for "default-k8s-diff-port-291511"
	I0930 21:07:29.336327   73707 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:07:29.336347   73707 fix.go:54] fixHost starting: 
	I0930 21:07:29.336726   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:29.336779   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:29.354404   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45095
	I0930 21:07:29.354848   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:29.355331   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:07:29.355352   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:29.355882   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:29.356081   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:29.356249   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:07:29.358109   73707 fix.go:112] recreateIfNeeded on default-k8s-diff-port-291511: state=Stopped err=<nil>
	I0930 21:07:29.358155   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	W0930 21:07:29.358336   73707 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:07:29.361072   73707 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-291511" ...
	I0930 21:07:28.087153   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.087604   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has current primary IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.087636   73375 main.go:141] libmachine: (no-preload-997816) Found IP for machine: 192.168.61.93
	I0930 21:07:28.087644   73375 main.go:141] libmachine: (no-preload-997816) Reserving static IP address...
	I0930 21:07:28.088047   73375 main.go:141] libmachine: (no-preload-997816) Reserved static IP address: 192.168.61.93
	I0930 21:07:28.088068   73375 main.go:141] libmachine: (no-preload-997816) Waiting for SSH to be available...
	I0930 21:07:28.088090   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "no-preload-997816", mac: "52:54:00:cb:3d:73", ip: "192.168.61.93"} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.088158   73375 main.go:141] libmachine: (no-preload-997816) DBG | skip adding static IP to network mk-no-preload-997816 - found existing host DHCP lease matching {name: "no-preload-997816", mac: "52:54:00:cb:3d:73", ip: "192.168.61.93"}
	I0930 21:07:28.088181   73375 main.go:141] libmachine: (no-preload-997816) DBG | Getting to WaitForSSH function...
	I0930 21:07:28.090195   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.090522   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.090547   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.090722   73375 main.go:141] libmachine: (no-preload-997816) DBG | Using SSH client type: external
	I0930 21:07:28.090739   73375 main.go:141] libmachine: (no-preload-997816) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa (-rw-------)
	I0930 21:07:28.090767   73375 main.go:141] libmachine: (no-preload-997816) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.93 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:07:28.090787   73375 main.go:141] libmachine: (no-preload-997816) DBG | About to run SSH command:
	I0930 21:07:28.090801   73375 main.go:141] libmachine: (no-preload-997816) DBG | exit 0
	I0930 21:07:28.211669   73375 main.go:141] libmachine: (no-preload-997816) DBG | SSH cmd err, output: <nil>: 
	I0930 21:07:28.212073   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetConfigRaw
	I0930 21:07:28.212714   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetIP
	I0930 21:07:28.215442   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.215934   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.215951   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.216186   73375 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/config.json ...
	I0930 21:07:28.216370   73375 machine.go:93] provisionDockerMachine start ...
	I0930 21:07:28.216386   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:28.216575   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.218963   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.219423   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.219455   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.219604   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.219770   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.219948   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.220057   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.220252   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:28.220441   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:28.220452   73375 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:07:28.315814   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:07:28.315853   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetMachineName
	I0930 21:07:28.316131   73375 buildroot.go:166] provisioning hostname "no-preload-997816"
	I0930 21:07:28.316161   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetMachineName
	I0930 21:07:28.316372   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.319253   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.319506   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.319548   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.319711   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.319903   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.320057   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.320182   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.320383   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:28.320592   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:28.320606   73375 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-997816 && echo "no-preload-997816" | sudo tee /etc/hostname
	I0930 21:07:28.433652   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-997816
	
	I0930 21:07:28.433686   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.436989   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.437350   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.437389   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.437611   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.437784   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.437957   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.438075   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.438267   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:28.438487   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:28.438512   73375 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-997816' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-997816/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-997816' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:07:28.544056   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:07:28.544088   73375 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:07:28.544112   73375 buildroot.go:174] setting up certificates
	I0930 21:07:28.544122   73375 provision.go:84] configureAuth start
	I0930 21:07:28.544135   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetMachineName
	I0930 21:07:28.544418   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetIP
	I0930 21:07:28.546960   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.547363   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.547384   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.547570   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.549918   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.550325   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.550353   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.550535   73375 provision.go:143] copyHostCerts
	I0930 21:07:28.550612   73375 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:07:28.550627   73375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:07:28.550711   73375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:07:28.550804   73375 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:07:28.550812   73375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:07:28.550837   73375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:07:28.550893   73375 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:07:28.550900   73375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:07:28.550920   73375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:07:28.550967   73375 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.no-preload-997816 san=[127.0.0.1 192.168.61.93 localhost minikube no-preload-997816]
	I0930 21:07:28.744306   73375 provision.go:177] copyRemoteCerts
	I0930 21:07:28.744364   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:07:28.744386   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.747024   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.747368   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.747401   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.747615   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.747813   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.747973   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.748133   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:28.825616   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0930 21:07:28.849513   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 21:07:28.872666   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:07:28.895673   73375 provision.go:87] duration metric: took 351.536833ms to configureAuth
	I0930 21:07:28.895708   73375 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:07:28.895896   73375 config.go:182] Loaded profile config "no-preload-997816": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:07:28.895975   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.898667   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.899067   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.899098   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.899324   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.899567   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.899703   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.899829   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.899946   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:28.900120   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:28.900134   73375 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:07:29.113855   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:07:29.113877   73375 machine.go:96] duration metric: took 897.495238ms to provisionDockerMachine
	I0930 21:07:29.113887   73375 start.go:293] postStartSetup for "no-preload-997816" (driver="kvm2")
	I0930 21:07:29.113897   73375 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:07:29.113921   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.114220   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:07:29.114254   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:29.117274   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.117619   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.117663   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.117816   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:29.118010   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.118159   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:29.118289   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:29.197962   73375 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:07:29.202135   73375 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:07:29.202166   73375 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:07:29.202237   73375 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:07:29.202321   73375 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:07:29.202406   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:07:29.211693   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:07:29.234503   73375 start.go:296] duration metric: took 120.601484ms for postStartSetup
	I0930 21:07:29.234582   73375 fix.go:56] duration metric: took 17.194433455s for fixHost
	I0930 21:07:29.234610   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:29.237134   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.237544   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.237574   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.237728   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:29.237912   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.238085   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.238199   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:29.238348   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:29.238506   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:29.238515   73375 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:07:29.336092   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730449.310327649
	
	I0930 21:07:29.336114   73375 fix.go:216] guest clock: 1727730449.310327649
	I0930 21:07:29.336123   73375 fix.go:229] Guest: 2024-09-30 21:07:29.310327649 +0000 UTC Remote: 2024-09-30 21:07:29.234588814 +0000 UTC m=+284.288095935 (delta=75.738835ms)
	I0930 21:07:29.336147   73375 fix.go:200] guest clock delta is within tolerance: 75.738835ms
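The clock-skew check above reads the guest clock over SSH with `date +%s.%N`, compares it against the host-side timestamp, and checks that the delta falls within a tolerance ("guest clock delta is within tolerance: 75.738835ms"). A minimal sketch of that comparison; the one-second tolerance used here is an assumption for illustration, not a value taken from fix.go:

package main

import (
	"fmt"
	"math"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the host
// clock, as in the "guest clock delta is within tolerance" line above.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	// Values taken from the log: guest 1727730449.310327649, host ~75.74ms earlier.
	guest := time.Unix(1727730449, 310327649)
	host := guest.Add(-75738835 * time.Nanosecond)

	// The one-second tolerance here is illustrative only.
	delta, ok := withinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}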
	I0930 21:07:29.336153   73375 start.go:83] releasing machines lock for "no-preload-997816", held for 17.296055752s
	I0930 21:07:29.336194   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.336478   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetIP
	I0930 21:07:29.339488   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.339864   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.339909   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.340070   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.340525   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.340697   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.340800   73375 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:07:29.340836   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:29.340930   73375 ssh_runner.go:195] Run: cat /version.json
	I0930 21:07:29.340955   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:29.343579   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.343941   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.343976   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.344010   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.344228   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:29.344405   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.344441   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.344471   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.344543   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:29.344616   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:29.344689   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:29.344784   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.344966   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:29.345105   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:29.420949   73375 ssh_runner.go:195] Run: systemctl --version
	I0930 21:07:29.465854   73375 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:07:29.616360   73375 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:07:29.624522   73375 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:07:29.624604   73375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:07:29.642176   73375 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 21:07:29.642202   73375 start.go:495] detecting cgroup driver to use...
	I0930 21:07:29.642279   73375 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:07:29.657878   73375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:07:29.674555   73375 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:07:29.674614   73375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:07:29.690953   73375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:07:29.705425   73375 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:07:29.814602   73375 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:07:29.957009   73375 docker.go:233] disabling docker service ...
	I0930 21:07:29.957091   73375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:07:29.971419   73375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:07:29.362775   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Start
	I0930 21:07:29.363023   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Ensuring networks are active...
	I0930 21:07:29.364071   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Ensuring network default is active
	I0930 21:07:29.364456   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Ensuring network mk-default-k8s-diff-port-291511 is active
	I0930 21:07:29.364940   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Getting domain xml...
	I0930 21:07:29.365759   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Creating domain...
	I0930 21:07:29.987509   73375 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:07:30.112952   73375 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:07:30.239945   73375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:07:30.253298   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:07:30.271687   73375 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 21:07:30.271768   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.282267   73375 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:07:30.282339   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.292776   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.303893   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.315002   73375 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:07:30.326410   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.336951   73375 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.356016   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.367847   73375 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:07:30.378650   73375 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:07:30.378703   73375 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:07:30.391768   73375 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 21:07:30.401887   73375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:30.534771   73375 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 21:07:30.622017   73375 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:07:30.622087   73375 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:07:30.627221   73375 start.go:563] Will wait 60s for crictl version
	I0930 21:07:30.627294   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:30.633071   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:07:30.675743   73375 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
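After restarting CRI-O, the run waits up to 60s for the socket at /var/run/crio/crio.sock and for `crictl version` to respond before proceeding. A minimal local sketch of such a wait loop, assuming a hypothetical waitForSocket helper and an illustrative 500ms polling interval (not minikube's ssh_runner API):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists (e.g. /var/run/crio/crio.sock) or the
// timeout elapses, mirroring "Will wait 60s for socket path ..." above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is present; safe to run `crictl version`
		}
		time.Sleep(500 * time.Millisecond) // polling interval is illustrative
	}
	return fmt.Errorf("%s did not appear within %v", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}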
	I0930 21:07:30.675830   73375 ssh_runner.go:195] Run: crio --version
	I0930 21:07:30.703470   73375 ssh_runner.go:195] Run: crio --version
	I0930 21:07:30.732424   73375 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 21:07:30.733714   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetIP
	I0930 21:07:30.737016   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:30.737380   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:30.737421   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:30.737690   73375 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0930 21:07:30.741714   73375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:07:30.754767   73375 kubeadm.go:883] updating cluster {Name:no-preload-997816 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-997816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:07:30.754892   73375 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 21:07:30.754941   73375 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:07:30.794489   73375 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 21:07:30.794516   73375 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0930 21:07:30.794605   73375 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:30.794624   73375 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:30.794653   73375 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:30.794694   73375 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:30.794733   73375 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:30.794691   73375 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:30.794822   73375 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:30.794836   73375 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0930 21:07:30.796508   73375 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:30.796521   73375 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:30.796538   73375 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:30.796543   73375 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:30.796610   73375 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:30.796616   73375 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:30.796611   73375 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0930 21:07:30.796665   73375 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.018683   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0930 21:07:31.028097   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.117252   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.131998   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.136871   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.140418   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.170883   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.171059   73375 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0930 21:07:31.171098   73375 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.171142   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.172908   73375 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0930 21:07:31.172951   73375 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.172994   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.242489   73375 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0930 21:07:31.242541   73375 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.242609   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.246685   73375 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0930 21:07:31.246731   73375 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.246758   73375 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0930 21:07:31.246778   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.246794   73375 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.246837   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.270923   73375 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0930 21:07:31.270971   73375 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.271024   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.271030   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.271100   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.271109   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.271207   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.271269   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.387993   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.388011   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.388044   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.388091   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.388150   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.388230   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.523098   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.523156   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.523300   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.523344   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.523467   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.623696   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0930 21:07:31.623759   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.623778   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0930 21:07:31.623794   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0930 21:07:31.623869   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0930 21:07:31.632927   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0930 21:07:31.633014   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0930 21:07:31.633117   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.633206   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0930 21:07:31.633269   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0930 21:07:31.648925   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0930 21:07:31.648945   73375 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0930 21:07:31.648983   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0930 21:07:31.676886   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0930 21:07:31.676925   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0930 21:07:31.709210   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0930 21:07:31.709287   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0930 21:07:31.709331   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0930 21:07:31.709394   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0930 21:07:31.709330   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0930 21:07:32.112418   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:33.634620   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.985614953s)
	I0930 21:07:33.634656   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0930 21:07:33.634702   73375 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (1.925342294s)
	I0930 21:07:33.634716   73375 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0930 21:07:33.634731   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0930 21:07:33.634771   73375 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.925359685s)
	I0930 21:07:33.634779   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0930 21:07:33.634782   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0930 21:07:33.634853   73375 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.522405881s)
	I0930 21:07:33.634891   73375 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0930 21:07:33.634913   73375 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:33.634961   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:30.643828   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting to get IP...
	I0930 21:07:30.644936   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:30.645382   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:30.645484   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:30.645381   74769 retry.go:31] will retry after 216.832119ms: waiting for machine to come up
	I0930 21:07:30.863953   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:30.864583   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:30.864614   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:30.864518   74769 retry.go:31] will retry after 280.448443ms: waiting for machine to come up
	I0930 21:07:31.147184   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.147792   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.147826   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:31.147728   74769 retry.go:31] will retry after 345.517763ms: waiting for machine to come up
	I0930 21:07:31.495391   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.495819   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.495841   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:31.495786   74769 retry.go:31] will retry after 457.679924ms: waiting for machine to come up
	I0930 21:07:31.955479   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.955943   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.955974   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:31.955897   74769 retry.go:31] will retry after 562.95605ms: waiting for machine to come up
	I0930 21:07:32.520890   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:32.521339   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:32.521368   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:32.521285   74769 retry.go:31] will retry after 743.560182ms: waiting for machine to come up
	I0930 21:07:33.266407   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:33.266914   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:33.266941   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:33.266853   74769 retry.go:31] will retry after 947.444427ms: waiting for machine to come up
	I0930 21:07:34.216195   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:34.216705   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:34.216731   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:34.216659   74769 retry.go:31] will retry after 1.186059526s: waiting for machine to come up
	I0930 21:07:35.714633   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.079826486s)
	I0930 21:07:35.714667   73375 ssh_runner.go:235] Completed: which crictl: (2.079690884s)
	I0930 21:07:35.714721   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:35.714670   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0930 21:07:35.714786   73375 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0930 21:07:35.714821   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0930 21:07:35.753242   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:39.088354   73375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.335055656s)
	I0930 21:07:39.088395   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.373547177s)
	I0930 21:07:39.088422   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0930 21:07:39.088458   73375 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0930 21:07:39.088536   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0930 21:07:39.088459   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:35.404773   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:35.405334   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:35.405359   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:35.405225   74769 retry.go:31] will retry after 1.575803783s: waiting for machine to come up
	I0930 21:07:36.983196   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:36.983730   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:36.983759   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:36.983677   74769 retry.go:31] will retry after 2.020561586s: waiting for machine to come up
	I0930 21:07:39.006915   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:39.007304   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:39.007334   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:39.007269   74769 retry.go:31] will retry after 2.801421878s: waiting for machine to come up
	I0930 21:07:41.074012   73375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.985398095s)
	I0930 21:07:41.074061   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0930 21:07:41.074154   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.985588774s)
	I0930 21:07:41.074183   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0930 21:07:41.074202   73375 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0930 21:07:41.074244   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0930 21:07:41.074166   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0930 21:07:42.972016   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.897745882s)
	I0930 21:07:42.972055   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0930 21:07:42.972083   73375 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.8977868s)
	I0930 21:07:42.972110   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0930 21:07:42.972086   73375 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0930 21:07:42.972155   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0930 21:07:44.835190   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.863005436s)
	I0930 21:07:44.835237   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0930 21:07:44.835263   73375 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0930 21:07:44.835334   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0930 21:07:41.810719   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:41.811099   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:41.811117   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:41.811050   74769 retry.go:31] will retry after 2.703489988s: waiting for machine to come up
	I0930 21:07:44.515949   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:44.516329   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:44.516356   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:44.516276   74769 retry.go:31] will retry after 4.001267434s: waiting for machine to come up
	I0930 21:07:49.889033   73900 start.go:364] duration metric: took 4m7.028659379s to acquireMachinesLock for "old-k8s-version-621406"
	I0930 21:07:49.889104   73900 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:07:49.889111   73900 fix.go:54] fixHost starting: 
	I0930 21:07:49.889542   73900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:49.889600   73900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:49.906767   73900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43385
	I0930 21:07:49.907283   73900 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:49.907856   73900 main.go:141] libmachine: Using API Version  1
	I0930 21:07:49.907889   73900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:49.908203   73900 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:49.908397   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:07:49.908542   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetState
	I0930 21:07:49.910270   73900 fix.go:112] recreateIfNeeded on old-k8s-version-621406: state=Stopped err=<nil>
	I0930 21:07:49.910306   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	W0930 21:07:49.910441   73900 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:07:49.912646   73900 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-621406" ...
	I0930 21:07:45.483728   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0930 21:07:45.483778   73375 cache_images.go:123] Successfully loaded all cached images
	I0930 21:07:45.483785   73375 cache_images.go:92] duration metric: took 14.689240439s to LoadCachedImages
	I0930 21:07:45.483799   73375 kubeadm.go:934] updating node { 192.168.61.93 8443 v1.31.1 crio true true} ...
	I0930 21:07:45.483898   73375 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-997816 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.93
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-997816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 21:07:45.483977   73375 ssh_runner.go:195] Run: crio config
	I0930 21:07:45.529537   73375 cni.go:84] Creating CNI manager for ""
	I0930 21:07:45.529558   73375 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:07:45.529567   73375 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:07:45.529591   73375 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.93 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-997816 NodeName:no-preload-997816 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.93"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.93 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 21:07:45.529713   73375 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.93
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-997816"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.93
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.93"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 21:07:45.529775   73375 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 21:07:45.540251   73375 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:07:45.540323   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:07:45.549622   73375 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0930 21:07:45.565425   73375 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:07:45.580646   73375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0930 21:07:45.596216   73375 ssh_runner.go:195] Run: grep 192.168.61.93	control-plane.minikube.internal$ /etc/hosts
	I0930 21:07:45.604940   73375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.93	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:07:45.620809   73375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:45.751327   73375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:07:45.768664   73375 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816 for IP: 192.168.61.93
	I0930 21:07:45.768687   73375 certs.go:194] generating shared ca certs ...
	I0930 21:07:45.768702   73375 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:07:45.768896   73375 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:07:45.768953   73375 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:07:45.768967   73375 certs.go:256] generating profile certs ...
	I0930 21:07:45.769081   73375 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/client.key
	I0930 21:07:45.769188   73375 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/apiserver.key.c7192a03
	I0930 21:07:45.769251   73375 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/proxy-client.key
	I0930 21:07:45.769422   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:07:45.769468   73375 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:07:45.769483   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:07:45.769527   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:07:45.769569   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:07:45.769603   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:07:45.769672   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:07:45.770679   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:07:45.809391   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:07:45.837624   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:07:45.878472   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:07:45.909163   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0930 21:07:45.950655   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 21:07:45.974391   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:07:45.997258   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 21:07:46.019976   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:07:46.042828   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:07:46.066625   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:07:46.089639   73375 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:07:46.106202   73375 ssh_runner.go:195] Run: openssl version
	I0930 21:07:46.111810   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:07:46.122379   73375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:07:46.126659   73375 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:07:46.126699   73375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:07:46.132363   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 21:07:46.143074   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:07:46.154060   73375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:07:46.158542   73375 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:07:46.158602   73375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:07:46.164210   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:07:46.175160   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:07:46.186326   73375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:46.190782   73375 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:46.190856   73375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:46.196356   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 21:07:46.206957   73375 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:07:46.211650   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:07:46.217398   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:07:46.223566   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:07:46.230204   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:07:46.236404   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:07:46.242282   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0930 21:07:46.248591   73375 kubeadm.go:392] StartCluster: {Name:no-preload-997816 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-997816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:07:46.248686   73375 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:07:46.248731   73375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:07:46.292355   73375 cri.go:89] found id: ""
	I0930 21:07:46.292435   73375 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:07:46.303578   73375 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:07:46.303598   73375 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:07:46.303668   73375 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:07:46.314544   73375 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:07:46.315643   73375 kubeconfig.go:125] found "no-preload-997816" server: "https://192.168.61.93:8443"
	I0930 21:07:46.318243   73375 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:07:46.329751   73375 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.93
	I0930 21:07:46.329781   73375 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:07:46.329791   73375 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:07:46.329837   73375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:07:46.364302   73375 cri.go:89] found id: ""
	I0930 21:07:46.364392   73375 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:07:46.384616   73375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:07:46.395855   73375 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:07:46.395875   73375 kubeadm.go:157] found existing configuration files:
	
	I0930 21:07:46.395915   73375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:07:46.405860   73375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:07:46.405918   73375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:07:46.416618   73375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:07:46.426654   73375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:07:46.426712   73375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:07:46.435880   73375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:07:46.446273   73375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:07:46.446346   73375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:07:46.457099   73375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:07:46.467322   73375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:07:46.467386   73375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:07:46.477809   73375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:07:46.489024   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:46.605127   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:47.509287   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:47.708716   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:47.780830   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:47.883843   73375 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:07:47.883940   73375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:48.384688   73375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:48.884008   73375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:48.925804   73375 api_server.go:72] duration metric: took 1.041960261s to wait for apiserver process to appear ...
	I0930 21:07:48.925833   73375 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:07:48.925857   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:48.521282   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.521838   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Found IP for machine: 192.168.50.2
	I0930 21:07:48.521864   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Reserving static IP address...
	I0930 21:07:48.521876   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has current primary IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.522306   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Reserved static IP address: 192.168.50.2
	I0930 21:07:48.522349   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-291511", mac: "52:54:00:27:46:45", ip: "192.168.50.2"} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.522361   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for SSH to be available...
	I0930 21:07:48.522401   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | skip adding static IP to network mk-default-k8s-diff-port-291511 - found existing host DHCP lease matching {name: "default-k8s-diff-port-291511", mac: "52:54:00:27:46:45", ip: "192.168.50.2"}
	I0930 21:07:48.522427   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Getting to WaitForSSH function...
	I0930 21:07:48.525211   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.525641   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.525667   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.525827   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Using SSH client type: external
	I0930 21:07:48.525854   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa (-rw-------)
	I0930 21:07:48.525883   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:07:48.525900   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | About to run SSH command:
	I0930 21:07:48.525913   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | exit 0
	I0930 21:07:48.655656   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | SSH cmd err, output: <nil>: 
	I0930 21:07:48.656045   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetConfigRaw
	I0930 21:07:48.656789   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetIP
	I0930 21:07:48.659902   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.660358   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.660395   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.660586   73707 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/config.json ...
	I0930 21:07:48.660842   73707 machine.go:93] provisionDockerMachine start ...
	I0930 21:07:48.660866   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:48.661063   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:48.663782   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.664138   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.664165   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.664318   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:48.664567   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.664733   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.664868   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:48.665036   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:48.665283   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:48.665315   73707 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:07:48.776382   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:07:48.776414   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetMachineName
	I0930 21:07:48.776676   73707 buildroot.go:166] provisioning hostname "default-k8s-diff-port-291511"
	I0930 21:07:48.776711   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetMachineName
	I0930 21:07:48.776913   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:48.779952   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.780470   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.780516   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.780594   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:48.780773   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.780925   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.781080   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:48.781253   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:48.781457   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:48.781473   73707 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-291511 && echo "default-k8s-diff-port-291511" | sudo tee /etc/hostname
	I0930 21:07:48.913633   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-291511
	
	I0930 21:07:48.913724   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:48.916869   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.917280   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.917319   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.917501   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:48.917715   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.917882   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.918117   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:48.918296   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:48.918533   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:48.918562   73707 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-291511' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-291511/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-291511' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:07:49.048106   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:07:49.048141   73707 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:07:49.048182   73707 buildroot.go:174] setting up certificates
	I0930 21:07:49.048198   73707 provision.go:84] configureAuth start
	I0930 21:07:49.048212   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetMachineName
	I0930 21:07:49.048498   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetIP
	I0930 21:07:49.051299   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.051665   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.051702   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.051837   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.054211   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.054512   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.054540   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.054691   73707 provision.go:143] copyHostCerts
	I0930 21:07:49.054774   73707 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:07:49.054789   73707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:07:49.054866   73707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:07:49.054982   73707 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:07:49.054994   73707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:07:49.055021   73707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:07:49.055097   73707 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:07:49.055106   73707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:07:49.055130   73707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:07:49.055189   73707 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-291511 san=[127.0.0.1 192.168.50.2 default-k8s-diff-port-291511 localhost minikube]
	I0930 21:07:49.239713   73707 provision.go:177] copyRemoteCerts
	I0930 21:07:49.239771   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:07:49.239796   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.242146   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.242468   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.242500   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.242663   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.242834   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.242982   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.243200   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:07:49.329405   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:07:49.358036   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0930 21:07:49.385742   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 21:07:49.409436   73707 provision.go:87] duration metric: took 361.22398ms to configureAuth
	I0930 21:07:49.409493   73707 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:07:49.409696   73707 config.go:182] Loaded profile config "default-k8s-diff-port-291511": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:07:49.409798   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.412572   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.412935   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.412975   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.413266   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.413476   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.413680   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.413821   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.414009   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:49.414199   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:49.414223   73707 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:07:49.635490   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:07:49.635553   73707 machine.go:96] duration metric: took 974.696002ms to provisionDockerMachine
	I0930 21:07:49.635567   73707 start.go:293] postStartSetup for "default-k8s-diff-port-291511" (driver="kvm2")
	I0930 21:07:49.635580   73707 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:07:49.635603   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.635954   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:07:49.635989   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.638867   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.639304   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.639340   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.639413   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.639631   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.639837   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.639995   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:07:49.728224   73707 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:07:49.732558   73707 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:07:49.732590   73707 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:07:49.732679   73707 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:07:49.732769   73707 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:07:49.732869   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:07:49.742783   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:07:49.766585   73707 start.go:296] duration metric: took 131.002562ms for postStartSetup
	I0930 21:07:49.766629   73707 fix.go:56] duration metric: took 20.430290493s for fixHost
	I0930 21:07:49.766652   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.769724   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.770143   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.770172   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.770461   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.770708   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.770872   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.771099   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.771240   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:49.771616   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:49.771636   73707 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:07:49.888863   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730469.865719956
	
	I0930 21:07:49.888889   73707 fix.go:216] guest clock: 1727730469.865719956
	I0930 21:07:49.888900   73707 fix.go:229] Guest: 2024-09-30 21:07:49.865719956 +0000 UTC Remote: 2024-09-30 21:07:49.76663417 +0000 UTC m=+259.507652750 (delta=99.085786ms)
	I0930 21:07:49.888943   73707 fix.go:200] guest clock delta is within tolerance: 99.085786ms
	I0930 21:07:49.888950   73707 start.go:83] releasing machines lock for "default-k8s-diff-port-291511", held for 20.552679126s
	I0930 21:07:49.888982   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.889242   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetIP
	I0930 21:07:49.892424   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.892817   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.892854   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.893030   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.893601   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.893780   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.893852   73707 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:07:49.893932   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.893934   73707 ssh_runner.go:195] Run: cat /version.json
	I0930 21:07:49.893985   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.896733   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.896843   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.897130   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.897179   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.897216   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.897233   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.897471   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.897478   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.897679   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.897686   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.897825   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.897834   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.897954   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:07:49.898097   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:07:50.022951   73707 ssh_runner.go:195] Run: systemctl --version
	I0930 21:07:50.029177   73707 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:07:50.186430   73707 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:07:50.193205   73707 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:07:50.193277   73707 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:07:50.211330   73707 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 21:07:50.211365   73707 start.go:495] detecting cgroup driver to use...
	I0930 21:07:50.211430   73707 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:07:50.227255   73707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:07:50.241404   73707 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:07:50.241468   73707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:07:50.257879   73707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:07:50.274595   73707 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:07:50.394354   73707 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:07:50.567503   73707 docker.go:233] disabling docker service ...
	I0930 21:07:50.567582   73707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:07:50.584390   73707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:07:50.600920   73707 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:07:50.742682   73707 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:07:50.882835   73707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:07:50.898340   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:07:50.919395   73707 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 21:07:50.919464   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.930773   73707 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:07:50.930846   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.941870   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.952633   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.964281   73707 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:07:50.977410   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.988423   73707 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:51.016091   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
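Taken together, the sed edits logged above (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) leave the CRI-O drop-in roughly in the shape sketched below. This is reconstructed from the logged commands rather than dumped from the VM, and the [crio.image]/[crio.runtime] section headers are assumed from the stock CRI-O configuration layout:

	$ sudo cat /etc/crio/crio.conf.d/02-crio.conf    # reconstructed sketch, not a capture from this run
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]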
	I0930 21:07:51.027473   73707 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:07:51.037470   73707 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:07:51.037537   73707 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:07:51.056841   73707 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 21:07:51.068163   73707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:51.205357   73707 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 21:07:51.305327   73707 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:07:51.305410   73707 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:07:51.311384   73707 start.go:563] Will wait 60s for crictl version
	I0930 21:07:51.311448   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:07:51.315965   73707 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:07:51.369329   73707 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
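The crictl calls here resolve CRI-O's socket from the runtime-endpoint written to /etc/crictl.yaml earlier in this log; passing the endpoint explicitly on the command line is equivalent. A minimal sketch, run inside the guest VM:

	# equivalent to relying on the /etc/crictl.yaml written above
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images --output json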
	I0930 21:07:51.369417   73707 ssh_runner.go:195] Run: crio --version
	I0930 21:07:51.399897   73707 ssh_runner.go:195] Run: crio --version
	I0930 21:07:51.431075   73707 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 21:07:49.914747   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .Start
	I0930 21:07:49.914948   73900 main.go:141] libmachine: (old-k8s-version-621406) Ensuring networks are active...
	I0930 21:07:49.915796   73900 main.go:141] libmachine: (old-k8s-version-621406) Ensuring network default is active
	I0930 21:07:49.916225   73900 main.go:141] libmachine: (old-k8s-version-621406) Ensuring network mk-old-k8s-version-621406 is active
	I0930 21:07:49.916890   73900 main.go:141] libmachine: (old-k8s-version-621406) Getting domain xml...
	I0930 21:07:49.917688   73900 main.go:141] libmachine: (old-k8s-version-621406) Creating domain...
	I0930 21:07:51.277867   73900 main.go:141] libmachine: (old-k8s-version-621406) Waiting to get IP...
	I0930 21:07:51.279001   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:51.279451   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:51.279552   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:51.279437   74917 retry.go:31] will retry after 307.582619ms: waiting for machine to come up
	I0930 21:07:51.589030   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:51.589414   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:51.589445   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:51.589368   74917 retry.go:31] will retry after 370.683214ms: waiting for machine to come up
	I0930 21:07:51.961914   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:51.962474   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:51.962511   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:51.962415   74917 retry.go:31] will retry after 428.703419ms: waiting for machine to come up
	I0930 21:07:52.393154   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:52.393682   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:52.393750   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:52.393673   74917 retry.go:31] will retry after 514.254023ms: waiting for machine to come up
	I0930 21:07:52.334804   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:07:52.334846   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:07:52.334863   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:52.377601   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:07:52.377632   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:07:52.426784   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:52.473771   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:07:52.473811   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:07:52.926391   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:52.945122   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:07:52.945154   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:07:53.426295   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:53.434429   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:07:53.434464   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:07:53.926642   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:53.931501   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 200:
	ok
	I0930 21:07:53.940069   73375 api_server.go:141] control plane version: v1.31.1
	I0930 21:07:53.940104   73375 api_server.go:131] duration metric: took 5.014262318s to wait for apiserver health ...
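The wait above is a plain polling loop against the apiserver's /healthz endpoint until it returns 200; the early 403s are that same endpoint rejecting the anonymous user, and the 500s carry the per-hook [+]/[-] list quoted above. A rough manual equivalent, a sketch run from the host against this cluster's address, would be:

	# poll until /healthz returns 200; -k skips TLS verification for brevity
	until curl -sk -o /dev/null -w '%{http_code}\n' https://192.168.61.93:8443/healthz | grep -q '^200$'; do
	  sleep 0.5
	done
	curl -sk 'https://192.168.61.93:8443/healthz?verbose'   # prints the per-check [+]/[-] list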
	I0930 21:07:53.940115   73375 cni.go:84] Creating CNI manager for ""
	I0930 21:07:53.940123   73375 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:07:53.941879   73375 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 21:07:53.943335   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:07:53.959585   73375 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
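The 496-byte conflist copied here is minikube's default bridge CNI configuration. Its exact contents are not printed in this log; a typical bridge-plus-portmap conflist of that shape looks roughly like the hypothetical sketch below (bridge name, pod subnet and CNI version are illustrative assumptions, not values taken from this run):

	$ cat /etc/cni/net.d/1-k8s.conflist    # hypothetical sketch
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "ranges": [[{ "subnet": "10.244.0.0/16" }]],
	        "routes": [{ "dst": "0.0.0.0/0" }]
	      }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}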
	I0930 21:07:53.996310   73375 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:07:54.010070   73375 system_pods.go:59] 8 kube-system pods found
	I0930 21:07:54.010129   73375 system_pods.go:61] "coredns-7c65d6cfc9-jg8ph" [46ba2867-485a-4b67-af4b-4de2c607d172] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:07:54.010142   73375 system_pods.go:61] "etcd-no-preload-997816" [1def50bb-1f1b-4d25-b797-38d5b782a674] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0930 21:07:54.010157   73375 system_pods.go:61] "kube-apiserver-no-preload-997816" [67313588-adcb-4d3f-ba8a-4e7a1ea5127b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0930 21:07:54.010174   73375 system_pods.go:61] "kube-controller-manager-no-preload-997816" [b471888b-d4e6-4768-a246-f234ffcbf1c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0930 21:07:54.010186   73375 system_pods.go:61] "kube-proxy-klcv8" [133bcd7f-667d-4969-b063-d33e2c8eed0f] Running
	I0930 21:07:54.010200   73375 system_pods.go:61] "kube-scheduler-no-preload-997816" [130a7a05-0889-4562-afc6-bee3ba4970a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0930 21:07:54.010212   73375 system_pods.go:61] "metrics-server-6867b74b74-c2wpn" [2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:07:54.010223   73375 system_pods.go:61] "storage-provisioner" [01617edf-b831-48d3-9002-279b64f6389c] Running
	I0930 21:07:54.010232   73375 system_pods.go:74] duration metric: took 13.897885ms to wait for pod list to return data ...
	I0930 21:07:54.010244   73375 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:07:54.019651   73375 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:07:54.019683   73375 node_conditions.go:123] node cpu capacity is 2
	I0930 21:07:54.019697   73375 node_conditions.go:105] duration metric: took 9.446744ms to run NodePressure ...
	I0930 21:07:54.019719   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:54.314348   73375 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0930 21:07:54.319583   73375 kubeadm.go:739] kubelet initialised
	I0930 21:07:54.319613   73375 kubeadm.go:740] duration metric: took 5.232567ms waiting for restarted kubelet to initialise ...
	I0930 21:07:54.319625   73375 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:07:54.326866   73375 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.333592   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.333628   73375 pod_ready.go:82] duration metric: took 6.72431ms for pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.333640   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.333651   73375 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.340155   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "etcd-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.340194   73375 pod_ready.go:82] duration metric: took 6.533127ms for pod "etcd-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.340208   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "etcd-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.340216   73375 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.346494   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "kube-apiserver-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.346530   73375 pod_ready.go:82] duration metric: took 6.304143ms for pod "kube-apiserver-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.346542   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "kube-apiserver-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.346551   73375 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.403699   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.403731   73375 pod_ready.go:82] duration metric: took 57.168471ms for pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.403743   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.403752   73375 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-klcv8" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.800372   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "kube-proxy-klcv8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.800410   73375 pod_ready.go:82] duration metric: took 396.646883ms for pod "kube-proxy-klcv8" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.800423   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "kube-proxy-klcv8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.800432   73375 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:51.432761   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetIP
	I0930 21:07:51.436278   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:51.436659   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:51.436700   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:51.436931   73707 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0930 21:07:51.441356   73707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:07:51.454358   73707 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-291511 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-291511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:07:51.454484   73707 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 21:07:51.454547   73707 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:07:51.502072   73707 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 21:07:51.502143   73707 ssh_runner.go:195] Run: which lz4
	I0930 21:07:51.506458   73707 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 21:07:51.510723   73707 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 21:07:51.510756   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 21:07:52.792488   73707 crio.go:462] duration metric: took 1.286075452s to copy over tarball
	I0930 21:07:52.792580   73707 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 21:07:55.207282   73707 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.414661305s)
	I0930 21:07:55.207314   73707 crio.go:469] duration metric: took 2.414793514s to extract the tarball
	I0930 21:07:55.207321   73707 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 21:07:55.244001   73707 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:07:55.287097   73707 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 21:07:55.287124   73707 cache_images.go:84] Images are preloaded, skipping loading
	I0930 21:07:55.287133   73707 kubeadm.go:934] updating node { 192.168.50.2 8444 v1.31.1 crio true true} ...
	I0930 21:07:55.287277   73707 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-291511 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-291511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 21:07:55.287384   73707 ssh_runner.go:195] Run: crio config
	I0930 21:07:55.200512   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "kube-scheduler-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:55.200559   73375 pod_ready.go:82] duration metric: took 400.11341ms for pod "kube-scheduler-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:55.200569   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "kube-scheduler-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:55.200577   73375 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:55.601008   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:55.601042   73375 pod_ready.go:82] duration metric: took 400.453601ms for pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:55.601055   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:55.601065   73375 pod_ready.go:39] duration metric: took 1.281429189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:07:55.601086   73375 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 21:07:55.617767   73375 ops.go:34] apiserver oom_adj: -16
	I0930 21:07:55.617791   73375 kubeadm.go:597] duration metric: took 9.314187459s to restartPrimaryControlPlane
	I0930 21:07:55.617803   73375 kubeadm.go:394] duration metric: took 9.369220314s to StartCluster
	I0930 21:07:55.617824   73375 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:07:55.617913   73375 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:07:55.619455   73375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:07:55.619760   73375 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 21:07:55.619842   73375 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 21:07:55.619959   73375 addons.go:69] Setting storage-provisioner=true in profile "no-preload-997816"
	I0930 21:07:55.619984   73375 addons.go:234] Setting addon storage-provisioner=true in "no-preload-997816"
	I0930 21:07:55.619974   73375 addons.go:69] Setting default-storageclass=true in profile "no-preload-997816"
	I0930 21:07:55.620003   73375 addons.go:69] Setting metrics-server=true in profile "no-preload-997816"
	I0930 21:07:55.620009   73375 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-997816"
	I0930 21:07:55.620020   73375 addons.go:234] Setting addon metrics-server=true in "no-preload-997816"
	W0930 21:07:55.620031   73375 addons.go:243] addon metrics-server should already be in state true
	I0930 21:07:55.620050   73375 config.go:182] Loaded profile config "no-preload-997816": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:07:55.620061   73375 host.go:66] Checking if "no-preload-997816" exists ...
	W0930 21:07:55.619994   73375 addons.go:243] addon storage-provisioner should already be in state true
	I0930 21:07:55.620124   73375 host.go:66] Checking if "no-preload-997816" exists ...
	I0930 21:07:55.620420   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.620459   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.620494   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.620535   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.620593   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.620634   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.621682   73375 out.go:177] * Verifying Kubernetes components...
	I0930 21:07:55.623102   73375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:55.643690   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35581
	I0930 21:07:55.643895   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35545
	I0930 21:07:55.644411   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.644553   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.644968   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.644981   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.645072   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.645078   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.645314   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.645502   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.645732   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:55.645777   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.645812   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.649244   73375 addons.go:234] Setting addon default-storageclass=true in "no-preload-997816"
	W0930 21:07:55.649262   73375 addons.go:243] addon default-storageclass should already be in state true
	I0930 21:07:55.649283   73375 host.go:66] Checking if "no-preload-997816" exists ...
	I0930 21:07:55.649524   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.649548   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.671077   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42635
	I0930 21:07:55.671558   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.672193   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.672212   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.672505   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45163
	I0930 21:07:55.672736   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.672808   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44481
	I0930 21:07:55.673354   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.673396   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.673920   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.673926   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.674528   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.674545   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.674974   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.675624   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.675658   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.676078   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.676095   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.676547   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.676724   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:55.679115   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:55.681410   73375 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:55.688953   73375 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:07:55.688981   73375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 21:07:55.689015   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:55.693338   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.693996   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:55.694023   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.694212   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:55.694344   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:55.694444   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:55.694545   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:55.696037   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46075
	I0930 21:07:55.696535   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.697185   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.697207   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.697567   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.697772   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:55.699797   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:55.700998   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I0930 21:07:55.701429   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.702094   73375 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0930 21:07:52.909622   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:52.910169   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:52.910202   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:52.910132   74917 retry.go:31] will retry after 605.019848ms: waiting for machine to come up
	I0930 21:07:53.517276   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:53.517911   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:53.517943   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:53.517858   74917 retry.go:31] will retry after 856.018614ms: waiting for machine to come up
	I0930 21:07:54.376343   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:54.376838   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:54.376862   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:54.376794   74917 retry.go:31] will retry after 740.749778ms: waiting for machine to come up
	I0930 21:07:55.119090   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:55.119631   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:55.119660   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:55.119583   74917 retry.go:31] will retry after 1.444139076s: waiting for machine to come up
	I0930 21:07:56.566261   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:56.566744   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:56.566771   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:56.566695   74917 retry.go:31] will retry after 1.681362023s: waiting for machine to come up
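
The retry lines above show the driver polling libvirt for the machine's DHCP lease with a growing wait between attempts. The following is a minimal, illustrative Go sketch of that polling pattern only; lookupIP, the timings, and the messages are assumptions for illustration and are not minikube's actual retry helper.

    // Sketch: poll for a condition with a growing delay, as in the
    // "will retry after ..." lines above. lookupIP is a stand-in for
    // querying the hypervisor for the machine's IP address.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    func lookupIP() (string, error) {
    	// placeholder: in the real flow this asks libvirt for the DHCP lease
    	return "", errors.New("unable to find current IP address")
    }

    func waitForIP(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 500 * time.Millisecond
    	for attempt := 1; time.Now().Before(deadline); attempt++ {
    		ip, err := lookupIP()
    		if err == nil {
    			return ip, nil
    		}
    		fmt.Printf("attempt %d: %v, retrying in %v\n", attempt, err, delay)
    		time.Sleep(delay)
    		delay += delay / 2 // grow the wait, roughly like the retries above
    	}
    	return "", errors.New("timed out waiting for machine to come up")
    }

    func main() {
    	if ip, err := waitForIP(5 * time.Second); err != nil {
    		fmt.Println(err)
    	} else {
    		fmt.Println("machine IP:", ip)
    	}
    }
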
	I0930 21:07:55.703687   73375 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 21:07:55.703709   73375 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 21:07:55.703736   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:55.703788   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.703816   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.704295   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.704553   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:55.707029   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:55.707365   73375 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 21:07:55.707385   73375 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 21:07:55.707408   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:55.708091   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.708606   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:55.708629   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.709024   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:55.709237   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:55.709388   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:55.709573   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:55.711123   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.711607   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:55.711631   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.711987   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:55.712178   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:55.712318   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:55.712469   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:55.888447   73375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:07:55.912060   73375 node_ready.go:35] waiting up to 6m0s for node "no-preload-997816" to be "Ready" ...
	I0930 21:07:56.010903   73375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 21:07:56.012576   73375 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 21:07:56.012601   73375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0930 21:07:56.038592   73375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:07:56.055481   73375 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 21:07:56.055513   73375 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 21:07:56.131820   73375 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:07:56.131844   73375 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 21:07:56.213605   73375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:07:57.078385   73375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.067447636s)
	I0930 21:07:57.078439   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:57.078451   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:57.078770   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:57.078823   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:57.078836   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:57.078845   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:57.078793   73375 main.go:141] libmachine: (no-preload-997816) DBG | Closing plugin on server side
	I0930 21:07:57.079118   73375 main.go:141] libmachine: (no-preload-997816) DBG | Closing plugin on server side
	I0930 21:07:57.079149   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:57.079157   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:57.672706   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:57.672737   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:57.673053   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:57.673072   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:58.301165   73375 node_ready.go:53] node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:59.072488   73375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.858837368s)
	I0930 21:07:59.072565   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:59.072582   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:59.072921   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:59.072986   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:59.073029   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:59.073038   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:59.073221   73375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.034599023s)
	I0930 21:07:59.073271   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:59.073344   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:59.073383   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:59.073397   73375 addons.go:475] Verifying addon metrics-server=true in "no-preload-997816"
	I0930 21:07:59.073347   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:59.073754   73375 main.go:141] libmachine: (no-preload-997816) DBG | Closing plugin on server side
	I0930 21:07:59.073804   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:59.073819   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:59.073834   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:59.073846   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:59.075323   73375 main.go:141] libmachine: (no-preload-997816) DBG | Closing plugin on server side
	I0930 21:07:59.075329   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:59.075353   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:59.077687   73375 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0930 21:07:59.079278   73375 addons.go:510] duration metric: took 3.459453938s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
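
The addon-enable block above copies manifests into /etc/kubernetes/addons and applies them with kubectl under a fixed KUBECONFIG. Below is a minimal Go sketch of just that apply step, assuming a local kubectl binary and the paths taken from the log; in the real run minikube executes this via sudo over SSH inside the guest.

    // Sketch: run `kubectl apply -f ...` for a set of addon manifests
    // against a specific kubeconfig. Paths are the ones from the log above
    // and are assumptions for any other environment.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func applyManifests(kubectl, kubeconfig string, manifests []string) error {
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command(kubectl, args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	return cmd.Run()
    }

    func main() {
    	err := applyManifests(
    		"kubectl",
    		"/var/lib/minikube/kubeconfig",
    		[]string{
    			"/etc/kubernetes/addons/storage-provisioner.yaml",
    			"/etc/kubernetes/addons/storageclass.yaml",
    		},
    	)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "apply failed:", err)
    	}
    }
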
	I0930 21:07:55.346656   73707 cni.go:84] Creating CNI manager for ""
	I0930 21:07:55.346679   73707 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:07:55.346688   73707 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:07:55.346718   73707 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.2 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-291511 NodeName:default-k8s-diff-port-291511 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 21:07:55.346847   73707 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-291511"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 21:07:55.346903   73707 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 21:07:55.356645   73707 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:07:55.356708   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:07:55.366457   73707 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0930 21:07:55.384639   73707 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:07:55.403208   73707 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0930 21:07:55.421878   73707 ssh_runner.go:195] Run: grep 192.168.50.2	control-plane.minikube.internal$ /etc/hosts
	I0930 21:07:55.425803   73707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:07:55.439370   73707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:55.553575   73707 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:07:55.570754   73707 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511 for IP: 192.168.50.2
	I0930 21:07:55.570787   73707 certs.go:194] generating shared ca certs ...
	I0930 21:07:55.570808   73707 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:07:55.571011   73707 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:07:55.571067   73707 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:07:55.571083   73707 certs.go:256] generating profile certs ...
	I0930 21:07:55.571178   73707 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/client.key
	I0930 21:07:55.571270   73707 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/apiserver.key.2e3224d9
	I0930 21:07:55.571326   73707 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/proxy-client.key
	I0930 21:07:55.571464   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:07:55.571510   73707 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:07:55.571522   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:07:55.571587   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:07:55.571627   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:07:55.571655   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:07:55.571719   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:07:55.572367   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:07:55.606278   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:07:55.645629   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:07:55.690514   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:07:55.737445   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0930 21:07:55.773656   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 21:07:55.804015   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:07:55.830210   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 21:07:55.857601   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:07:55.887765   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:07:55.922053   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:07:55.951040   73707 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:07:55.969579   73707 ssh_runner.go:195] Run: openssl version
	I0930 21:07:55.975576   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:07:55.987255   73707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:07:55.993657   73707 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:07:55.993723   73707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:07:56.001878   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:07:56.017528   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:07:56.030398   73707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:56.035552   73707 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:56.035625   73707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:56.043878   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 21:07:56.055384   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:07:56.066808   73707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:07:56.073099   73707 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:07:56.073164   73707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:07:56.081343   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 21:07:56.096669   73707 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:07:56.102635   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:07:56.110805   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:07:56.118533   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:07:56.125800   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:07:56.133985   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:07:56.142109   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
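
The checks above run `openssl x509 -noout -in <cert> -checkend 86400`, which succeeds only if the certificate is still valid 24 hours from now. A minimal Go sketch of the same test, reading a PEM file given on the command line (the path is an assumption), is:

    // Sketch: report whether a PEM-encoded certificate expires within 24h,
    // equivalent in spirit to `openssl x509 -checkend 86400`.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	if len(os.Args) < 2 {
    		fmt.Fprintln(os.Stderr, "usage: checkcert <cert.pem>")
    		os.Exit(2)
    	}
    	soon, err := expiresWithin(os.Args[1], 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(2)
    	}
    	if soon {
    		fmt.Println("certificate will expire within 24h")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least another 24h")
    }
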
	I0930 21:07:56.150433   73707 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-291511 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-291511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:07:56.150538   73707 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:07:56.150608   73707 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:07:56.197936   73707 cri.go:89] found id: ""
	I0930 21:07:56.198016   73707 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:07:56.208133   73707 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:07:56.208155   73707 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:07:56.208204   73707 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:07:56.218880   73707 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:07:56.220322   73707 kubeconfig.go:125] found "default-k8s-diff-port-291511" server: "https://192.168.50.2:8444"
	I0930 21:07:56.223557   73707 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:07:56.233844   73707 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.2
	I0930 21:07:56.233876   73707 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:07:56.233889   73707 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:07:56.233970   73707 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:07:56.280042   73707 cri.go:89] found id: ""
	I0930 21:07:56.280129   73707 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:07:56.304291   73707 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:07:56.317987   73707 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:07:56.318012   73707 kubeadm.go:157] found existing configuration files:
	
	I0930 21:07:56.318076   73707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0930 21:07:56.331377   73707 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:07:56.331448   73707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:07:56.342380   73707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0930 21:07:56.354949   73707 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:07:56.355030   73707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:07:56.368385   73707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0930 21:07:56.378798   73707 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:07:56.378883   73707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:07:56.390167   73707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0930 21:07:56.400338   73707 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:07:56.400413   73707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:07:56.410735   73707 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:07:56.426910   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:56.557126   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:57.682738   73707 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.125574645s)
	I0930 21:07:57.682777   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:57.908684   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:57.983925   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:58.088822   73707 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:07:58.088930   73707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:58.589565   73707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:59.089483   73707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:59.110240   73707 api_server.go:72] duration metric: took 1.021416929s to wait for apiserver process to appear ...
	I0930 21:07:59.110279   73707 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:07:59.110328   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:07:59.110843   73707 api_server.go:269] stopped: https://192.168.50.2:8444/healthz: Get "https://192.168.50.2:8444/healthz": dial tcp 192.168.50.2:8444: connect: connection refused
	I0930 21:07:59.611045   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:07:58.250468   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:58.251041   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:58.251062   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:58.250979   74917 retry.go:31] will retry after 2.260492343s: waiting for machine to come up
	I0930 21:08:00.513613   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:00.514129   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:08:00.514194   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:08:00.514117   74917 retry.go:31] will retry after 2.449694064s: waiting for machine to come up
	I0930 21:08:02.200888   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:08:02.200918   73707 api_server.go:103] status: https://192.168.50.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:08:02.200930   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:08:02.240477   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:08:02.240513   73707 api_server.go:103] status: https://192.168.50.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:08:02.611111   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:08:02.615548   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:02.615578   73707 api_server.go:103] status: https://192.168.50.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:03.111216   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:08:03.118078   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:03.118102   73707 api_server.go:103] status: https://192.168.50.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:03.610614   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:08:03.615203   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 200:
	ok
	I0930 21:08:03.621652   73707 api_server.go:141] control plane version: v1.31.1
	I0930 21:08:03.621680   73707 api_server.go:131] duration metric: took 4.511393989s to wait for apiserver health ...
	I0930 21:08:03.621689   73707 cni.go:84] Creating CNI manager for ""
	I0930 21:08:03.621694   73707 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:03.624026   73707 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
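
The apiserver wait above polls https://192.168.50.2:8444/healthz, tolerating 403 and 500 responses until it gets a 200. A minimal Go sketch of that kind of probe, assuming only reachability matters (so TLS verification is skipped) and using the URL from this log:

    // Sketch: poll an apiserver /healthz endpoint until it returns 200 or a
    // deadline passes, printing intermediate failures like the log above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		} else {
    			fmt.Println("healthz not reachable yet:", err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.50.2:8444/healthz", 30*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
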
	I0930 21:08:00.416356   73375 node_ready.go:53] node "no-preload-997816" has status "Ready":"False"
	I0930 21:08:02.416469   73375 node_ready.go:53] node "no-preload-997816" has status "Ready":"False"
	I0930 21:08:02.916643   73375 node_ready.go:49] node "no-preload-997816" has status "Ready":"True"
	I0930 21:08:02.916668   73375 node_ready.go:38] duration metric: took 7.004576501s for node "no-preload-997816" to be "Ready" ...
	I0930 21:08:02.916679   73375 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:02.922833   73375 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:02.928873   73375 pod_ready.go:93] pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:02.928895   73375 pod_ready.go:82] duration metric: took 6.034388ms for pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:02.928904   73375 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.934668   73375 pod_ready.go:103] pod "etcd-no-preload-997816" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:03.625416   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:08:03.640241   73707 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 21:08:03.664231   73707 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:08:03.679372   73707 system_pods.go:59] 8 kube-system pods found
	I0930 21:08:03.679409   73707 system_pods.go:61] "coredns-7c65d6cfc9-hdjjq" [5672cd58-4d3f-409e-b279-f4027fe09aea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:08:03.679425   73707 system_pods.go:61] "etcd-default-k8s-diff-port-291511" [228b61a2-a110-4029-96e5-950e44f5290f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0930 21:08:03.679435   73707 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-291511" [a6991ee1-6c61-49b5-adb5-fb6175386bfe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0930 21:08:03.679447   73707 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-291511" [4ba3f2a2-ac38-4483-bbd0-f21d934d97d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0930 21:08:03.679456   73707 system_pods.go:61] "kube-proxy-kwp22" [87e5295f-3aaa-4222-a61a-942354f79f9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0930 21:08:03.679466   73707 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-291511" [b03fc09c-ddee-4593-9be5-8117892932f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0930 21:08:03.679472   73707 system_pods.go:61] "metrics-server-6867b74b74-txb2j" [6f0ec8d2-5528-4f70-807c-42cbabae23bb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:08:03.679482   73707 system_pods.go:61] "storage-provisioner" [32053345-1ff9-45b1-aa70-e746926b305d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0930 21:08:03.679490   73707 system_pods.go:74] duration metric: took 15.234407ms to wait for pod list to return data ...
	I0930 21:08:03.679509   73707 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:08:03.698332   73707 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:08:03.698363   73707 node_conditions.go:123] node cpu capacity is 2
	I0930 21:08:03.698374   73707 node_conditions.go:105] duration metric: took 18.857709ms to run NodePressure ...
	I0930 21:08:03.698394   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:03.968643   73707 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0930 21:08:03.974075   73707 kubeadm.go:739] kubelet initialised
	I0930 21:08:03.974098   73707 kubeadm.go:740] duration metric: took 5.424573ms waiting for restarted kubelet to initialise ...
	I0930 21:08:03.974105   73707 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:03.982157   73707 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:03.989298   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:03.989329   73707 pod_ready.go:82] duration metric: took 7.140381ms for pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:03.989338   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:03.989345   73707 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:03.995739   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:03.995773   73707 pod_ready.go:82] duration metric: took 6.418854ms for pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:03.995787   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:03.995797   73707 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.002071   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.002093   73707 pod_ready.go:82] duration metric: took 6.287919ms for pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:04.002104   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.002110   73707 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.071732   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.071760   73707 pod_ready.go:82] duration metric: took 69.643681ms for pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:04.071771   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.071777   73707 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kwp22" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.468580   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "kube-proxy-kwp22" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.468605   73707 pod_ready.go:82] duration metric: took 396.820558ms for pod "kube-proxy-kwp22" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:04.468614   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "kube-proxy-kwp22" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.468620   73707 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.868042   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.868067   73707 pod_ready.go:82] duration metric: took 399.438278ms for pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:04.868078   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.868085   73707 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:05.267893   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:05.267925   73707 pod_ready.go:82] duration metric: took 399.831615ms for pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:05.267937   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:05.267945   73707 pod_ready.go:39] duration metric: took 1.293832472s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
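The wait loop above polls each system-critical pod until its Ready condition is True, skipping pods whose node is itself not "Ready". A minimal client-go sketch of that kind of readiness poll follows, assuming a local kubeconfig at the default path and using one of the label selectors from the log (k8s-app=kube-dns); it is an illustration, not minikube's pod_ready.go implementation.

// podready_sketch.go: poll kube-system pods with a label until all report Ready.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil {
			allReady := len(pods.Items) > 0
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					allReady = false
				}
			}
			if allReady {
				fmt.Println("all matching pods are Ready")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pods to be Ready")
}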
	I0930 21:08:05.267960   73707 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 21:08:05.282162   73707 ops.go:34] apiserver oom_adj: -16
	I0930 21:08:05.282188   73707 kubeadm.go:597] duration metric: took 9.074027172s to restartPrimaryControlPlane
	I0930 21:08:05.282199   73707 kubeadm.go:394] duration metric: took 9.131777336s to StartCluster
	I0930 21:08:05.282216   73707 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:05.282338   73707 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:08:05.283862   73707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:05.284135   73707 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 21:08:05.284201   73707 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 21:08:05.284287   73707 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-291511"
	I0930 21:08:05.284305   73707 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-291511"
	W0930 21:08:05.284313   73707 addons.go:243] addon storage-provisioner should already be in state true
	I0930 21:08:05.284340   73707 host.go:66] Checking if "default-k8s-diff-port-291511" exists ...
	I0930 21:08:05.284339   73707 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-291511"
	I0930 21:08:05.284385   73707 config.go:182] Loaded profile config "default-k8s-diff-port-291511": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:08:05.284399   73707 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-291511"
	I0930 21:08:05.284359   73707 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-291511"
	I0930 21:08:05.284432   73707 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-291511"
	W0930 21:08:05.284448   73707 addons.go:243] addon metrics-server should already be in state true
	I0930 21:08:05.284486   73707 host.go:66] Checking if "default-k8s-diff-port-291511" exists ...
	I0930 21:08:05.284739   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.284760   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.284784   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.284794   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.284890   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.284931   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.286020   73707 out.go:177] * Verifying Kubernetes components...
	I0930 21:08:05.287268   73707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:05.302045   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39289
	I0930 21:08:05.302587   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.303190   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.303219   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.303631   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.304213   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.304258   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.304484   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41129
	I0930 21:08:05.304676   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39211
	I0930 21:08:05.304884   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.305175   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.305353   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.305377   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.305642   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.305660   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.305724   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.305933   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:08:05.306016   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.306580   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.306623   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.309757   73707 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-291511"
	W0930 21:08:05.309778   73707 addons.go:243] addon default-storageclass should already be in state true
	I0930 21:08:05.309805   73707 host.go:66] Checking if "default-k8s-diff-port-291511" exists ...
	I0930 21:08:05.310163   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.310208   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.320335   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43189
	I0930 21:08:05.320928   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.321496   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.321520   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.321922   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.322082   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:08:05.324111   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:08:05.325867   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42389
	I0930 21:08:05.325879   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37397
	I0930 21:08:05.326252   73707 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0930 21:08:05.326337   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.326280   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.326847   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.326862   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.326982   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.326999   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.327239   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.327313   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.327467   73707 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 21:08:05.327485   73707 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 21:08:05.327507   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:08:05.327597   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:08:05.327778   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.327806   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.329862   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:08:05.331454   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.331654   73707 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:05.331959   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:08:05.331996   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.332184   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:08:05.332355   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:08:05.332577   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:08:05.332699   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:08:05.332956   73707 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:08:05.332972   73707 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 21:08:05.332990   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:08:05.336234   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.336634   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:08:05.336661   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.336885   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:08:05.337134   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:08:05.337271   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:08:05.337447   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:08:05.345334   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34613
	I0930 21:08:05.345908   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.346393   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.346424   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.346749   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.346887   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:08:05.348836   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:08:05.349033   73707 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 21:08:05.349048   73707 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 21:08:05.349067   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:08:05.351835   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.352222   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:08:05.352277   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.352401   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:08:05.352644   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:08:05.352786   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:08:05.352886   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:08:05.475274   73707 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:08:05.496035   73707 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-291511" to be "Ready" ...
	I0930 21:08:05.564715   73707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:08:05.574981   73707 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 21:08:05.575006   73707 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0930 21:08:05.613799   73707 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 21:08:05.613822   73707 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 21:08:05.618503   73707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 21:08:05.689563   73707 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:08:05.689588   73707 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 21:08:05.769327   73707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:08:06.831657   73707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.266911261s)
	I0930 21:08:06.831717   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.831727   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.831735   73707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.213199657s)
	I0930 21:08:06.831780   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.831797   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.832054   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.832071   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.832079   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.832086   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.832146   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.832164   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.832182   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.832195   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.832203   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.832291   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.832305   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.832316   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.832477   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.832483   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.832512   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.838509   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.838534   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.838786   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.838801   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.838806   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.956747   73707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.187373699s)
	I0930 21:08:06.956803   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.956819   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.957097   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.958516   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.958531   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.958542   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.958548   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.958842   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.958863   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.958873   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.958875   73707 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-291511"
	I0930 21:08:06.961299   73707 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
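The addon step above copies the metrics-server and storage manifests to /etc/kubernetes/addons on the node and then runs kubectl apply over SSH. A rough golang.org/x/crypto/ssh sketch of the "apply over SSH" half is below; the private-key path is a placeholder, while the user, node IP, kubeconfig path, and kubectl path are taken from the log. This is a sketch, not minikube's ssh_runner implementation.

// sshapply_sketch.go: run a remote "kubectl apply" on the node over SSH.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/path/to/machines/id_rsa") // placeholder key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
	}
	client, err := ssh.Dial("tcp", "192.168.50.2:22", cfg) // node IP from the log
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
		"/var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-server-deployment.yaml"
	out, err := session.CombinedOutput(cmd)
	fmt.Printf("%s\n", out)
	if err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
	}
}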
	I0930 21:08:02.965767   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:02.966135   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:08:02.966157   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:08:02.966086   74917 retry.go:31] will retry after 2.951226221s: waiting for machine to come up
	I0930 21:08:05.919389   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:05.919894   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:08:05.919937   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:08:05.919827   74917 retry.go:31] will retry after 2.747969391s: waiting for machine to come up
	I0930 21:08:09.916514   73256 start.go:364] duration metric: took 52.875691449s to acquireMachinesLock for "embed-certs-256103"
	I0930 21:08:09.916583   73256 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:08:09.916592   73256 fix.go:54] fixHost starting: 
	I0930 21:08:09.916972   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:09.917000   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:09.935009   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42043
	I0930 21:08:09.935493   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:09.936052   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:08:09.936073   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:09.936443   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:09.936617   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:09.936762   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:08:09.938608   73256 fix.go:112] recreateIfNeeded on embed-certs-256103: state=Stopped err=<nil>
	I0930 21:08:09.938639   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	W0930 21:08:09.938811   73256 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:08:09.940789   73256 out.go:177] * Restarting existing kvm2 VM for "embed-certs-256103" ...
	I0930 21:08:05.936626   73375 pod_ready.go:93] pod "etcd-no-preload-997816" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:05.936660   73375 pod_ready.go:82] duration metric: took 3.007747597s for pod "etcd-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:05.936674   73375 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:05.942154   73375 pod_ready.go:93] pod "kube-apiserver-no-preload-997816" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:05.942196   73375 pod_ready.go:82] duration metric: took 5.502965ms for pod "kube-apiserver-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:05.942209   73375 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.949366   73375 pod_ready.go:93] pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:06.949402   73375 pod_ready.go:82] duration metric: took 1.007183809s for pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.949413   73375 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-klcv8" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.955060   73375 pod_ready.go:93] pod "kube-proxy-klcv8" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:06.955088   73375 pod_ready.go:82] duration metric: took 5.667172ms for pod "kube-proxy-klcv8" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.955100   73375 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.961684   73375 pod_ready.go:93] pod "kube-scheduler-no-preload-997816" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:06.961706   73375 pod_ready.go:82] duration metric: took 6.597856ms for pod "kube-scheduler-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.961718   73375 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:08.967525   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:06.962594   73707 addons.go:510] duration metric: took 1.678396512s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0930 21:08:07.499805   73707 node_ready.go:53] node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:09.500771   73707 node_ready.go:53] node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:08.671179   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.671686   73900 main.go:141] libmachine: (old-k8s-version-621406) Found IP for machine: 192.168.72.159
	I0930 21:08:08.671711   73900 main.go:141] libmachine: (old-k8s-version-621406) Reserving static IP address...
	I0930 21:08:08.671729   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has current primary IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.672178   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "old-k8s-version-621406", mac: "52:54:00:9b:e3:ab", ip: "192.168.72.159"} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.672220   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | skip adding static IP to network mk-old-k8s-version-621406 - found existing host DHCP lease matching {name: "old-k8s-version-621406", mac: "52:54:00:9b:e3:ab", ip: "192.168.72.159"}
	I0930 21:08:08.672231   73900 main.go:141] libmachine: (old-k8s-version-621406) Reserved static IP address: 192.168.72.159
	I0930 21:08:08.672246   73900 main.go:141] libmachine: (old-k8s-version-621406) Waiting for SSH to be available...
	I0930 21:08:08.672254   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | Getting to WaitForSSH function...
	I0930 21:08:08.674566   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.674931   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.674969   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.675128   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | Using SSH client type: external
	I0930 21:08:08.675170   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa (-rw-------)
	I0930 21:08:08.675212   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:08:08.675229   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | About to run SSH command:
	I0930 21:08:08.675244   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | exit 0
	I0930 21:08:08.799368   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | SSH cmd err, output: <nil>: 
	I0930 21:08:08.799751   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetConfigRaw
	I0930 21:08:08.800421   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:08.803151   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.803596   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.803620   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.803922   73900 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/config.json ...
	I0930 21:08:08.804195   73900 machine.go:93] provisionDockerMachine start ...
	I0930 21:08:08.804246   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:08.804502   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:08.806822   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.807240   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.807284   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.807521   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:08.807735   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.807890   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.808077   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:08.808239   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:08.808480   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:08.808493   73900 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:08:08.912058   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:08:08.912135   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 21:08:08.912407   73900 buildroot.go:166] provisioning hostname "old-k8s-version-621406"
	I0930 21:08:08.912432   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 21:08:08.912662   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:08.915366   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.915722   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.915750   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.915892   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:08.916107   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.916330   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.916492   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:08.916673   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:08.916932   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:08.916957   73900 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-621406 && echo "old-k8s-version-621406" | sudo tee /etc/hostname
	I0930 21:08:09.034260   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-621406
	
	I0930 21:08:09.034296   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.037149   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.037509   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.037538   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.037799   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.037986   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.038163   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.038327   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.038473   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:09.038695   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:09.038714   73900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-621406' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-621406/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-621406' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:08:09.152190   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:08:09.152228   73900 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:08:09.152255   73900 buildroot.go:174] setting up certificates
	I0930 21:08:09.152275   73900 provision.go:84] configureAuth start
	I0930 21:08:09.152288   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 21:08:09.152577   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:09.155203   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.155589   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.155620   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.155783   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.157964   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.158362   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.158392   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.158520   73900 provision.go:143] copyHostCerts
	I0930 21:08:09.158592   73900 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:08:09.158605   73900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:08:09.158704   73900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:08:09.158851   73900 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:08:09.158864   73900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:08:09.158895   73900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:08:09.158970   73900 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:08:09.158977   73900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:08:09.158996   73900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:08:09.159054   73900 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-621406 san=[127.0.0.1 192.168.72.159 localhost minikube old-k8s-version-621406]
	I0930 21:08:09.301267   73900 provision.go:177] copyRemoteCerts
	I0930 21:08:09.301322   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:08:09.301349   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.304344   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.304766   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.304796   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.304998   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.305187   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.305321   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.305439   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:09.390851   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0930 21:08:09.415712   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 21:08:09.439567   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:08:09.463427   73900 provision.go:87] duration metric: took 311.139024ms to configureAuth
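configureAuth above regenerates the machine's server certificate with the SANs listed in the log (127.0.0.1, 192.168.72.159, localhost, minikube, old-k8s-version-621406) and copies it to /etc/docker. A compact crypto/x509 sketch of issuing such a cert from a CA follows; the key sizes, validity periods, and elided error handling are illustrative assumptions, not minikube's exact provisioning code.

// servercert_sketch.go: issue a CA-signed server certificate with the SANs from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key pair and self-signed CA certificate (error handling elided in this sketch).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server key pair and certificate carrying the SANs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-621406"}},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-621406"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.159")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit the server certificate as PEM (the server.pem copied to the node in the log).
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}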
	I0930 21:08:09.463459   73900 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:08:09.463713   73900 config.go:182] Loaded profile config "old-k8s-version-621406": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0930 21:08:09.463809   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.466757   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.467129   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.467160   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.467326   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.467513   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.467694   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.467843   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.468004   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:09.468175   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:09.468190   73900 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:08:09.684657   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:08:09.684684   73900 machine.go:96] duration metric: took 880.473418ms to provisionDockerMachine
	I0930 21:08:09.684698   73900 start.go:293] postStartSetup for "old-k8s-version-621406" (driver="kvm2")
	I0930 21:08:09.684709   73900 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:08:09.684730   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.685075   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:08:09.685114   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.688051   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.688517   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.688542   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.688725   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.688928   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.689070   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.689265   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:09.770572   73900 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:08:09.775149   73900 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:08:09.775181   73900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:08:09.775268   73900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:08:09.775364   73900 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:08:09.775453   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:08:09.784753   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:09.807989   73900 start.go:296] duration metric: took 123.276522ms for postStartSetup
	I0930 21:08:09.808033   73900 fix.go:56] duration metric: took 19.918922935s for fixHost
	I0930 21:08:09.808053   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.811242   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.811656   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.811692   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.811852   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.812064   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.812239   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.812380   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.812522   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:09.812704   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:09.812719   73900 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:08:09.916349   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730489.889323893
	
	I0930 21:08:09.916376   73900 fix.go:216] guest clock: 1727730489.889323893
	I0930 21:08:09.916384   73900 fix.go:229] Guest: 2024-09-30 21:08:09.889323893 +0000 UTC Remote: 2024-09-30 21:08:09.808037625 +0000 UTC m=+267.093327666 (delta=81.286268ms)
	I0930 21:08:09.916403   73900 fix.go:200] guest clock delta is within tolerance: 81.286268ms
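The guest-clock check above runs `date +%s.%N` on the VM and compares the result to the host clock. A small sketch of that comparison follows, reusing the guest timestamp from the log; the one-second tolerance is an assumption, not minikube's configured value.

// clockdelta_sketch.go: parse a guest `date +%s.%N` reading and check the clock delta.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad or truncate the fractional part to exactly nine digits (nanoseconds).
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1727730489.889323893") // guest clock value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < tolerance)
}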
	I0930 21:08:09.916408   73900 start.go:83] releasing machines lock for "old-k8s-version-621406", held for 20.027328296s
	I0930 21:08:09.916440   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.916766   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:09.919729   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.920070   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.920105   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.920238   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.920831   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.921050   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.921182   73900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:08:09.921235   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.921328   73900 ssh_runner.go:195] Run: cat /version.json
	I0930 21:08:09.921351   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.924258   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.924650   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.924695   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.924722   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.924805   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.924986   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.925170   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.925176   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.925206   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.925341   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:09.925405   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.925534   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.925698   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.925829   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:10.043500   73900 ssh_runner.go:195] Run: systemctl --version
	I0930 21:08:10.051029   73900 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:08:10.199844   73900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:08:10.206433   73900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:08:10.206519   73900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:08:10.223346   73900 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 21:08:10.223375   73900 start.go:495] detecting cgroup driver to use...
	I0930 21:08:10.223449   73900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:08:10.241056   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:08:10.257197   73900 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:08:10.257261   73900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:08:10.271847   73900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:08:10.287465   73900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:08:10.419248   73900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:08:10.583440   73900 docker.go:233] disabling docker service ...
	I0930 21:08:10.583518   73900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:08:10.599561   73900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:08:10.613321   73900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:08:10.763071   73900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:08:10.891222   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:08:10.906985   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:08:10.927838   73900 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0930 21:08:10.927911   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.940002   73900 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:08:10.940084   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.953143   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.965922   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.985782   73900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:08:11.001825   73900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:08:11.015777   73900 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:08:11.015835   73900 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:08:11.034821   73900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 21:08:11.049855   73900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:11.203755   73900 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 21:08:11.312949   73900 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:08:11.313060   73900 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:08:11.319280   73900 start.go:563] Will wait 60s for crictl version
	I0930 21:08:11.319355   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:11.323826   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:08:11.374934   73900 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 21:08:11.375023   73900 ssh_runner.go:195] Run: crio --version
	I0930 21:08:11.415466   73900 ssh_runner.go:195] Run: crio --version
	I0930 21:08:11.449622   73900 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0930 21:08:11.450773   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:11.454019   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:11.454504   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:11.454534   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:11.454807   73900 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0930 21:08:11.459034   73900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:08:11.473162   73900 kubeadm.go:883] updating cluster {Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:08:11.473294   73900 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 21:08:11.473367   73900 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:11.518200   73900 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0930 21:08:11.518275   73900 ssh_runner.go:195] Run: which lz4
	I0930 21:08:11.522442   73900 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 21:08:11.526704   73900 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 21:08:11.526752   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0930 21:08:09.942356   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Start
	I0930 21:08:09.942591   73256 main.go:141] libmachine: (embed-certs-256103) Ensuring networks are active...
	I0930 21:08:09.943619   73256 main.go:141] libmachine: (embed-certs-256103) Ensuring network default is active
	I0930 21:08:09.944145   73256 main.go:141] libmachine: (embed-certs-256103) Ensuring network mk-embed-certs-256103 is active
	I0930 21:08:09.944659   73256 main.go:141] libmachine: (embed-certs-256103) Getting domain xml...
	I0930 21:08:09.945567   73256 main.go:141] libmachine: (embed-certs-256103) Creating domain...
	I0930 21:08:11.376075   73256 main.go:141] libmachine: (embed-certs-256103) Waiting to get IP...
	I0930 21:08:11.377049   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:11.377588   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:11.377687   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:11.377579   75193 retry.go:31] will retry after 219.057799ms: waiting for machine to come up
	I0930 21:08:11.598062   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:11.598531   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:11.598568   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:11.598491   75193 retry.go:31] will retry after 288.150233ms: waiting for machine to come up
	I0930 21:08:11.887894   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:11.888719   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:11.888749   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:11.888678   75193 retry.go:31] will retry after 422.70153ms: waiting for machine to come up
	I0930 21:08:12.313280   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:12.313761   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:12.313790   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:12.313728   75193 retry.go:31] will retry after 403.507934ms: waiting for machine to come up
	I0930 21:08:12.719305   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:12.719705   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:12.719740   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:12.719683   75193 retry.go:31] will retry after 616.261723ms: waiting for machine to come up
	I0930 21:08:13.337223   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:13.337759   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:13.337809   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:13.337727   75193 retry.go:31] will retry after 715.496762ms: waiting for machine to come up
	I0930 21:08:14.054455   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:14.055118   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:14.055155   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:14.055041   75193 retry.go:31] will retry after 1.12512788s: waiting for machine to come up
	I0930 21:08:10.970621   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:13.468795   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:11.501276   73707 node_ready.go:53] node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:12.501748   73707 node_ready.go:49] node "default-k8s-diff-port-291511" has status "Ready":"True"
	I0930 21:08:12.501784   73707 node_ready.go:38] duration metric: took 7.005705696s for node "default-k8s-diff-port-291511" to be "Ready" ...
	I0930 21:08:12.501797   73707 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:12.510080   73707 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:12.518496   73707 pod_ready.go:93] pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:12.518522   73707 pod_ready.go:82] duration metric: took 8.414761ms for pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:12.518535   73707 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:14.526615   73707 pod_ready.go:93] pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:14.526653   73707 pod_ready.go:82] duration metric: took 2.00810944s for pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:14.526666   73707 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:14.533536   73707 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:14.533574   73707 pod_ready.go:82] duration metric: took 6.898769ms for pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:14.533596   73707 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.043003   73707 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:15.043034   73707 pod_ready.go:82] duration metric: took 509.429109ms for pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.043048   73707 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kwp22" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.049645   73707 pod_ready.go:93] pod "kube-proxy-kwp22" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:15.049676   73707 pod_ready.go:82] duration metric: took 6.618441ms for pod "kube-proxy-kwp22" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.049688   73707 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:13.134916   73900 crio.go:462] duration metric: took 1.612498859s to copy over tarball
	I0930 21:08:13.135038   73900 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 21:08:16.170053   73900 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.034985922s)
	I0930 21:08:16.170080   73900 crio.go:469] duration metric: took 3.035125251s to extract the tarball
	I0930 21:08:16.170088   73900 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 21:08:16.213559   73900 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:16.249853   73900 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0930 21:08:16.249876   73900 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0930 21:08:16.249943   73900 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:16.249970   73900 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.249987   73900 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.250030   73900 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0930 21:08:16.250031   73900 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.250047   73900 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.250049   73900 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.250083   73900 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.251750   73900 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0930 21:08:16.251771   73900 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.251768   73900 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:16.251750   73900 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.251832   73900 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.251854   73900 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.251891   73900 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.252031   73900 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.456847   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.468006   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0930 21:08:16.516253   73900 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0930 21:08:16.516294   73900 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.516336   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.524699   73900 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0930 21:08:16.524743   73900 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0930 21:08:16.524787   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.525738   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.529669   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 21:08:16.561946   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.569090   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.570589   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.571007   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.581971   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.587609   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.630323   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 21:08:16.711058   73900 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0930 21:08:16.711124   73900 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.711190   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.749473   73900 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0930 21:08:16.749521   73900 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.749585   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.769974   73900 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0930 21:08:16.770016   73900 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.770050   73900 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0930 21:08:16.770075   73900 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0930 21:08:16.770087   73900 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.770104   73900 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.770142   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.770160   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.770064   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.770144   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.788241   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.788292   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 21:08:16.788294   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.788339   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.847727   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0930 21:08:16.847798   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.847894   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.938964   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.939000   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.939053   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0930 21:08:16.939090   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.965556   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.965620   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 21:08:17.020497   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:17.074893   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:17.074950   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:17.090437   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 21:08:17.090489   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0930 21:08:17.090437   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:17.174117   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0930 21:08:17.174183   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0930 21:08:17.185553   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0930 21:08:17.185619   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0930 21:08:17.506064   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:17.650598   73900 cache_images.go:92] duration metric: took 1.400704992s to LoadCachedImages
	W0930 21:08:17.650695   73900 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0930 21:08:17.650710   73900 kubeadm.go:934] updating node { 192.168.72.159 8443 v1.20.0 crio true true} ...
	I0930 21:08:17.650834   73900 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-621406 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 21:08:17.650922   73900 ssh_runner.go:195] Run: crio config
	I0930 21:08:17.710096   73900 cni.go:84] Creating CNI manager for ""
	I0930 21:08:17.710124   73900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:17.710139   73900 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:08:17.710164   73900 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.159 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-621406 NodeName:old-k8s-version-621406 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0930 21:08:17.710349   73900 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-621406"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 21:08:17.710425   73900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0930 21:08:17.721028   73900 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:08:17.721111   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:08:17.731462   73900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0930 21:08:17.749715   73900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:08:15.182186   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:15.182722   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:15.182751   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:15.182673   75193 retry.go:31] will retry after 1.385891549s: waiting for machine to come up
	I0930 21:08:16.569882   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:16.570365   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:16.570386   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:16.570309   75193 retry.go:31] will retry after 1.417579481s: waiting for machine to come up
	I0930 21:08:17.989161   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:17.989876   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:17.989905   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:17.989818   75193 retry.go:31] will retry after 1.981651916s: waiting for machine to come up
	I0930 21:08:15.471221   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:17.969140   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:19.969688   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:15.300639   73707 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:15.300666   73707 pod_ready.go:82] duration metric: took 250.968899ms for pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.300679   73707 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:17.349449   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:19.809813   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:17.767565   73900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0930 21:08:17.786411   73900 ssh_runner.go:195] Run: grep 192.168.72.159	control-plane.minikube.internal$ /etc/hosts
	I0930 21:08:17.790338   73900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:08:17.803957   73900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:17.948898   73900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:08:17.969102   73900 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406 for IP: 192.168.72.159
	I0930 21:08:17.969133   73900 certs.go:194] generating shared ca certs ...
	I0930 21:08:17.969150   73900 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:17.969338   73900 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:08:17.969387   73900 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:08:17.969400   73900 certs.go:256] generating profile certs ...
	I0930 21:08:17.969543   73900 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/client.key
	I0930 21:08:17.969621   73900 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.key.f3dc5056
	I0930 21:08:17.969674   73900 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.key
	I0930 21:08:17.969833   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:08:17.969875   73900 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:08:17.969886   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:08:17.969926   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:08:17.969961   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:08:17.969999   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:08:17.970055   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:17.970794   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:08:18.007954   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:08:18.041538   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:08:18.077886   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:08:18.118644   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0930 21:08:18.151418   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 21:08:18.199572   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:08:18.235795   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 21:08:18.272729   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:08:18.298727   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:08:18.324074   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:08:18.351209   73900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:08:18.372245   73900 ssh_runner.go:195] Run: openssl version
	I0930 21:08:18.380047   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:08:18.395332   73900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:08:18.401407   73900 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:08:18.401479   73900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:08:18.407744   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 21:08:18.422801   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:08:18.437946   73900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:08:18.443864   73900 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:08:18.443938   73900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:08:18.451554   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:08:18.466856   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:08:18.479324   73900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:18.484321   73900 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:18.484383   73900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:18.490341   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 21:08:18.503117   73900 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:08:18.507986   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:08:18.514974   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:08:18.522140   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:08:18.529366   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:08:18.536056   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:08:18.542787   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0930 21:08:18.550311   73900 kubeadm.go:392] StartCluster: {Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:08:18.550431   73900 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:08:18.550498   73900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:18.593041   73900 cri.go:89] found id: ""
	I0930 21:08:18.593116   73900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:08:18.603410   73900 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:08:18.603432   73900 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:08:18.603479   73900 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:08:18.614635   73900 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:08:18.615758   73900 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-621406" does not appear in /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:08:18.616488   73900 kubeconfig.go:62] /home/jenkins/minikube-integration/19736-7672/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-621406" cluster setting kubeconfig missing "old-k8s-version-621406" context setting]
	I0930 21:08:18.617394   73900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:18.644144   73900 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:08:18.655764   73900 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.159
	I0930 21:08:18.655806   73900 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:08:18.655819   73900 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:08:18.655877   73900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:18.699283   73900 cri.go:89] found id: ""
	I0930 21:08:18.699376   73900 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:08:18.715248   73900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:08:18.724905   73900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:08:18.724945   73900 kubeadm.go:157] found existing configuration files:
	
	I0930 21:08:18.724990   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:08:18.735611   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:08:18.735682   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:08:18.745604   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:08:18.755199   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:08:18.755261   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:08:18.765450   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:08:18.775187   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:08:18.775268   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:08:18.788080   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:08:18.800668   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:08:18.800727   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:08:18.814084   73900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:08:18.823785   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:18.961698   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.495418   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.713653   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.812667   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.921314   73900 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:08:19.921414   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:20.422349   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:20.922222   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:21.422364   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:21.921493   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:22.421640   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:19.973478   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:19.973916   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:19.973946   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:19.973868   75193 retry.go:31] will retry after 2.33355272s: waiting for machine to come up
	I0930 21:08:22.308828   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:22.309471   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:22.309498   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:22.309367   75193 retry.go:31] will retry after 3.484225075s: waiting for machine to come up
	I0930 21:08:21.970954   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:24.467778   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:22.310464   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:24.806425   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:22.922418   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:23.421851   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:23.921502   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:24.422346   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:24.922000   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:25.422290   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:25.922213   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:26.422100   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:26.922239   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:27.421729   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:25.795265   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:25.795755   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:25.795781   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:25.795707   75193 retry.go:31] will retry after 2.983975719s: waiting for machine to come up
	I0930 21:08:28.780767   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.781201   73256 main.go:141] libmachine: (embed-certs-256103) Found IP for machine: 192.168.39.90
	I0930 21:08:28.781223   73256 main.go:141] libmachine: (embed-certs-256103) Reserving static IP address...
	I0930 21:08:28.781237   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has current primary IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.781655   73256 main.go:141] libmachine: (embed-certs-256103) Reserved static IP address: 192.168.39.90
	I0930 21:08:28.781679   73256 main.go:141] libmachine: (embed-certs-256103) Waiting for SSH to be available...
	I0930 21:08:28.781697   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "embed-certs-256103", mac: "52:54:00:7a:01:01", ip: "192.168.39.90"} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:28.781724   73256 main.go:141] libmachine: (embed-certs-256103) DBG | skip adding static IP to network mk-embed-certs-256103 - found existing host DHCP lease matching {name: "embed-certs-256103", mac: "52:54:00:7a:01:01", ip: "192.168.39.90"}
	I0930 21:08:28.781735   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Getting to WaitForSSH function...
	I0930 21:08:28.784310   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.784703   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:28.784737   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.784861   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Using SSH client type: external
	I0930 21:08:28.784899   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa (-rw-------)
	I0930 21:08:28.784933   73256 main.go:141] libmachine: (embed-certs-256103) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:08:28.784953   73256 main.go:141] libmachine: (embed-certs-256103) DBG | About to run SSH command:
	I0930 21:08:28.784970   73256 main.go:141] libmachine: (embed-certs-256103) DBG | exit 0
	I0930 21:08:28.911300   73256 main.go:141] libmachine: (embed-certs-256103) DBG | SSH cmd err, output: <nil>: 
	I0930 21:08:28.911716   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetConfigRaw
	I0930 21:08:28.912335   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetIP
	I0930 21:08:28.914861   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.915283   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:28.915304   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.915620   73256 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/config.json ...
	I0930 21:08:28.915874   73256 machine.go:93] provisionDockerMachine start ...
	I0930 21:08:28.915902   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:28.916117   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:28.918357   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.918661   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:28.918696   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.918813   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:28.918992   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:28.919143   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:28.919296   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:28.919472   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:28.919680   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:28.919691   73256 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:08:29.032537   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:08:29.032579   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:08:29.032830   73256 buildroot.go:166] provisioning hostname "embed-certs-256103"
	I0930 21:08:29.032857   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:08:29.033039   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.035951   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.036403   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.036435   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.036598   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:29.036795   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.037002   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.037175   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:29.037339   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:29.037538   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:29.037556   73256 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-256103 && echo "embed-certs-256103" | sudo tee /etc/hostname
	I0930 21:08:29.163250   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-256103
	
	I0930 21:08:29.163278   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.165937   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.166260   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.166296   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.166529   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:29.166722   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.166913   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.167055   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:29.167223   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:29.167454   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:29.167477   73256 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-256103' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-256103/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-256103' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:08:29.288197   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:08:29.288236   73256 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:08:29.288292   73256 buildroot.go:174] setting up certificates
	I0930 21:08:29.288307   73256 provision.go:84] configureAuth start
	I0930 21:08:29.288322   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:08:29.288589   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetIP
	I0930 21:08:29.291598   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.292026   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.292059   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.292247   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.294760   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.295144   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.295169   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.295421   73256 provision.go:143] copyHostCerts
	I0930 21:08:29.295497   73256 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:08:29.295510   73256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:08:29.295614   73256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:08:29.295743   73256 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:08:29.295754   73256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:08:29.295782   73256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:08:29.295855   73256 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:08:29.295864   73256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:08:29.295886   73256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:08:29.295948   73256 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.embed-certs-256103 san=[127.0.0.1 192.168.39.90 embed-certs-256103 localhost minikube]
	I0930 21:08:26.468058   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:28.468510   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:26.808360   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:29.307500   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:29.742069   73256 provision.go:177] copyRemoteCerts
	I0930 21:08:29.742134   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:08:29.742156   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.745411   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.745805   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.745835   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.746023   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:29.746215   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.746351   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:29.746557   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:08:29.833888   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:08:29.857756   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0930 21:08:29.883087   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 21:08:29.905795   73256 provision.go:87] duration metric: took 617.470984ms to configureAuth
	I0930 21:08:29.905831   73256 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:08:29.906028   73256 config.go:182] Loaded profile config "embed-certs-256103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:08:29.906098   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.908911   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.909307   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.909335   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.909524   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:29.909711   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.909876   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.909996   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:29.910157   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:29.910429   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:29.910454   73256 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:08:30.140191   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:08:30.140217   73256 machine.go:96] duration metric: took 1.224326296s to provisionDockerMachine
	I0930 21:08:30.140227   73256 start.go:293] postStartSetup for "embed-certs-256103" (driver="kvm2")
	I0930 21:08:30.140237   73256 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:08:30.140252   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.140624   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:08:30.140648   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:30.143906   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.144300   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.144339   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.144498   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:30.144695   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.144846   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:30.145052   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:08:30.230069   73256 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:08:30.233845   73256 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:08:30.233868   73256 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:08:30.233948   73256 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:08:30.234050   73256 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:08:30.234168   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:08:30.243066   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:30.266197   73256 start.go:296] duration metric: took 125.955153ms for postStartSetup
	I0930 21:08:30.266234   73256 fix.go:56] duration metric: took 20.349643145s for fixHost
	I0930 21:08:30.266252   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:30.269025   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.269405   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.269433   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.269576   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:30.269784   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.269910   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.270042   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:30.270176   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:30.270380   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:30.270392   73256 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:08:30.380023   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730510.354607586
	
	I0930 21:08:30.380057   73256 fix.go:216] guest clock: 1727730510.354607586
	I0930 21:08:30.380067   73256 fix.go:229] Guest: 2024-09-30 21:08:30.354607586 +0000 UTC Remote: 2024-09-30 21:08:30.266237543 +0000 UTC m=+355.815232104 (delta=88.370043ms)
	I0930 21:08:30.380085   73256 fix.go:200] guest clock delta is within tolerance: 88.370043ms
	I0930 21:08:30.380091   73256 start.go:83] releasing machines lock for "embed-certs-256103", held for 20.463544222s
	I0930 21:08:30.380113   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.380429   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetIP
	I0930 21:08:30.382992   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.383349   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.383369   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.383518   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.384071   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.384245   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.384310   73256 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:08:30.384374   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:30.384442   73256 ssh_runner.go:195] Run: cat /version.json
	I0930 21:08:30.384464   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:30.387098   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.387342   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.387413   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.387435   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.387633   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:30.387762   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.387783   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.387828   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.387931   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:30.388003   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:30.388058   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.388159   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:30.388208   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:08:30.388347   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:08:30.510981   73256 ssh_runner.go:195] Run: systemctl --version
	I0930 21:08:30.517215   73256 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:08:30.663491   73256 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:08:30.669568   73256 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:08:30.669652   73256 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:08:30.686640   73256 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 21:08:30.686663   73256 start.go:495] detecting cgroup driver to use...
	I0930 21:08:30.686737   73256 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:08:30.703718   73256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:08:30.718743   73256 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:08:30.718807   73256 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:08:30.733695   73256 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:08:30.748690   73256 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:08:30.878084   73256 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:08:31.040955   73256 docker.go:233] disabling docker service ...
	I0930 21:08:31.041030   73256 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:08:31.055212   73256 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:08:31.067968   73256 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:08:31.185043   73256 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:08:31.300909   73256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:08:31.315167   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:08:31.333483   73256 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 21:08:31.333537   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.343599   73256 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:08:31.343694   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.353739   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.363993   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.375183   73256 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:08:31.385478   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.395632   73256 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.412995   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.423277   73256 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:08:31.433183   73256 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:08:31.433253   73256 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:08:31.446796   73256 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 21:08:31.456912   73256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:31.571729   73256 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 21:08:31.663944   73256 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:08:31.664019   73256 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:08:31.669128   73256 start.go:563] Will wait 60s for crictl version
	I0930 21:08:31.669191   73256 ssh_runner.go:195] Run: which crictl
	I0930 21:08:31.672922   73256 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:08:31.709488   73256 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 21:08:31.709596   73256 ssh_runner.go:195] Run: crio --version
	I0930 21:08:31.738743   73256 ssh_runner.go:195] Run: crio --version
	I0930 21:08:31.771638   73256 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 21:08:27.922374   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:28.421993   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:28.921870   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:29.421786   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:29.921804   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:30.421482   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:30.921969   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:31.422241   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:31.922148   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:32.421504   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:31.773186   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetIP
	I0930 21:08:31.776392   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:31.776770   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:31.776810   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:31.777016   73256 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 21:08:31.781212   73256 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:08:31.793839   73256 kubeadm.go:883] updating cluster {Name:embed-certs-256103 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-256103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:08:31.793957   73256 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 21:08:31.794015   73256 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:31.834036   73256 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 21:08:31.834094   73256 ssh_runner.go:195] Run: which lz4
	I0930 21:08:31.837877   73256 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 21:08:31.842038   73256 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 21:08:31.842073   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 21:08:33.150975   73256 crio.go:462] duration metric: took 1.313131374s to copy over tarball
	I0930 21:08:33.151080   73256 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 21:08:30.469523   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:32.469562   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:34.969818   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:31.307560   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:33.308130   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:32.921516   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:33.421576   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:33.922082   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:34.421599   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:34.922178   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:35.422199   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:35.922061   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:36.421860   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:36.921513   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:37.422162   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:35.294750   73256 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.143629494s)
	I0930 21:08:35.294785   73256 crio.go:469] duration metric: took 2.143777794s to extract the tarball
	I0930 21:08:35.294794   73256 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 21:08:35.340151   73256 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:35.385329   73256 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 21:08:35.385359   73256 cache_images.go:84] Images are preloaded, skipping loading
	I0930 21:08:35.385366   73256 kubeadm.go:934] updating node { 192.168.39.90 8443 v1.31.1 crio true true} ...
	I0930 21:08:35.385463   73256 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-256103 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-256103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 21:08:35.385536   73256 ssh_runner.go:195] Run: crio config
	I0930 21:08:35.433043   73256 cni.go:84] Creating CNI manager for ""
	I0930 21:08:35.433072   73256 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:35.433084   73256 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:08:35.433113   73256 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-256103 NodeName:embed-certs-256103 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 21:08:35.433277   73256 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-256103"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 21:08:35.433348   73256 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 21:08:35.443627   73256 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:08:35.443713   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:08:35.453095   73256 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0930 21:08:35.469517   73256 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:08:35.486869   73256 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0930 21:08:35.504871   73256 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I0930 21:08:35.508507   73256 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:08:35.521994   73256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:35.641971   73256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:08:35.657660   73256 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103 for IP: 192.168.39.90
	I0930 21:08:35.657686   73256 certs.go:194] generating shared ca certs ...
	I0930 21:08:35.657705   73256 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:35.657878   73256 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:08:35.657941   73256 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:08:35.657954   73256 certs.go:256] generating profile certs ...
	I0930 21:08:35.658095   73256 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/client.key
	I0930 21:08:35.658177   73256 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/apiserver.key.52e83f0c
	I0930 21:08:35.658230   73256 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/proxy-client.key
	I0930 21:08:35.658391   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:08:35.658431   73256 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:08:35.658443   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:08:35.658476   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:08:35.658509   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:08:35.658539   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:08:35.658586   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:35.659279   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:08:35.695254   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:08:35.718948   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:08:35.742442   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:08:35.765859   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0930 21:08:35.792019   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 21:08:35.822081   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:08:35.845840   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 21:08:35.871635   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:08:35.896069   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:08:35.921595   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:08:35.946620   73256 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:08:35.963340   73256 ssh_runner.go:195] Run: openssl version
	I0930 21:08:35.970540   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:08:35.982269   73256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:08:35.987494   73256 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:08:35.987646   73256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:08:35.994312   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:08:36.006173   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:08:36.017605   73256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:36.022126   73256 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:36.022190   73256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:36.027806   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 21:08:36.038388   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:08:36.048818   73256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:08:36.053230   73256 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:08:36.053296   73256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:08:36.058713   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 21:08:36.070806   73256 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:08:36.075521   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:08:36.081310   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:08:36.086935   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:08:36.092990   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:08:36.098783   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:08:36.104354   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
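The series of `openssl x509 -noout -in <cert> -checkend 86400` runs above asks one question per certificate: does it expire within the next 24 hours (86400 seconds)? A rough Go equivalent of that check, not the code minikube runs, using an illustrative path taken from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path
// expires within d, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Illustrative path; the log above checks several certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}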
	I0930 21:08:36.110289   73256 kubeadm.go:392] StartCluster: {Name:embed-certs-256103 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-256103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:08:36.110411   73256 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:08:36.110495   73256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:36.153770   73256 cri.go:89] found id: ""
	I0930 21:08:36.153852   73256 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:08:36.164301   73256 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:08:36.164320   73256 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:08:36.164363   73256 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:08:36.173860   73256 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:08:36.174950   73256 kubeconfig.go:125] found "embed-certs-256103" server: "https://192.168.39.90:8443"
	I0930 21:08:36.177584   73256 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:08:36.186946   73256 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.90
	I0930 21:08:36.186984   73256 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:08:36.186998   73256 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:08:36.187045   73256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:36.223259   73256 cri.go:89] found id: ""
	I0930 21:08:36.223328   73256 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:08:36.239321   73256 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:08:36.248508   73256 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:08:36.248528   73256 kubeadm.go:157] found existing configuration files:
	
	I0930 21:08:36.248571   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:08:36.257483   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:08:36.257537   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:08:36.266792   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:08:36.275626   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:08:36.275697   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:08:36.285000   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:08:36.293923   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:08:36.293977   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:08:36.303990   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:08:36.313104   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:08:36.313158   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:08:36.322423   73256 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:08:36.332005   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:36.457666   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:37.309316   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:37.533114   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:37.602999   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
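Instead of a full `kubeadm init`, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged /var/tmp/minikube/kubeadm.yaml, as the five commands above show. A hedged sketch of driving the same sequence from Go; the paths match the log, error handling is minimal, and in practice the commands run as root:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.31.1/kubeadm" // binary path as used in the log
	cfg := "/var/tmp/minikube/kubeadm.yaml"

	// Same phase order as the restart path logged above.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", cfg)
		out, err := exec.Command(kubeadm, args...).CombinedOutput()
		if err != nil {
			log.Fatalf("kubeadm %v: %v\n%s", p, err, out)
		}
		fmt.Printf("kubeadm %v ok\n", p)
	}
}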
	I0930 21:08:37.692027   73256 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:08:37.692117   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.192813   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.692777   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.192862   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:37.469941   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:39.506753   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:35.311295   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:37.806923   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:39.808338   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:37.921497   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.422360   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.922305   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.422480   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.922279   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.422089   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.922021   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:41.421727   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:41.921519   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:42.422193   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.692193   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.192178   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.209649   73256 api_server.go:72] duration metric: took 2.517618424s to wait for apiserver process to appear ...
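The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines are a simple poll: roughly every 500ms, check whether a kube-apiserver process whose command line mentions minikube exists yet, and stop once pgrep exits 0. A minimal sketch of that wait loop (the timeout value is illustrative, not taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a process matching pattern appears
// or the timeout elapses. pgrep exits 0 when at least one process matches.
func waitForProcess(pattern string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return true
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	fmt.Println(waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute))
}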
	I0930 21:08:40.209676   73256 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:08:40.209699   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:43.034828   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:08:43.034857   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:08:43.034871   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:43.080073   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:08:43.080107   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:08:43.210448   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:43.217768   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:43.217799   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:43.710066   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:43.722379   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:43.722428   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:44.209939   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:44.219468   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:44.219500   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:44.709767   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:44.714130   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 200:
	ok
	I0930 21:08:44.720194   73256 api_server.go:141] control plane version: v1.31.1
	I0930 21:08:44.720221   73256 api_server.go:131] duration metric: took 4.510539442s to wait for apiserver health ...
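The healthz probe above is unauthenticated, which explains the progression: first 403 for system:anonymous, then 500 while post-start hooks such as rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes finish, and finally 200 "ok". A rough sketch of such a poll; it skips TLS verification for brevity, whereas a real client could trust the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz hits /healthz until it returns 200 or the timeout elapses.
// The probe is anonymous, so early 403/500 responses are expected while
// the API server's post-start hooks are still running.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
			fmt.Printf("healthz not ready: %d\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy")
}

func main() {
	if err := pollHealthz("https://192.168.39.90:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}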
	I0930 21:08:44.720230   73256 cni.go:84] Creating CNI manager for ""
	I0930 21:08:44.720236   73256 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:44.721740   73256 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 21:08:41.968377   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:44.469477   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:41.808473   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:43.808575   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:42.922495   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:43.422250   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:43.922413   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:44.421962   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:44.921682   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:45.422144   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:45.922206   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:46.422020   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:46.921960   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:47.422296   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:44.722947   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:08:44.733426   73256 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 21:08:44.750426   73256 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:08:44.761259   73256 system_pods.go:59] 8 kube-system pods found
	I0930 21:08:44.761303   73256 system_pods.go:61] "coredns-7c65d6cfc9-h6cl2" [548e3751-edc9-4232-87c2-2e64769ba332] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:08:44.761314   73256 system_pods.go:61] "etcd-embed-certs-256103" [6eef2e96-d4bf-4dd6-bd5c-bfb05c306182] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0930 21:08:44.761326   73256 system_pods.go:61] "kube-apiserver-embed-certs-256103" [81c02a52-aca7-4b9c-b7b1-680d27f48d40] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0930 21:08:44.761335   73256 system_pods.go:61] "kube-controller-manager-embed-certs-256103" [752f0966-7718-4523-8ba6-affd41bc956e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0930 21:08:44.761346   73256 system_pods.go:61] "kube-proxy-fqvg2" [284a63a1-d624-4bf3-8509-14ff0845f3a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0930 21:08:44.761354   73256 system_pods.go:61] "kube-scheduler-embed-certs-256103" [6158a51d-82ae-490a-96d3-c0e61a3485f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0930 21:08:44.761363   73256 system_pods.go:61] "metrics-server-6867b74b74-hkp9m" [8774a772-bb72-4419-96fd-50ca5f48a5b6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:08:44.761374   73256 system_pods.go:61] "storage-provisioner" [9649e71d-cd21-4846-bf66-1c5b469500ba] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0930 21:08:44.761385   73256 system_pods.go:74] duration metric: took 10.935916ms to wait for pod list to return data ...
	I0930 21:08:44.761397   73256 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:08:44.771745   73256 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:08:44.771777   73256 node_conditions.go:123] node cpu capacity is 2
	I0930 21:08:44.771789   73256 node_conditions.go:105] duration metric: took 10.386814ms to run NodePressure ...
	I0930 21:08:44.771810   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:45.064019   73256 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0930 21:08:45.070479   73256 kubeadm.go:739] kubelet initialised
	I0930 21:08:45.070508   73256 kubeadm.go:740] duration metric: took 6.461143ms waiting for restarted kubelet to initialise ...
	I0930 21:08:45.070517   73256 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:45.074627   73256 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-h6cl2" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.080873   73256 pod_ready.go:98] node "embed-certs-256103" hosting pod "coredns-7c65d6cfc9-h6cl2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.080897   73256 pod_ready.go:82] duration metric: took 6.244301ms for pod "coredns-7c65d6cfc9-h6cl2" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:45.080906   73256 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-256103" hosting pod "coredns-7c65d6cfc9-h6cl2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.080912   73256 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.086787   73256 pod_ready.go:98] node "embed-certs-256103" hosting pod "etcd-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.086818   73256 pod_ready.go:82] duration metric: took 5.898265ms for pod "etcd-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:45.086829   73256 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-256103" hosting pod "etcd-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.086837   73256 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.092860   73256 pod_ready.go:98] node "embed-certs-256103" hosting pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.092892   73256 pod_ready.go:82] duration metric: took 6.044766ms for pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:45.092904   73256 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-256103" hosting pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.092912   73256 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.154246   73256 pod_ready.go:98] node "embed-certs-256103" hosting pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.154271   73256 pod_ready.go:82] duration metric: took 61.348653ms for pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:45.154281   73256 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-256103" hosting pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.154287   73256 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fqvg2" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.554606   73256 pod_ready.go:93] pod "kube-proxy-fqvg2" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:45.554630   73256 pod_ready.go:82] duration metric: took 400.335084ms for pod "kube-proxy-fqvg2" in "kube-system" namespace to be "Ready" ...
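Each pod_ready line boils down to the same check: fetch the pod and look for a PodReady condition with status True (pods on a node that is itself not Ready are skipped, as the messages above note). A hedged client-go sketch of that core check, assuming a recent client-go; the kubeconfig path is a placeholder and the pod name is taken from the log:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod has condition Ready=True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; the tests use the embed-certs-256103 profile.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "kube-proxy-fqvg2", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ready:", isPodReady(pod))
}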
	I0930 21:08:45.554639   73256 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:47.559998   73256 pod_ready.go:103] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:46.968101   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:48.968649   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:46.307946   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:48.806624   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:47.921903   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:48.422535   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:48.921484   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:49.421909   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:49.922117   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:50.421606   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:50.921728   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:51.421600   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:51.921716   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:52.421873   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:49.561176   73256 pod_ready.go:103] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:51.562227   73256 pod_ready.go:103] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:54.060692   73256 pod_ready.go:103] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:51.467375   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:53.473247   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:50.807821   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:53.307163   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:52.922106   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:53.421968   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:53.921496   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:54.421866   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:54.921995   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:55.421476   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:55.922106   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:56.421660   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:56.922489   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:57.422291   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:54.562740   73256 pod_ready.go:93] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:54.562765   73256 pod_ready.go:82] duration metric: took 9.008120147s for pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:54.562775   73256 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:56.570517   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:59.070065   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:55.969724   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:58.467585   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:55.807669   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:58.305837   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:57.921737   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:58.421968   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:58.922007   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:59.422173   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:59.921803   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:00.421596   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:00.922123   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:01.422186   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:01.921898   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:02.421894   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:01.070940   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:03.569053   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:00.469160   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:02.968692   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:00.308195   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:02.807474   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:04.808710   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:02.922329   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:03.421922   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:03.922360   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:04.421875   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:04.922544   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:05.421939   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:05.921693   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:06.422056   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:06.921627   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:07.422125   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:06.070166   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:08.568945   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:05.467300   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:07.469409   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:09.968053   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:07.306237   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:09.306644   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:07.921687   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:08.421694   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:08.922234   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:09.421817   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:09.921704   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:10.422030   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:10.921597   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:11.421700   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:11.922301   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:12.421567   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:10.569444   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:13.069582   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:11.970180   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:14.469440   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:11.307287   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:13.307376   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:12.922171   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:13.422423   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:13.921941   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:14.422494   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:14.922454   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:15.421776   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:15.922567   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:16.421713   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:16.922449   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:17.421644   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:15.569398   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:18.069177   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:16.968663   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:19.468171   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:15.808689   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:18.307774   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:17.922098   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:18.421993   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:18.922084   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:19.421717   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:19.922095   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:19.922178   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:19.962975   73900 cri.go:89] found id: ""
	I0930 21:09:19.963002   73900 logs.go:276] 0 containers: []
	W0930 21:09:19.963014   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:19.963020   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:19.963073   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:19.999741   73900 cri.go:89] found id: ""
	I0930 21:09:19.999769   73900 logs.go:276] 0 containers: []
	W0930 21:09:19.999777   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:19.999782   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:19.999840   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:20.035818   73900 cri.go:89] found id: ""
	I0930 21:09:20.035844   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.035856   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:20.035863   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:20.035924   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:20.072005   73900 cri.go:89] found id: ""
	I0930 21:09:20.072032   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.072042   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:20.072048   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:20.072110   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:20.108229   73900 cri.go:89] found id: ""
	I0930 21:09:20.108258   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.108314   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:20.108325   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:20.108383   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:20.141331   73900 cri.go:89] found id: ""
	I0930 21:09:20.141388   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.141398   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:20.141406   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:20.141466   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:20.175133   73900 cri.go:89] found id: ""
	I0930 21:09:20.175161   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.175169   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:20.175175   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:20.175223   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:20.210529   73900 cri.go:89] found id: ""
	I0930 21:09:20.210566   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.210578   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:20.210594   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:20.210608   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:20.261055   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:20.261095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:20.274212   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:20.274239   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:20.406215   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:20.406246   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:20.406282   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:20.481758   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:20.481794   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:20.069672   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:22.569421   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:21.468616   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:23.468820   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:20.309317   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:22.807149   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:24.807293   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:23.019687   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:23.033394   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:23.033450   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:23.078558   73900 cri.go:89] found id: ""
	I0930 21:09:23.078592   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.078604   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:23.078611   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:23.078673   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:23.117833   73900 cri.go:89] found id: ""
	I0930 21:09:23.117860   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.117868   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:23.117875   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:23.117931   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:23.157299   73900 cri.go:89] found id: ""
	I0930 21:09:23.157337   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.157359   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:23.157367   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:23.157438   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:23.196545   73900 cri.go:89] found id: ""
	I0930 21:09:23.196570   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.196579   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:23.196586   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:23.196644   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:23.229359   73900 cri.go:89] found id: ""
	I0930 21:09:23.229390   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.229401   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:23.229409   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:23.229471   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:23.264847   73900 cri.go:89] found id: ""
	I0930 21:09:23.264881   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.264893   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:23.264900   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:23.264962   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:23.298657   73900 cri.go:89] found id: ""
	I0930 21:09:23.298687   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.298695   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:23.298701   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:23.298750   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:23.333787   73900 cri.go:89] found id: ""
	I0930 21:09:23.333816   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.333826   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:23.333836   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:23.333851   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:23.386311   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:23.386347   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:23.400096   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:23.400129   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:23.481724   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:23.481748   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:23.481780   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:23.561080   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:23.561119   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:26.122460   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:26.136409   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:26.136495   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:26.170785   73900 cri.go:89] found id: ""
	I0930 21:09:26.170818   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.170832   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:26.170866   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:26.170945   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:26.205211   73900 cri.go:89] found id: ""
	I0930 21:09:26.205265   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.205275   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:26.205281   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:26.205335   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:26.239242   73900 cri.go:89] found id: ""
	I0930 21:09:26.239276   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.239285   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:26.239291   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:26.239337   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:26.272908   73900 cri.go:89] found id: ""
	I0930 21:09:26.272932   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.272940   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:26.272946   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:26.272993   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:26.311599   73900 cri.go:89] found id: ""
	I0930 21:09:26.311625   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.311632   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:26.311639   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:26.311684   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:26.345719   73900 cri.go:89] found id: ""
	I0930 21:09:26.345746   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.345754   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:26.345760   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:26.345816   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:26.383513   73900 cri.go:89] found id: ""
	I0930 21:09:26.383562   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.383572   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:26.383578   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:26.383637   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:26.418533   73900 cri.go:89] found id: ""
	I0930 21:09:26.418565   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.418574   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:26.418584   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:26.418594   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:26.456635   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:26.456660   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:26.507639   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:26.507686   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:26.521069   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:26.521095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:26.594745   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:26.594768   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:26.594781   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:24.569626   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:26.570133   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:29.069071   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:25.968851   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:27.974091   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:26.808336   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:29.308328   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:29.180142   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:29.194730   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:29.194785   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:29.234054   73900 cri.go:89] found id: ""
	I0930 21:09:29.234094   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.234103   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:29.234109   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:29.234156   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:29.280869   73900 cri.go:89] found id: ""
	I0930 21:09:29.280896   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.280907   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:29.280914   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:29.280988   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:29.348376   73900 cri.go:89] found id: ""
	I0930 21:09:29.348406   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.348417   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:29.348424   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:29.348491   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:29.404218   73900 cri.go:89] found id: ""
	I0930 21:09:29.404251   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.404261   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:29.404268   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:29.404344   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:29.449029   73900 cri.go:89] found id: ""
	I0930 21:09:29.449053   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.449061   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:29.449066   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:29.449127   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:29.484917   73900 cri.go:89] found id: ""
	I0930 21:09:29.484939   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.484948   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:29.484954   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:29.485002   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:29.517150   73900 cri.go:89] found id: ""
	I0930 21:09:29.517177   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.517185   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:29.517191   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:29.517259   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:29.550410   73900 cri.go:89] found id: ""
	I0930 21:09:29.550443   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.550452   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:29.550461   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:29.550472   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:29.601757   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:29.601803   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:29.616266   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:29.616299   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:29.686206   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:29.686228   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:29.686240   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:29.761765   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:29.761810   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:32.299199   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:32.315047   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:32.315125   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:32.349784   73900 cri.go:89] found id: ""
	I0930 21:09:32.349810   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.349819   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:32.349824   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:32.349871   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:32.385887   73900 cri.go:89] found id: ""
	I0930 21:09:32.385916   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.385927   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:32.385935   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:32.385994   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:32.421746   73900 cri.go:89] found id: ""
	I0930 21:09:32.421776   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.421789   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:32.421796   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:32.421856   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:32.459361   73900 cri.go:89] found id: ""
	I0930 21:09:32.459391   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.459404   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:32.459411   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:32.459470   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:32.495919   73900 cri.go:89] found id: ""
	I0930 21:09:32.495947   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.495960   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:32.495966   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:32.496025   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:32.533626   73900 cri.go:89] found id: ""
	I0930 21:09:32.533652   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.533663   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:32.533670   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:32.533729   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:32.567577   73900 cri.go:89] found id: ""
	I0930 21:09:32.567610   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.567623   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:32.567630   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:32.567687   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:32.604949   73900 cri.go:89] found id: ""
	I0930 21:09:32.604981   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.604991   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:32.605001   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:32.605014   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:32.656781   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:32.656822   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:32.670116   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:32.670144   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:32.736712   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:32.736736   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:32.736751   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:31.070228   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:33.569488   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:30.469162   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:32.469874   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:34.967596   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:31.807682   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:33.807723   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:32.813502   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:32.813556   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:35.354372   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:35.369226   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:35.369303   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:35.408374   73900 cri.go:89] found id: ""
	I0930 21:09:35.408402   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.408414   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:35.408421   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:35.408481   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:35.442390   73900 cri.go:89] found id: ""
	I0930 21:09:35.442432   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.442440   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:35.442445   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:35.442524   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:35.479624   73900 cri.go:89] found id: ""
	I0930 21:09:35.479651   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.479659   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:35.479664   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:35.479711   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:35.518580   73900 cri.go:89] found id: ""
	I0930 21:09:35.518609   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.518617   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:35.518623   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:35.518675   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:35.553547   73900 cri.go:89] found id: ""
	I0930 21:09:35.553582   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.553590   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:35.553604   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:35.553669   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:35.596444   73900 cri.go:89] found id: ""
	I0930 21:09:35.596476   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.596487   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:35.596495   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:35.596583   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:35.634232   73900 cri.go:89] found id: ""
	I0930 21:09:35.634259   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.634268   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:35.634274   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:35.634322   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:35.669637   73900 cri.go:89] found id: ""
	I0930 21:09:35.669672   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.669683   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:35.669694   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:35.669706   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:35.719433   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:35.719469   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:35.733383   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:35.733415   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:35.811860   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:35.811887   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:35.811913   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:35.896206   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:35.896272   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:35.569694   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:37.570548   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:36.968789   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:38.968959   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:35.814006   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:38.306676   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:38.435999   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:38.450091   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:38.450152   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:38.489127   73900 cri.go:89] found id: ""
	I0930 21:09:38.489153   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.489161   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:38.489166   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:38.489221   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:38.520760   73900 cri.go:89] found id: ""
	I0930 21:09:38.520783   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.520792   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:38.520798   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:38.520847   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:38.556279   73900 cri.go:89] found id: ""
	I0930 21:09:38.556306   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.556315   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:38.556319   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:38.556379   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:38.590804   73900 cri.go:89] found id: ""
	I0930 21:09:38.590827   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.590834   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:38.590840   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:38.590906   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:38.624765   73900 cri.go:89] found id: ""
	I0930 21:09:38.624792   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.624800   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:38.624805   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:38.624857   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:38.660587   73900 cri.go:89] found id: ""
	I0930 21:09:38.660614   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.660625   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:38.660635   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:38.660702   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:38.693314   73900 cri.go:89] found id: ""
	I0930 21:09:38.693352   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.693362   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:38.693371   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:38.693441   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:38.729163   73900 cri.go:89] found id: ""
	I0930 21:09:38.729197   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.729212   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:38.729223   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:38.729235   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:38.780787   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:38.780828   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:38.794983   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:38.795009   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:38.861886   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:38.861911   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:38.861926   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:38.936958   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:38.936994   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:41.479891   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:41.493041   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:41.493106   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:41.528855   73900 cri.go:89] found id: ""
	I0930 21:09:41.528889   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.528900   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:41.528906   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:41.528967   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:41.565193   73900 cri.go:89] found id: ""
	I0930 21:09:41.565216   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.565224   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:41.565230   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:41.565289   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:41.599503   73900 cri.go:89] found id: ""
	I0930 21:09:41.599538   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.599547   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:41.599553   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:41.599611   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:41.636623   73900 cri.go:89] found id: ""
	I0930 21:09:41.636651   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.636663   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:41.636671   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:41.636728   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:41.671727   73900 cri.go:89] found id: ""
	I0930 21:09:41.671753   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.671760   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:41.671765   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:41.671819   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:41.705499   73900 cri.go:89] found id: ""
	I0930 21:09:41.705533   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.705543   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:41.705549   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:41.705602   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:41.738262   73900 cri.go:89] found id: ""
	I0930 21:09:41.738285   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.738292   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:41.738297   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:41.738351   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:41.774232   73900 cri.go:89] found id: ""
	I0930 21:09:41.774261   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.774269   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:41.774277   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:41.774288   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:41.826060   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:41.826093   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:41.839308   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:41.839335   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:41.908599   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:41.908626   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:41.908640   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:41.986337   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:41.986375   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:40.069900   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:42.070035   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:41.469908   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:43.968111   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:40.307200   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:42.308356   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:44.807663   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:44.527015   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:44.539973   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:44.540036   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:44.575985   73900 cri.go:89] found id: ""
	I0930 21:09:44.576012   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.576021   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:44.576027   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:44.576076   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:44.612693   73900 cri.go:89] found id: ""
	I0930 21:09:44.612724   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.612736   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:44.612743   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:44.612809   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:44.646515   73900 cri.go:89] found id: ""
	I0930 21:09:44.646544   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.646555   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:44.646562   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:44.646623   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:44.679980   73900 cri.go:89] found id: ""
	I0930 21:09:44.680011   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.680022   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:44.680030   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:44.680089   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:44.714078   73900 cri.go:89] found id: ""
	I0930 21:09:44.714117   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.714128   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:44.714135   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:44.714193   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:44.748491   73900 cri.go:89] found id: ""
	I0930 21:09:44.748521   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.748531   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:44.748539   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:44.748618   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:44.780902   73900 cri.go:89] found id: ""
	I0930 21:09:44.780936   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.780947   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:44.780955   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:44.781013   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:44.817944   73900 cri.go:89] found id: ""
	I0930 21:09:44.817999   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.818011   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:44.818022   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:44.818038   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:44.873896   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:44.873926   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:44.887829   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:44.887858   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:44.957562   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:44.957584   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:44.957598   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:45.037892   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:45.037934   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:47.583013   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:47.595799   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:47.595870   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:47.630348   73900 cri.go:89] found id: ""
	I0930 21:09:47.630377   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.630385   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:47.630391   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:47.630444   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:47.663416   73900 cri.go:89] found id: ""
	I0930 21:09:47.663440   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.663448   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:47.663454   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:47.663500   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:47.700145   73900 cri.go:89] found id: ""
	I0930 21:09:47.700174   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.700184   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:47.700192   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:47.700253   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:47.732539   73900 cri.go:89] found id: ""
	I0930 21:09:47.732567   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.732577   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:47.732583   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:47.732637   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:44.569951   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:46.570501   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:48.574018   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:45.971063   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:48.468661   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:47.307709   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:49.806843   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:47.764470   73900 cri.go:89] found id: ""
	I0930 21:09:47.764493   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.764501   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:47.764507   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:47.764553   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:47.802365   73900 cri.go:89] found id: ""
	I0930 21:09:47.802393   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.802403   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:47.802411   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:47.802468   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:47.836504   73900 cri.go:89] found id: ""
	I0930 21:09:47.836531   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.836542   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:47.836549   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:47.836611   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:47.870315   73900 cri.go:89] found id: ""
	I0930 21:09:47.870338   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.870351   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:47.870359   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:47.870370   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:47.919974   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:47.920011   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:47.934157   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:47.934190   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:48.003046   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:48.003072   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:48.003085   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:48.084947   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:48.084985   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:50.624791   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:50.638118   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:50.638196   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:50.672448   73900 cri.go:89] found id: ""
	I0930 21:09:50.672479   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.672488   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:50.672503   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:50.672557   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:50.706057   73900 cri.go:89] found id: ""
	I0930 21:09:50.706080   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.706088   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:50.706093   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:50.706142   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:50.738101   73900 cri.go:89] found id: ""
	I0930 21:09:50.738126   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.738134   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:50.738140   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:50.738207   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:50.772483   73900 cri.go:89] found id: ""
	I0930 21:09:50.772508   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.772516   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:50.772522   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:50.772581   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:50.805169   73900 cri.go:89] found id: ""
	I0930 21:09:50.805200   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.805211   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:50.805220   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:50.805276   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:50.842144   73900 cri.go:89] found id: ""
	I0930 21:09:50.842168   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.842176   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:50.842182   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:50.842236   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:50.875512   73900 cri.go:89] found id: ""
	I0930 21:09:50.875563   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.875575   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:50.875582   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:50.875643   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:50.909549   73900 cri.go:89] found id: ""
	I0930 21:09:50.909580   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.909591   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:50.909599   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:50.909610   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:50.962064   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:50.962098   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:50.976979   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:50.977012   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:51.053784   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:51.053815   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:51.053833   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:51.130939   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:51.130975   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:51.069919   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:53.568708   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:50.468737   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:52.968935   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:52.306733   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:54.306875   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:53.667675   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:53.680381   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:53.680449   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:53.712759   73900 cri.go:89] found id: ""
	I0930 21:09:53.712791   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.712800   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:53.712807   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:53.712871   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:53.748958   73900 cri.go:89] found id: ""
	I0930 21:09:53.748990   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.749002   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:53.749009   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:53.749078   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:53.783243   73900 cri.go:89] found id: ""
	I0930 21:09:53.783272   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.783282   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:53.783289   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:53.783382   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:53.823848   73900 cri.go:89] found id: ""
	I0930 21:09:53.823875   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.823883   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:53.823890   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:53.823941   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:53.865607   73900 cri.go:89] found id: ""
	I0930 21:09:53.865635   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.865643   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:53.865648   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:53.865693   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:53.900888   73900 cri.go:89] found id: ""
	I0930 21:09:53.900912   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.900920   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:53.900926   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:53.900985   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:53.933688   73900 cri.go:89] found id: ""
	I0930 21:09:53.933717   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.933728   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:53.933736   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:53.933798   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:53.968702   73900 cri.go:89] found id: ""
	I0930 21:09:53.968731   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.968740   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:53.968749   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:53.968760   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:54.021588   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:54.021626   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:54.036681   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:54.036719   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:54.112189   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:54.112209   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:54.112223   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:54.185028   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:54.185085   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:56.725146   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:56.739358   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:56.739421   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:56.779278   73900 cri.go:89] found id: ""
	I0930 21:09:56.779313   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.779322   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:56.779329   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:56.779377   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:56.815972   73900 cri.go:89] found id: ""
	I0930 21:09:56.816000   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.816011   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:56.816018   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:56.816084   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:56.849425   73900 cri.go:89] found id: ""
	I0930 21:09:56.849458   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.849471   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:56.849478   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:56.849542   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:56.885483   73900 cri.go:89] found id: ""
	I0930 21:09:56.885510   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.885520   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:56.885527   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:56.885586   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:56.917832   73900 cri.go:89] found id: ""
	I0930 21:09:56.917862   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.917872   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:56.917879   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:56.917932   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:56.951613   73900 cri.go:89] found id: ""
	I0930 21:09:56.951643   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.951654   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:56.951664   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:56.951726   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:56.987577   73900 cri.go:89] found id: ""
	I0930 21:09:56.987608   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.987620   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:56.987628   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:56.987691   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:57.024871   73900 cri.go:89] found id: ""
	I0930 21:09:57.024903   73900 logs.go:276] 0 containers: []
	W0930 21:09:57.024912   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:57.024920   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:57.024935   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:57.038279   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:57.038309   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:57.111955   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:57.111985   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:57.111998   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:57.193719   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:57.193755   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:57.230058   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:57.230085   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:55.568928   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:58.069462   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:55.467583   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:57.968380   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:59.969131   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:56.807753   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:58.808055   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:59.780762   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:59.794210   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:59.794277   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:59.828258   73900 cri.go:89] found id: ""
	I0930 21:09:59.828287   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.828298   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:59.828306   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:59.828369   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:59.868295   73900 cri.go:89] found id: ""
	I0930 21:09:59.868331   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.868353   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:59.868363   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:59.868437   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:59.900298   73900 cri.go:89] found id: ""
	I0930 21:09:59.900326   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.900337   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:59.900343   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:59.900403   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:59.934081   73900 cri.go:89] found id: ""
	I0930 21:09:59.934108   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.934120   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:59.934127   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:59.934183   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:59.970564   73900 cri.go:89] found id: ""
	I0930 21:09:59.970592   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.970600   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:59.970605   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:59.970652   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:00.006215   73900 cri.go:89] found id: ""
	I0930 21:10:00.006249   73900 logs.go:276] 0 containers: []
	W0930 21:10:00.006259   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:00.006270   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:00.006348   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:00.040106   73900 cri.go:89] found id: ""
	I0930 21:10:00.040135   73900 logs.go:276] 0 containers: []
	W0930 21:10:00.040144   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:00.040150   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:00.040202   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:00.079310   73900 cri.go:89] found id: ""
	I0930 21:10:00.079345   73900 logs.go:276] 0 containers: []
	W0930 21:10:00.079354   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:00.079365   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:00.079378   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:00.161243   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:00.161284   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:00.198911   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:00.198941   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:00.247697   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:00.247735   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:00.260905   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:00.260933   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:00.332502   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:00.569218   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:02.569371   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:02.468439   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:04.968585   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:00.808753   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:03.306574   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:02.833204   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:02.846807   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:02.846893   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:02.882386   73900 cri.go:89] found id: ""
	I0930 21:10:02.882420   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.882431   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:02.882439   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:02.882504   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:02.918589   73900 cri.go:89] found id: ""
	I0930 21:10:02.918617   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.918633   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:02.918642   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:02.918722   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:02.952758   73900 cri.go:89] found id: ""
	I0930 21:10:02.952789   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.952799   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:02.952806   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:02.952871   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:02.991406   73900 cri.go:89] found id: ""
	I0930 21:10:02.991439   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.991448   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:02.991454   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:02.991511   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:03.030075   73900 cri.go:89] found id: ""
	I0930 21:10:03.030104   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.030112   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:03.030121   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:03.030172   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:03.063630   73900 cri.go:89] found id: ""
	I0930 21:10:03.063654   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.063662   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:03.063668   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:03.063718   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:03.098607   73900 cri.go:89] found id: ""
	I0930 21:10:03.098636   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.098644   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:03.098649   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:03.098702   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:03.133161   73900 cri.go:89] found id: ""
	I0930 21:10:03.133189   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.133198   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:03.133206   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:03.133217   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:03.211046   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:03.211083   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:03.252585   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:03.252615   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:03.307019   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:03.307049   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:03.320781   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:03.320811   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:03.408645   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:05.909638   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:05.922674   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:05.922744   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:05.955264   73900 cri.go:89] found id: ""
	I0930 21:10:05.955305   73900 logs.go:276] 0 containers: []
	W0930 21:10:05.955318   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:05.955326   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:05.955378   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:05.991055   73900 cri.go:89] found id: ""
	I0930 21:10:05.991100   73900 logs.go:276] 0 containers: []
	W0930 21:10:05.991122   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:05.991130   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:05.991194   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:06.025725   73900 cri.go:89] found id: ""
	I0930 21:10:06.025755   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.025766   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:06.025773   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:06.025832   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:06.067700   73900 cri.go:89] found id: ""
	I0930 21:10:06.067726   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.067736   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:06.067743   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:06.067801   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:06.102729   73900 cri.go:89] found id: ""
	I0930 21:10:06.102760   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.102771   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:06.102784   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:06.102845   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:06.137120   73900 cri.go:89] found id: ""
	I0930 21:10:06.137148   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.137159   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:06.137164   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:06.137215   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:06.169985   73900 cri.go:89] found id: ""
	I0930 21:10:06.170014   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.170023   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:06.170029   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:06.170082   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:06.206928   73900 cri.go:89] found id: ""
	I0930 21:10:06.206951   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.206959   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:06.206967   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:06.206977   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:06.258835   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:06.258870   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:06.273527   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:06.273556   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:06.351335   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:06.351359   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:06.351373   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:06.423412   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:06.423450   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:04.569756   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:07.069437   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:09.074024   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:06.969500   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:09.471298   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:05.807932   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:08.306749   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:08.968986   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:08.984075   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:08.984139   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:09.016815   73900 cri.go:89] found id: ""
	I0930 21:10:09.016847   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.016858   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:09.016864   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:09.016928   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:09.051603   73900 cri.go:89] found id: ""
	I0930 21:10:09.051626   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.051633   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:09.051639   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:09.051693   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:09.088820   73900 cri.go:89] found id: ""
	I0930 21:10:09.088856   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.088870   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:09.088884   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:09.088949   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:09.124032   73900 cri.go:89] found id: ""
	I0930 21:10:09.124064   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.124076   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:09.124083   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:09.124140   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:09.177129   73900 cri.go:89] found id: ""
	I0930 21:10:09.177161   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.177172   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:09.177178   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:09.177228   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:09.211490   73900 cri.go:89] found id: ""
	I0930 21:10:09.211513   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.211521   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:09.211540   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:09.211605   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:09.252187   73900 cri.go:89] found id: ""
	I0930 21:10:09.252211   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.252221   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:09.252229   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:09.252289   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:09.286970   73900 cri.go:89] found id: ""
	I0930 21:10:09.287004   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.287012   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:09.287020   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:09.287031   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:09.369387   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:09.369410   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:09.369422   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:09.450685   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:09.450733   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:09.491302   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:09.491331   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:09.540183   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:09.540219   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:12.054793   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:12.068635   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:12.068717   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:12.103118   73900 cri.go:89] found id: ""
	I0930 21:10:12.103140   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.103149   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:12.103154   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:12.103219   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:12.137992   73900 cri.go:89] found id: ""
	I0930 21:10:12.138020   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.138031   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:12.138040   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:12.138103   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:12.175559   73900 cri.go:89] found id: ""
	I0930 21:10:12.175591   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.175609   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:12.175616   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:12.175678   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:12.209630   73900 cri.go:89] found id: ""
	I0930 21:10:12.209655   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.209666   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:12.209672   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:12.209735   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:12.245844   73900 cri.go:89] found id: ""
	I0930 21:10:12.245879   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.245891   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:12.245901   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:12.245961   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:12.280385   73900 cri.go:89] found id: ""
	I0930 21:10:12.280412   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.280420   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:12.280426   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:12.280484   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:12.315424   73900 cri.go:89] found id: ""
	I0930 21:10:12.315453   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.315463   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:12.315473   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:12.315566   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:12.349223   73900 cri.go:89] found id: ""
	I0930 21:10:12.349251   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.349270   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:12.349279   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:12.349291   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:12.362360   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:12.362397   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:12.432060   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:12.432084   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:12.432101   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:12.506059   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:12.506096   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:12.541319   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:12.541348   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:11.568740   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:13.569690   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:11.968234   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:13.968634   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:10.306903   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:12.307072   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:14.807562   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:15.098852   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:15.111919   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:15.112001   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:15.149174   73900 cri.go:89] found id: ""
	I0930 21:10:15.149206   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.149216   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:15.149223   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:15.149286   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:15.187283   73900 cri.go:89] found id: ""
	I0930 21:10:15.187316   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.187326   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:15.187333   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:15.187392   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:15.223896   73900 cri.go:89] found id: ""
	I0930 21:10:15.223922   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.223933   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:15.223940   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:15.224000   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:15.260530   73900 cri.go:89] found id: ""
	I0930 21:10:15.260559   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.260567   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:15.260573   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:15.260634   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:15.296319   73900 cri.go:89] found id: ""
	I0930 21:10:15.296346   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.296357   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:15.296363   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:15.296425   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:15.333785   73900 cri.go:89] found id: ""
	I0930 21:10:15.333830   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.333843   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:15.333856   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:15.333932   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:15.368235   73900 cri.go:89] found id: ""
	I0930 21:10:15.368268   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.368280   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:15.368288   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:15.368354   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:15.408155   73900 cri.go:89] found id: ""
	I0930 21:10:15.408184   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.408192   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:15.408200   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:15.408210   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:15.462018   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:15.462058   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:15.477345   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:15.477376   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:15.558398   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:15.558423   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:15.558442   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:15.662269   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:15.662311   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:15.569988   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:18.069056   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:16.467859   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:18.468764   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:17.307469   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:19.809316   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:18.199477   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:18.213235   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:18.213320   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:18.250379   73900 cri.go:89] found id: ""
	I0930 21:10:18.250409   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.250418   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:18.250424   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:18.250515   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:18.283381   73900 cri.go:89] found id: ""
	I0930 21:10:18.283407   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.283416   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:18.283422   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:18.283482   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:18.321601   73900 cri.go:89] found id: ""
	I0930 21:10:18.321635   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.321646   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:18.321659   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:18.321720   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:18.354210   73900 cri.go:89] found id: ""
	I0930 21:10:18.354242   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.354254   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:18.354262   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:18.354330   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:18.391982   73900 cri.go:89] found id: ""
	I0930 21:10:18.392019   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.392029   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:18.392035   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:18.392150   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:18.428826   73900 cri.go:89] found id: ""
	I0930 21:10:18.428851   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.428862   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:18.428870   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:18.428927   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:18.465841   73900 cri.go:89] found id: ""
	I0930 21:10:18.465868   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.465878   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:18.465887   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:18.465934   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:18.502747   73900 cri.go:89] found id: ""
	I0930 21:10:18.502775   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.502783   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:18.502793   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:18.502807   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:18.558025   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:18.558064   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:18.572356   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:18.572383   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:18.642994   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:18.643020   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:18.643033   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:18.722804   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:18.722845   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:21.262790   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:21.276427   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:21.276510   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:21.323245   73900 cri.go:89] found id: ""
	I0930 21:10:21.323274   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.323284   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:21.323291   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:21.323377   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:21.381684   73900 cri.go:89] found id: ""
	I0930 21:10:21.381725   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.381736   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:21.381744   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:21.381813   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:21.428818   73900 cri.go:89] found id: ""
	I0930 21:10:21.428841   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.428849   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:21.428854   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:21.428901   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:21.462906   73900 cri.go:89] found id: ""
	I0930 21:10:21.462935   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.462944   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:21.462949   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:21.462995   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:21.502417   73900 cri.go:89] found id: ""
	I0930 21:10:21.502452   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.502464   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:21.502471   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:21.502535   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:21.540004   73900 cri.go:89] found id: ""
	I0930 21:10:21.540037   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.540048   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:21.540056   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:21.540105   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:21.574898   73900 cri.go:89] found id: ""
	I0930 21:10:21.574929   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.574937   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:21.574942   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:21.574999   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:21.609438   73900 cri.go:89] found id: ""
	I0930 21:10:21.609465   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.609473   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:21.609496   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:21.609524   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:21.646651   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:21.646679   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:21.702406   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:21.702451   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:21.716226   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:21.716260   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:21.790089   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:21.790115   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:21.790128   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:20.070823   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:22.568856   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:20.968069   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:22.968208   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:22.307376   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:24.808780   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:24.368291   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:24.381517   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:24.381588   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:24.416535   73900 cri.go:89] found id: ""
	I0930 21:10:24.416559   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.416570   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:24.416577   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:24.416635   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:24.454444   73900 cri.go:89] found id: ""
	I0930 21:10:24.454472   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.454480   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:24.454485   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:24.454537   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:24.492334   73900 cri.go:89] found id: ""
	I0930 21:10:24.492359   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.492367   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:24.492373   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:24.492419   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:24.527590   73900 cri.go:89] found id: ""
	I0930 21:10:24.527622   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.527633   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:24.527642   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:24.527708   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:24.564819   73900 cri.go:89] found id: ""
	I0930 21:10:24.564844   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.564853   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:24.564858   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:24.564915   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:24.599367   73900 cri.go:89] found id: ""
	I0930 21:10:24.599390   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.599398   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:24.599403   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:24.599450   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:24.636738   73900 cri.go:89] found id: ""
	I0930 21:10:24.636767   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.636778   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:24.636785   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:24.636845   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:24.669607   73900 cri.go:89] found id: ""
	I0930 21:10:24.669640   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.669651   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:24.669663   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:24.669680   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:24.722662   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:24.722696   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:24.736150   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:24.736179   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:24.812022   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:24.812053   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:24.812069   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:24.891291   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:24.891330   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:27.430595   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:27.443990   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:27.444054   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:27.480204   73900 cri.go:89] found id: ""
	I0930 21:10:27.480230   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.480237   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:27.480243   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:27.480297   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:27.516959   73900 cri.go:89] found id: ""
	I0930 21:10:27.516982   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.516989   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:27.516995   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:27.517041   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:27.549717   73900 cri.go:89] found id: ""
	I0930 21:10:27.549745   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.549758   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:27.549769   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:27.549821   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:27.584512   73900 cri.go:89] found id: ""
	I0930 21:10:27.584539   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.584549   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:27.584560   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:27.584619   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:27.623551   73900 cri.go:89] found id: ""
	I0930 21:10:27.623586   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.623603   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:27.623612   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:27.623679   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:27.662453   73900 cri.go:89] found id: ""
	I0930 21:10:27.662478   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.662486   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:27.662493   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:27.662554   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:27.695665   73900 cri.go:89] found id: ""
	I0930 21:10:27.695693   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.695701   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:27.695707   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:27.695765   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:27.729090   73900 cri.go:89] found id: ""
	I0930 21:10:27.729129   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.729137   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:27.729146   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:27.729155   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:24.570129   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:26.572751   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:29.069340   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:25.468598   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:27.469443   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:29.970417   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:27.307766   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:29.806538   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:27.816186   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:27.816230   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:27.854451   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:27.854485   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:27.905674   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:27.905709   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:27.918889   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:27.918917   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:27.989739   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:30.490514   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:30.502735   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:30.502810   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:30.535874   73900 cri.go:89] found id: ""
	I0930 21:10:30.535902   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.535914   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:30.535922   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:30.535989   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:30.570603   73900 cri.go:89] found id: ""
	I0930 21:10:30.570627   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.570634   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:30.570643   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:30.570689   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:30.605225   73900 cri.go:89] found id: ""
	I0930 21:10:30.605255   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.605266   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:30.605273   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:30.605333   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:30.640810   73900 cri.go:89] found id: ""
	I0930 21:10:30.640839   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.640849   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:30.640857   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:30.640914   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:30.673101   73900 cri.go:89] found id: ""
	I0930 21:10:30.673129   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.673137   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:30.673142   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:30.673189   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:30.704332   73900 cri.go:89] found id: ""
	I0930 21:10:30.704356   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.704366   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:30.704373   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:30.704440   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:30.738463   73900 cri.go:89] found id: ""
	I0930 21:10:30.738494   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.738506   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:30.738516   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:30.738579   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:30.772115   73900 cri.go:89] found id: ""
	I0930 21:10:30.772153   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.772164   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:30.772175   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:30.772193   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:30.850683   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:30.850707   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:30.850720   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:30.930674   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:30.930718   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:30.975781   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:30.975819   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:31.030566   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:31.030613   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:31.070216   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:33.568935   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:32.468224   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:34.968557   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:31.807408   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:33.807669   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:33.544354   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:33.557613   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:33.557692   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:33.594372   73900 cri.go:89] found id: ""
	I0930 21:10:33.594394   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.594401   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:33.594406   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:33.594455   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:33.632026   73900 cri.go:89] found id: ""
	I0930 21:10:33.632048   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.632056   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:33.632061   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:33.632113   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:33.666168   73900 cri.go:89] found id: ""
	I0930 21:10:33.666201   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.666213   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:33.666219   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:33.666269   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:33.697772   73900 cri.go:89] found id: ""
	I0930 21:10:33.697801   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.697810   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:33.697816   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:33.697864   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:33.732821   73900 cri.go:89] found id: ""
	I0930 21:10:33.732851   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.732862   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:33.732869   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:33.732952   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:33.770646   73900 cri.go:89] found id: ""
	I0930 21:10:33.770682   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.770693   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:33.770701   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:33.770756   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:33.804803   73900 cri.go:89] found id: ""
	I0930 21:10:33.804831   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.804842   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:33.804848   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:33.804921   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:33.838455   73900 cri.go:89] found id: ""
	I0930 21:10:33.838484   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.838495   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:33.838505   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:33.838523   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:33.879785   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:33.879812   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:33.934586   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:33.934623   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:33.948250   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:33.948293   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:34.023021   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:34.023054   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:34.023069   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:36.604173   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:36.616668   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:36.616735   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:36.650716   73900 cri.go:89] found id: ""
	I0930 21:10:36.650748   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.650757   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:36.650767   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:36.650833   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:36.685705   73900 cri.go:89] found id: ""
	I0930 21:10:36.685739   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.685751   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:36.685758   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:36.685819   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:36.719895   73900 cri.go:89] found id: ""
	I0930 21:10:36.719922   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.719932   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:36.719939   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:36.720006   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:36.753123   73900 cri.go:89] found id: ""
	I0930 21:10:36.753148   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.753159   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:36.753166   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:36.753231   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:36.790023   73900 cri.go:89] found id: ""
	I0930 21:10:36.790054   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.790066   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:36.790073   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:36.790135   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:36.825280   73900 cri.go:89] found id: ""
	I0930 21:10:36.825314   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.825324   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:36.825343   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:36.825411   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:36.859028   73900 cri.go:89] found id: ""
	I0930 21:10:36.859053   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.859060   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:36.859066   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:36.859125   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:36.894952   73900 cri.go:89] found id: ""
	I0930 21:10:36.894980   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.894988   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:36.894996   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:36.895010   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:36.968214   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:36.968241   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:36.968256   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:37.047866   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:37.047903   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:37.088671   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:37.088705   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:37.144014   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:37.144058   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:36.068920   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:38.069544   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:36.969475   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:39.469207   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:35.808654   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:38.306701   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:39.657874   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:39.671042   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:39.671100   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:39.706210   73900 cri.go:89] found id: ""
	I0930 21:10:39.706235   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.706243   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:39.706248   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:39.706295   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:39.743194   73900 cri.go:89] found id: ""
	I0930 21:10:39.743218   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.743226   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:39.743232   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:39.743280   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:39.780681   73900 cri.go:89] found id: ""
	I0930 21:10:39.780707   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.780715   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:39.780720   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:39.780774   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:39.815841   73900 cri.go:89] found id: ""
	I0930 21:10:39.815865   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.815874   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:39.815879   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:39.815933   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:39.849497   73900 cri.go:89] found id: ""
	I0930 21:10:39.849523   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.849534   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:39.849541   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:39.849603   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:39.883476   73900 cri.go:89] found id: ""
	I0930 21:10:39.883507   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.883519   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:39.883562   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:39.883633   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:39.918300   73900 cri.go:89] found id: ""
	I0930 21:10:39.918329   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.918338   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:39.918343   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:39.918392   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:39.955751   73900 cri.go:89] found id: ""
	I0930 21:10:39.955780   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.955788   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:39.955795   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:39.955807   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:40.010994   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:40.011035   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:40.025992   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:40.026022   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:40.097709   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:40.097731   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:40.097748   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:40.176790   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:40.176824   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:42.713838   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:42.729806   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:42.729885   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:40.070503   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:42.568444   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:41.968357   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:44.469223   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:40.308072   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:42.807489   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:42.765449   73900 cri.go:89] found id: ""
	I0930 21:10:42.765483   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.765491   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:42.765498   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:42.765555   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:42.802556   73900 cri.go:89] found id: ""
	I0930 21:10:42.802584   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.802604   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:42.802612   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:42.802693   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:42.836537   73900 cri.go:89] found id: ""
	I0930 21:10:42.836568   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.836585   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:42.836598   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:42.836662   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:42.870475   73900 cri.go:89] found id: ""
	I0930 21:10:42.870503   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.870511   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:42.870526   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:42.870589   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:42.907061   73900 cri.go:89] found id: ""
	I0930 21:10:42.907090   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.907098   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:42.907103   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:42.907153   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:42.941607   73900 cri.go:89] found id: ""
	I0930 21:10:42.941632   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.941640   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:42.941646   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:42.941701   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:42.977073   73900 cri.go:89] found id: ""
	I0930 21:10:42.977097   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.977105   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:42.977111   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:42.977159   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:43.010838   73900 cri.go:89] found id: ""
	I0930 21:10:43.010859   73900 logs.go:276] 0 containers: []
	W0930 21:10:43.010867   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:43.010875   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:43.010886   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:43.061264   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:43.061299   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:43.075917   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:43.075950   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:43.137088   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:43.137111   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:43.137126   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:43.219393   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:43.219440   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:45.761752   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:45.775864   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:45.775942   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:45.810693   73900 cri.go:89] found id: ""
	I0930 21:10:45.810724   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.810734   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:45.810740   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:45.810797   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:45.848360   73900 cri.go:89] found id: ""
	I0930 21:10:45.848399   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.848410   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:45.848418   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:45.848475   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:45.885504   73900 cri.go:89] found id: ""
	I0930 21:10:45.885550   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.885560   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:45.885565   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:45.885616   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:45.919747   73900 cri.go:89] found id: ""
	I0930 21:10:45.919776   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.919784   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:45.919789   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:45.919843   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:45.953787   73900 cri.go:89] found id: ""
	I0930 21:10:45.953820   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.953831   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:45.953839   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:45.953893   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:45.990145   73900 cri.go:89] found id: ""
	I0930 21:10:45.990174   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.990184   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:45.990192   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:45.990253   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:46.023359   73900 cri.go:89] found id: ""
	I0930 21:10:46.023383   73900 logs.go:276] 0 containers: []
	W0930 21:10:46.023391   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:46.023396   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:46.023447   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:46.057460   73900 cri.go:89] found id: ""
	I0930 21:10:46.057493   73900 logs.go:276] 0 containers: []
	W0930 21:10:46.057504   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:46.057514   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:46.057533   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:46.097082   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:46.097109   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:46.147921   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:46.147960   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:46.161204   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:46.161232   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:46.224308   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:46.224336   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:46.224351   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:44.568918   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:46.569353   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:48.569656   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:46.967674   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:48.967998   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:45.306917   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:47.806333   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:49.807846   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:48.805668   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:48.818569   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:48.818663   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:48.856783   73900 cri.go:89] found id: ""
	I0930 21:10:48.856815   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.856827   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:48.856834   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:48.856896   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:48.889185   73900 cri.go:89] found id: ""
	I0930 21:10:48.889217   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.889229   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:48.889236   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:48.889306   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:48.922013   73900 cri.go:89] found id: ""
	I0930 21:10:48.922041   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.922050   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:48.922055   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:48.922107   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:48.956818   73900 cri.go:89] found id: ""
	I0930 21:10:48.956848   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.956858   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:48.956866   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:48.956929   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:48.994942   73900 cri.go:89] found id: ""
	I0930 21:10:48.994975   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.994985   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:48.994991   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:48.995052   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:49.031448   73900 cri.go:89] found id: ""
	I0930 21:10:49.031479   73900 logs.go:276] 0 containers: []
	W0930 21:10:49.031491   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:49.031500   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:49.031583   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:49.066570   73900 cri.go:89] found id: ""
	I0930 21:10:49.066600   73900 logs.go:276] 0 containers: []
	W0930 21:10:49.066608   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:49.066613   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:49.066658   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:49.100952   73900 cri.go:89] found id: ""
	I0930 21:10:49.100981   73900 logs.go:276] 0 containers: []
	W0930 21:10:49.100992   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:49.101000   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:49.101010   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:49.176423   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:49.176458   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:49.212358   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:49.212387   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:49.263177   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:49.263227   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:49.275940   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:49.275969   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:49.346915   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:51.847761   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:51.860571   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:51.860646   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:51.894863   73900 cri.go:89] found id: ""
	I0930 21:10:51.894896   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.894906   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:51.894914   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:51.894978   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:51.927977   73900 cri.go:89] found id: ""
	I0930 21:10:51.928007   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.928018   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:51.928025   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:51.928083   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:51.962894   73900 cri.go:89] found id: ""
	I0930 21:10:51.962924   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.962933   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:51.962940   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:51.962999   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:51.998453   73900 cri.go:89] found id: ""
	I0930 21:10:51.998482   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.998493   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:51.998500   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:51.998562   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:52.033039   73900 cri.go:89] found id: ""
	I0930 21:10:52.033066   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.033075   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:52.033080   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:52.033139   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:52.067222   73900 cri.go:89] found id: ""
	I0930 21:10:52.067254   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.067267   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:52.067274   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:52.067341   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:52.102414   73900 cri.go:89] found id: ""
	I0930 21:10:52.102439   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.102448   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:52.102453   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:52.102498   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:52.135175   73900 cri.go:89] found id: ""
	I0930 21:10:52.135204   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.135214   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:52.135225   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:52.135239   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:52.185736   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:52.185779   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:52.198756   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:52.198792   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:52.264816   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:52.264847   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:52.264859   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:52.347189   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:52.347229   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
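
The cycle above is minikube's control-plane diagnostic loop: it looks for a running kube-apiserver process, then asks the CRI runtime (via crictl) for containers matching each expected control-plane component, and finds none. A minimal sketch of the same check, run manually on the node, using only commands that appear in the log; the component list is copied from the entries above, and the loop itself is illustrative rather than minikube's own code:

    # assumes crictl is on PATH and CRI-O is the runtime, as in the log above
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")   # same query minikube issues
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done

On this node every iteration prints the "no container found" branch, which is why each cycle ends in the log-gathering fallback that follows.
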
	I0930 21:10:50.569765   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:53.068745   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:50.968885   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:52.970855   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:52.307245   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:54.308516   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:54.887502   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:54.900067   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:54.900153   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:54.939214   73900 cri.go:89] found id: ""
	I0930 21:10:54.939241   73900 logs.go:276] 0 containers: []
	W0930 21:10:54.939249   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:54.939259   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:54.939313   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:54.973451   73900 cri.go:89] found id: ""
	I0930 21:10:54.973475   73900 logs.go:276] 0 containers: []
	W0930 21:10:54.973483   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:54.973488   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:54.973541   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:55.007815   73900 cri.go:89] found id: ""
	I0930 21:10:55.007841   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.007850   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:55.007855   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:55.007914   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:55.040861   73900 cri.go:89] found id: ""
	I0930 21:10:55.040891   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.040899   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:55.040905   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:55.040957   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:55.076053   73900 cri.go:89] found id: ""
	I0930 21:10:55.076086   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.076098   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:55.076111   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:55.076172   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:55.108768   73900 cri.go:89] found id: ""
	I0930 21:10:55.108797   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.108807   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:55.108814   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:55.108879   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:55.155283   73900 cri.go:89] found id: ""
	I0930 21:10:55.155316   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.155331   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:55.155338   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:55.155398   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:55.189370   73900 cri.go:89] found id: ""
	I0930 21:10:55.189399   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.189408   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:55.189416   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:55.189432   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:55.243067   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:55.243101   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:55.257021   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:55.257051   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:55.329381   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:55.329408   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:55.329423   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:55.405691   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:55.405762   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:55.069901   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:57.568914   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:55.468489   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:57.977733   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:56.806381   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:58.806880   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:57.957380   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:57.971160   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:57.971245   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:58.004401   73900 cri.go:89] found id: ""
	I0930 21:10:58.004446   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.004457   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:58.004465   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:58.004524   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:58.038954   73900 cri.go:89] found id: ""
	I0930 21:10:58.038978   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.038986   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:58.038991   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:58.039036   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:58.072801   73900 cri.go:89] found id: ""
	I0930 21:10:58.072830   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.072842   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:58.072849   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:58.072909   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:58.104908   73900 cri.go:89] found id: ""
	I0930 21:10:58.104936   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.104946   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:58.104953   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:58.105014   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:58.139693   73900 cri.go:89] found id: ""
	I0930 21:10:58.139725   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.139735   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:58.139741   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:58.139795   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:58.174149   73900 cri.go:89] found id: ""
	I0930 21:10:58.174180   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.174192   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:58.174199   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:58.174275   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:58.206067   73900 cri.go:89] found id: ""
	I0930 21:10:58.206094   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.206105   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:58.206112   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:58.206167   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:58.240613   73900 cri.go:89] found id: ""
	I0930 21:10:58.240645   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.240653   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:58.240661   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:58.240674   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:58.306061   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:58.306086   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:58.306100   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:58.386030   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:58.386073   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:58.425526   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:58.425562   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:58.483364   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:58.483409   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
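
When the container queries come back empty, minikube falls back to collecting host-level logs: the kubelet and CRI-O journals, recent dmesg warnings, and the raw container status. A one-shot sketch that gathers the same set, with the commands copied verbatim from the log; the output file names are illustrative assumptions, not part of the original run:

    # collect the same fallback logs minikube gathers above
    sudo journalctl -u kubelet -n 400 > kubelet.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo journalctl -u crio -n 400 > crio.log
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
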
	I0930 21:11:00.998086   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:01.011934   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:01.012015   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:01.047923   73900 cri.go:89] found id: ""
	I0930 21:11:01.047951   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.047960   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:01.047966   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:01.048024   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:01.082126   73900 cri.go:89] found id: ""
	I0930 21:11:01.082159   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.082170   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:01.082176   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:01.082224   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:01.117746   73900 cri.go:89] found id: ""
	I0930 21:11:01.117775   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.117787   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:01.117794   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:01.117853   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:01.153034   73900 cri.go:89] found id: ""
	I0930 21:11:01.153059   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.153067   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:01.153072   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:01.153128   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:01.188102   73900 cri.go:89] found id: ""
	I0930 21:11:01.188125   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.188133   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:01.188139   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:01.188193   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:01.222120   73900 cri.go:89] found id: ""
	I0930 21:11:01.222147   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.222155   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:01.222161   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:01.222215   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:01.258899   73900 cri.go:89] found id: ""
	I0930 21:11:01.258929   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.258941   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:01.258949   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:01.259008   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:01.295473   73900 cri.go:89] found id: ""
	I0930 21:11:01.295504   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.295512   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:01.295521   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:01.295551   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:01.349134   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:01.349181   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:01.363113   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:01.363147   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:01.436589   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:01.436609   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:01.436622   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:01.516384   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:01.516420   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:00.069406   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:02.568203   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:00.468104   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:02.968911   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:00.807318   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:03.307184   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:04.075114   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:04.089300   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:04.089375   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:04.124385   73900 cri.go:89] found id: ""
	I0930 21:11:04.124411   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.124419   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:04.124425   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:04.124491   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:04.158326   73900 cri.go:89] found id: ""
	I0930 21:11:04.158359   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.158367   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:04.158372   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:04.158419   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:04.193477   73900 cri.go:89] found id: ""
	I0930 21:11:04.193507   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.193516   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:04.193521   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:04.193577   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:04.231697   73900 cri.go:89] found id: ""
	I0930 21:11:04.231723   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.231731   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:04.231737   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:04.231805   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:04.265879   73900 cri.go:89] found id: ""
	I0930 21:11:04.265903   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.265910   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:04.265915   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:04.265960   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:04.301382   73900 cri.go:89] found id: ""
	I0930 21:11:04.301421   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.301432   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:04.301440   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:04.301505   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:04.337496   73900 cri.go:89] found id: ""
	I0930 21:11:04.337521   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.337529   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:04.337534   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:04.337584   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:04.372631   73900 cri.go:89] found id: ""
	I0930 21:11:04.372665   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.372677   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:04.372700   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:04.372715   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:04.385279   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:04.385311   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:04.456700   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:04.456721   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:04.456732   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:04.537892   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:04.537933   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:04.574919   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:04.574947   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:07.128733   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:07.142625   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:07.142687   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:07.177450   73900 cri.go:89] found id: ""
	I0930 21:11:07.177475   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.177483   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:07.177488   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:07.177536   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:07.210158   73900 cri.go:89] found id: ""
	I0930 21:11:07.210184   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.210192   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:07.210197   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:07.210256   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:07.242623   73900 cri.go:89] found id: ""
	I0930 21:11:07.242648   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.242656   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:07.242661   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:07.242705   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:07.277779   73900 cri.go:89] found id: ""
	I0930 21:11:07.277810   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.277821   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:07.277827   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:07.277881   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:07.316232   73900 cri.go:89] found id: ""
	I0930 21:11:07.316257   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.316263   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:07.316269   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:07.316326   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:07.360277   73900 cri.go:89] found id: ""
	I0930 21:11:07.360311   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.360322   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:07.360329   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:07.360391   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:07.412146   73900 cri.go:89] found id: ""
	I0930 21:11:07.412171   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.412181   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:07.412187   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:07.412247   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:07.447179   73900 cri.go:89] found id: ""
	I0930 21:11:07.447209   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.447217   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:07.447225   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:07.447235   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:07.496304   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:07.496340   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:07.510332   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:07.510373   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:07.581335   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:07.581375   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:07.581393   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:07.664522   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:07.664558   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:04.568787   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:07.069201   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:09.070583   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:05.468251   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:07.970913   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:05.308084   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:07.807712   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:10.201145   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:10.213605   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:10.213663   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:10.247875   73900 cri.go:89] found id: ""
	I0930 21:11:10.247904   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.247913   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:10.247918   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:10.247966   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:10.280855   73900 cri.go:89] found id: ""
	I0930 21:11:10.280889   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.280900   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:10.280907   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:10.280967   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:10.315638   73900 cri.go:89] found id: ""
	I0930 21:11:10.315661   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.315669   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:10.315675   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:10.315722   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:10.357059   73900 cri.go:89] found id: ""
	I0930 21:11:10.357086   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.357094   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:10.357100   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:10.357154   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:10.389969   73900 cri.go:89] found id: ""
	I0930 21:11:10.389997   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.390004   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:10.390009   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:10.390060   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:10.424424   73900 cri.go:89] found id: ""
	I0930 21:11:10.424454   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.424463   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:10.424469   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:10.424533   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:10.457608   73900 cri.go:89] found id: ""
	I0930 21:11:10.457638   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.457650   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:10.457657   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:10.457712   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:10.490215   73900 cri.go:89] found id: ""
	I0930 21:11:10.490244   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.490253   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:10.490263   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:10.490278   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:10.554787   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:10.554814   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:10.554829   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:10.632428   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:10.632464   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:10.671018   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:10.671054   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:10.721187   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:10.721228   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:11.568643   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:13.568765   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:10.469296   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:12.968274   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:10.307487   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:12.307960   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:14.808087   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:13.234687   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:13.250680   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:13.250778   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:13.312468   73900 cri.go:89] found id: ""
	I0930 21:11:13.312499   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.312509   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:13.312516   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:13.312578   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:13.367051   73900 cri.go:89] found id: ""
	I0930 21:11:13.367073   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.367084   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:13.367091   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:13.367149   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:13.403019   73900 cri.go:89] found id: ""
	I0930 21:11:13.403055   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.403066   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:13.403074   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:13.403135   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:13.436942   73900 cri.go:89] found id: ""
	I0930 21:11:13.436967   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.436975   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:13.436981   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:13.437047   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:13.470491   73900 cri.go:89] found id: ""
	I0930 21:11:13.470515   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.470523   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:13.470528   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:13.470619   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:13.504078   73900 cri.go:89] found id: ""
	I0930 21:11:13.504112   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.504121   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:13.504127   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:13.504201   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:13.536245   73900 cri.go:89] found id: ""
	I0930 21:11:13.536271   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.536292   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:13.536297   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:13.536357   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:13.570794   73900 cri.go:89] found id: ""
	I0930 21:11:13.570817   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.570827   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:13.570836   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:13.570850   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:13.647919   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:13.647941   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:13.647956   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:13.726113   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:13.726150   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:13.767916   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:13.767942   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:13.826362   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:13.826402   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:16.341252   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:16.354259   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:16.354344   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:16.388627   73900 cri.go:89] found id: ""
	I0930 21:11:16.388650   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.388658   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:16.388663   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:16.388714   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:16.424848   73900 cri.go:89] found id: ""
	I0930 21:11:16.424871   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.424878   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:16.424883   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:16.424941   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:16.460604   73900 cri.go:89] found id: ""
	I0930 21:11:16.460626   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.460635   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:16.460640   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:16.460688   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:16.495908   73900 cri.go:89] found id: ""
	I0930 21:11:16.495932   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.495940   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:16.495946   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:16.496000   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:16.531758   73900 cri.go:89] found id: ""
	I0930 21:11:16.531782   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.531790   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:16.531796   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:16.531853   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:16.566756   73900 cri.go:89] found id: ""
	I0930 21:11:16.566782   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.566792   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:16.566799   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:16.566864   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:16.601978   73900 cri.go:89] found id: ""
	I0930 21:11:16.602005   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.602012   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:16.602022   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:16.602081   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:16.636009   73900 cri.go:89] found id: ""
	I0930 21:11:16.636044   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.636056   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:16.636066   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:16.636079   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:16.688750   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:16.688786   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:16.702364   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:16.702404   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:16.767119   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:16.767175   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:16.767188   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:16.842052   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:16.842095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:15.571440   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:18.068441   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:15.469030   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:17.970779   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:17.307424   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:19.807193   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:19.380570   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:19.394687   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:19.394816   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:19.427087   73900 cri.go:89] found id: ""
	I0930 21:11:19.427116   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.427124   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:19.427129   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:19.427178   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:19.461074   73900 cri.go:89] found id: ""
	I0930 21:11:19.461098   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.461108   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:19.461122   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:19.461183   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:19.494850   73900 cri.go:89] found id: ""
	I0930 21:11:19.494872   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.494880   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:19.494885   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:19.494943   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:19.533448   73900 cri.go:89] found id: ""
	I0930 21:11:19.533480   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.533493   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:19.533500   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:19.533562   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:19.569250   73900 cri.go:89] found id: ""
	I0930 21:11:19.569280   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.569291   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:19.569298   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:19.569383   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:19.603182   73900 cri.go:89] found id: ""
	I0930 21:11:19.603206   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.603213   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:19.603219   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:19.603268   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:19.637411   73900 cri.go:89] found id: ""
	I0930 21:11:19.637433   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.637441   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:19.637447   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:19.637500   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:19.672789   73900 cri.go:89] found id: ""
	I0930 21:11:19.672821   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.672831   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:19.672841   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:19.672854   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:19.755002   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:19.755039   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:19.796499   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:19.796536   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:19.847235   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:19.847272   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:19.861007   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:19.861032   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:19.931214   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
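
Each "describe nodes" attempt in these cycles fails identically because nothing is listening on localhost:8443, the apiserver port the kubeconfig points at; the connection refusal, not the kubectl invocation, is the signal. A sketch of the single probe behind those blocks, taken verbatim from the log (in this state it exits 1 with the same "connection to the server localhost:8443 was refused" message, indicating the apiserver is down rather than a kubeconfig problem):

    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    echo "exit status: $?"
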
	I0930 21:11:22.431506   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:22.446129   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:22.446199   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:22.484093   73900 cri.go:89] found id: ""
	I0930 21:11:22.484119   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.484126   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:22.484132   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:22.484183   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:22.516949   73900 cri.go:89] found id: ""
	I0930 21:11:22.516986   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.516994   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:22.517001   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:22.517056   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:22.550848   73900 cri.go:89] found id: ""
	I0930 21:11:22.550883   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.550898   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:22.550906   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:22.550966   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:22.586459   73900 cri.go:89] found id: ""
	I0930 21:11:22.586490   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.586498   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:22.586505   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:22.586627   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:22.620538   73900 cri.go:89] found id: ""
	I0930 21:11:22.620566   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.620578   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:22.620586   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:22.620651   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:22.658256   73900 cri.go:89] found id: ""
	I0930 21:11:22.658279   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.658287   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:22.658292   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:22.658352   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:22.690316   73900 cri.go:89] found id: ""
	I0930 21:11:22.690349   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.690365   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:22.690371   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:22.690431   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:22.724234   73900 cri.go:89] found id: ""
	I0930 21:11:22.724264   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.724275   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:22.724285   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:22.724299   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:20.570198   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:23.072974   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:20.468122   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:22.968686   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:22.307398   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:24.806972   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:22.777460   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:22.777503   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:22.790850   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:22.790879   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:22.866058   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:22.866079   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:22.866095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:22.947447   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:22.947488   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:25.486733   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:25.499906   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:25.499976   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:25.533819   73900 cri.go:89] found id: ""
	I0930 21:11:25.533842   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.533850   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:25.533857   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:25.533906   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:25.568037   73900 cri.go:89] found id: ""
	I0930 21:11:25.568059   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.568066   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:25.568071   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:25.568129   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:25.601784   73900 cri.go:89] found id: ""
	I0930 21:11:25.601811   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.601819   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:25.601824   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:25.601876   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:25.638048   73900 cri.go:89] found id: ""
	I0930 21:11:25.638070   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.638078   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:25.638084   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:25.638140   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:25.669946   73900 cri.go:89] found id: ""
	I0930 21:11:25.669968   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.669976   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:25.669981   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:25.670028   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:25.701928   73900 cri.go:89] found id: ""
	I0930 21:11:25.701953   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.701961   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:25.701967   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:25.702025   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:25.744295   73900 cri.go:89] found id: ""
	I0930 21:11:25.744327   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.744335   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:25.744341   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:25.744398   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:25.780175   73900 cri.go:89] found id: ""
	I0930 21:11:25.780205   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.780213   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:25.780221   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:25.780232   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:25.828774   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:25.828812   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:25.842624   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:25.842649   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:25.916408   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:25.916451   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:25.916469   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:25.997896   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:25.997932   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:25.570148   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:28.068628   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:25.467356   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:27.467782   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:29.467936   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:27.306939   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:29.807156   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:28.540994   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:28.553841   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:28.553904   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:28.588718   73900 cri.go:89] found id: ""
	I0930 21:11:28.588745   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.588754   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:28.588763   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:28.588809   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:28.636210   73900 cri.go:89] found id: ""
	I0930 21:11:28.636237   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.636245   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:28.636250   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:28.636312   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:28.668714   73900 cri.go:89] found id: ""
	I0930 21:11:28.668743   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.668751   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:28.668757   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:28.668804   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:28.700413   73900 cri.go:89] found id: ""
	I0930 21:11:28.700449   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.700462   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:28.700469   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:28.700522   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:28.733409   73900 cri.go:89] found id: ""
	I0930 21:11:28.733433   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.733441   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:28.733446   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:28.733494   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:28.766917   73900 cri.go:89] found id: ""
	I0930 21:11:28.766957   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.766970   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:28.766979   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:28.767046   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:28.801759   73900 cri.go:89] found id: ""
	I0930 21:11:28.801788   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.801798   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:28.801805   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:28.801851   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:28.840724   73900 cri.go:89] found id: ""
	I0930 21:11:28.840761   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.840770   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:28.840790   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:28.840805   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:28.854426   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:28.854465   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:28.926650   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:28.926675   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:28.926690   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:29.005513   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:29.005569   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:29.047077   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:29.047102   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:31.603193   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:31.615563   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:31.615631   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:31.647656   73900 cri.go:89] found id: ""
	I0930 21:11:31.647685   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.647693   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:31.647699   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:31.647748   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:31.680004   73900 cri.go:89] found id: ""
	I0930 21:11:31.680037   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.680048   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:31.680056   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:31.680120   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:31.712562   73900 cri.go:89] found id: ""
	I0930 21:11:31.712588   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.712596   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:31.712602   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:31.712650   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:31.747692   73900 cri.go:89] found id: ""
	I0930 21:11:31.747724   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.747732   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:31.747738   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:31.747803   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:31.781441   73900 cri.go:89] found id: ""
	I0930 21:11:31.781464   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.781472   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:31.781478   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:31.781532   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:31.822227   73900 cri.go:89] found id: ""
	I0930 21:11:31.822252   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.822259   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:31.822265   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:31.822322   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:31.856531   73900 cri.go:89] found id: ""
	I0930 21:11:31.856555   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.856563   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:31.856568   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:31.856631   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:31.894562   73900 cri.go:89] found id: ""
	I0930 21:11:31.894585   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.894593   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:31.894602   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:31.894618   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:31.946233   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:31.946271   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:31.960713   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:31.960744   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:32.036479   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:32.036497   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:32.036509   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:32.111442   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:32.111477   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:30.068975   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:32.069794   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:31.468374   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:33.468986   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:31.809169   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:34.307372   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:34.651545   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:34.664058   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:34.664121   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:34.697506   73900 cri.go:89] found id: ""
	I0930 21:11:34.697530   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.697539   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:34.697545   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:34.697599   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:34.730297   73900 cri.go:89] found id: ""
	I0930 21:11:34.730326   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.730334   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:34.730339   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:34.730390   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:34.762251   73900 cri.go:89] found id: ""
	I0930 21:11:34.762278   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.762286   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:34.762291   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:34.762358   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:34.803028   73900 cri.go:89] found id: ""
	I0930 21:11:34.803058   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.803068   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:34.803074   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:34.803122   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:34.840063   73900 cri.go:89] found id: ""
	I0930 21:11:34.840097   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.840110   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:34.840118   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:34.840192   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:34.878641   73900 cri.go:89] found id: ""
	I0930 21:11:34.878675   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.878686   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:34.878693   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:34.878745   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:34.910799   73900 cri.go:89] found id: ""
	I0930 21:11:34.910823   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.910830   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:34.910837   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:34.910899   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:34.947748   73900 cri.go:89] found id: ""
	I0930 21:11:34.947782   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.947795   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:34.947806   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:34.947821   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:35.026490   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:35.026514   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:35.026529   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:35.115504   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:35.115559   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:35.158629   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:35.158659   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:35.211011   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:35.211052   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:37.726260   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:37.739137   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:37.739222   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:34.568166   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:36.569720   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:39.069371   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:35.968574   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:38.467872   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:36.807057   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:38.807376   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:37.779980   73900 cri.go:89] found id: ""
	I0930 21:11:37.780009   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.780018   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:37.780024   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:37.780076   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:37.813936   73900 cri.go:89] found id: ""
	I0930 21:11:37.813961   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.813969   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:37.813975   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:37.814021   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:37.851150   73900 cri.go:89] found id: ""
	I0930 21:11:37.851176   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.851186   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:37.851193   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:37.851256   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:37.891855   73900 cri.go:89] found id: ""
	I0930 21:11:37.891881   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.891889   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:37.891894   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:37.891943   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:37.929234   73900 cri.go:89] found id: ""
	I0930 21:11:37.929269   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.929281   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:37.929288   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:37.929359   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:37.962350   73900 cri.go:89] found id: ""
	I0930 21:11:37.962378   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.962386   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:37.962391   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:37.962441   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:37.996727   73900 cri.go:89] found id: ""
	I0930 21:11:37.996752   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.996760   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:37.996765   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:37.996819   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:38.029959   73900 cri.go:89] found id: ""
	I0930 21:11:38.029991   73900 logs.go:276] 0 containers: []
	W0930 21:11:38.029999   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:38.030008   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:38.030019   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:38.079836   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:38.079875   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:38.093208   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:38.093236   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:38.168839   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:38.168862   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:38.168873   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:38.244747   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:38.244783   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:40.788841   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:40.802419   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:40.802491   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:40.837138   73900 cri.go:89] found id: ""
	I0930 21:11:40.837175   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.837186   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:40.837193   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:40.837255   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:40.870947   73900 cri.go:89] found id: ""
	I0930 21:11:40.870977   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.870987   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:40.870993   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:40.871040   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:40.905004   73900 cri.go:89] found id: ""
	I0930 21:11:40.905033   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.905046   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:40.905053   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:40.905104   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:40.936909   73900 cri.go:89] found id: ""
	I0930 21:11:40.936937   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.936945   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:40.936952   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:40.937015   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:40.972601   73900 cri.go:89] found id: ""
	I0930 21:11:40.972630   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.972641   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:40.972646   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:40.972704   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:41.007539   73900 cri.go:89] found id: ""
	I0930 21:11:41.007583   73900 logs.go:276] 0 containers: []
	W0930 21:11:41.007594   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:41.007602   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:41.007661   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:41.042049   73900 cri.go:89] found id: ""
	I0930 21:11:41.042075   73900 logs.go:276] 0 containers: []
	W0930 21:11:41.042084   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:41.042091   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:41.042153   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:41.075313   73900 cri.go:89] found id: ""
	I0930 21:11:41.075398   73900 logs.go:276] 0 containers: []
	W0930 21:11:41.075414   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:41.075424   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:41.075440   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:41.128683   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:41.128726   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:41.142533   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:41.142560   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:41.210149   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:41.210176   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:41.210191   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:41.286547   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:41.286590   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:41.070042   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:43.570819   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:40.969912   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:43.468434   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:40.808294   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:43.307628   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:43.828902   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:43.842047   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:43.842127   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:43.876147   73900 cri.go:89] found id: ""
	I0930 21:11:43.876177   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.876187   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:43.876194   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:43.876287   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:43.916351   73900 cri.go:89] found id: ""
	I0930 21:11:43.916383   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.916394   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:43.916404   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:43.916457   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:43.948853   73900 cri.go:89] found id: ""
	I0930 21:11:43.948883   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.948894   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:43.948900   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:43.948967   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:43.983525   73900 cri.go:89] found id: ""
	I0930 21:11:43.983577   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.983589   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:43.983597   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:43.983656   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:44.021560   73900 cri.go:89] found id: ""
	I0930 21:11:44.021594   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.021606   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:44.021614   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:44.021684   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:44.057307   73900 cri.go:89] found id: ""
	I0930 21:11:44.057342   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.057353   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:44.057361   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:44.057418   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:44.091120   73900 cri.go:89] found id: ""
	I0930 21:11:44.091145   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.091155   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:44.091162   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:44.091223   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:44.125781   73900 cri.go:89] found id: ""
	I0930 21:11:44.125808   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.125817   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:44.125827   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:44.125842   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:44.138699   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:44.138726   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:44.208976   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:44.209009   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:44.209026   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:44.285552   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:44.285593   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:44.323412   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:44.323449   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:46.875210   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:46.888532   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:46.888596   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:46.921260   73900 cri.go:89] found id: ""
	I0930 21:11:46.921285   73900 logs.go:276] 0 containers: []
	W0930 21:11:46.921293   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:46.921299   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:46.921357   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:46.954645   73900 cri.go:89] found id: ""
	I0930 21:11:46.954675   73900 logs.go:276] 0 containers: []
	W0930 21:11:46.954683   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:46.954688   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:46.954749   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:46.988424   73900 cri.go:89] found id: ""
	I0930 21:11:46.988457   73900 logs.go:276] 0 containers: []
	W0930 21:11:46.988468   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:46.988475   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:46.988535   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:47.022635   73900 cri.go:89] found id: ""
	I0930 21:11:47.022664   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.022675   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:47.022682   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:47.022744   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:47.056497   73900 cri.go:89] found id: ""
	I0930 21:11:47.056523   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.056530   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:47.056536   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:47.056595   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:47.094983   73900 cri.go:89] found id: ""
	I0930 21:11:47.095011   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.095021   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:47.095028   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:47.095097   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:47.147567   73900 cri.go:89] found id: ""
	I0930 21:11:47.147595   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.147606   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:47.147613   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:47.147692   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:47.184878   73900 cri.go:89] found id: ""
	I0930 21:11:47.184908   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.184919   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:47.184930   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:47.184943   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:47.258581   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:47.258615   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:47.303068   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:47.303100   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:47.358749   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:47.358789   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:47.372492   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:47.372531   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:47.443984   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:46.069421   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:48.569013   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:45.968422   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:47.968876   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:45.808341   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:48.306627   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:49.944644   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:49.958045   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:49.958124   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:49.993053   73900 cri.go:89] found id: ""
	I0930 21:11:49.993088   73900 logs.go:276] 0 containers: []
	W0930 21:11:49.993100   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:49.993107   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:49.993168   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:50.026171   73900 cri.go:89] found id: ""
	I0930 21:11:50.026197   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.026205   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:50.026210   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:50.026269   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:50.060462   73900 cri.go:89] found id: ""
	I0930 21:11:50.060492   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.060502   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:50.060509   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:50.060567   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:50.095385   73900 cri.go:89] found id: ""
	I0930 21:11:50.095414   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.095425   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:50.095432   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:50.095507   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:50.127275   73900 cri.go:89] found id: ""
	I0930 21:11:50.127300   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.127308   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:50.127318   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:50.127378   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:50.159810   73900 cri.go:89] found id: ""
	I0930 21:11:50.159836   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.159845   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:50.159850   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:50.159906   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:50.191651   73900 cri.go:89] found id: ""
	I0930 21:11:50.191684   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.191695   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:50.191702   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:50.191774   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:50.225772   73900 cri.go:89] found id: ""
	I0930 21:11:50.225799   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.225809   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:50.225819   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:50.225837   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:50.310189   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:50.310223   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:50.348934   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:50.348965   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:50.400666   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:50.400703   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:50.415810   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:50.415843   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:50.483773   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:51.069928   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:53.070065   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:50.469516   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:52.968367   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:54.968624   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:50.307903   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:52.807610   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:52.984701   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:52.997669   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:52.997745   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:53.034012   73900 cri.go:89] found id: ""
	I0930 21:11:53.034044   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.034055   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:53.034063   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:53.034121   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:53.068192   73900 cri.go:89] found id: ""
	I0930 21:11:53.068215   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.068222   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:53.068228   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:53.068285   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:53.104683   73900 cri.go:89] found id: ""
	I0930 21:11:53.104710   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.104719   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:53.104724   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:53.104778   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:53.138713   73900 cri.go:89] found id: ""
	I0930 21:11:53.138745   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.138753   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:53.138759   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:53.138814   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:53.173955   73900 cri.go:89] found id: ""
	I0930 21:11:53.173982   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.173994   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:53.174001   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:53.174060   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:53.205942   73900 cri.go:89] found id: ""
	I0930 21:11:53.205970   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.205980   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:53.205987   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:53.206052   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:53.241739   73900 cri.go:89] found id: ""
	I0930 21:11:53.241767   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.241776   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:53.241782   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:53.241832   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:53.275328   73900 cri.go:89] found id: ""
	I0930 21:11:53.275363   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.275372   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:53.275381   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:53.275397   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:53.313732   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:53.313761   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:53.364974   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:53.365011   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:53.377970   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:53.377999   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:53.445341   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:53.445370   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:53.445388   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:56.025958   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:56.038367   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:56.038434   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:56.074721   73900 cri.go:89] found id: ""
	I0930 21:11:56.074756   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.074767   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:56.074781   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:56.074846   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:56.111491   73900 cri.go:89] found id: ""
	I0930 21:11:56.111525   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.111550   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:56.111572   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:56.111626   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:56.145660   73900 cri.go:89] found id: ""
	I0930 21:11:56.145690   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.145701   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:56.145708   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:56.145769   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:56.180865   73900 cri.go:89] found id: ""
	I0930 21:11:56.180891   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.180901   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:56.180908   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:56.180971   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:56.213681   73900 cri.go:89] found id: ""
	I0930 21:11:56.213707   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.213716   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:56.213721   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:56.213772   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:56.246683   73900 cri.go:89] found id: ""
	I0930 21:11:56.246711   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.246719   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:56.246724   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:56.246774   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:56.279651   73900 cri.go:89] found id: ""
	I0930 21:11:56.279679   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.279687   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:56.279692   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:56.279746   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:56.316701   73900 cri.go:89] found id: ""
	I0930 21:11:56.316727   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.316735   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:56.316743   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:56.316753   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:56.329879   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:56.329905   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:56.399919   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:56.399949   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:56.399964   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:56.480200   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:56.480237   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:56.517755   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:56.517782   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:55.568782   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:58.068718   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:57.468492   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:59.968123   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:55.307809   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:57.308095   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:59.807355   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:59.070677   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:59.085884   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:59.085956   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:59.119580   73900 cri.go:89] found id: ""
	I0930 21:11:59.119606   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.119615   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:59.119621   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:59.119667   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:59.152087   73900 cri.go:89] found id: ""
	I0930 21:11:59.152111   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.152120   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:59.152127   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:59.152172   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:59.186177   73900 cri.go:89] found id: ""
	I0930 21:11:59.186205   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.186213   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:59.186220   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:59.186276   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:59.218800   73900 cri.go:89] found id: ""
	I0930 21:11:59.218821   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.218829   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:59.218835   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:59.218893   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:59.254335   73900 cri.go:89] found id: ""
	I0930 21:11:59.254361   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.254372   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:59.254378   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:59.254432   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:59.292406   73900 cri.go:89] found id: ""
	I0930 21:11:59.292441   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.292453   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:59.292460   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:59.292522   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:59.333352   73900 cri.go:89] found id: ""
	I0930 21:11:59.333388   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.333399   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:59.333406   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:59.333481   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:59.377031   73900 cri.go:89] found id: ""
	I0930 21:11:59.377056   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.377064   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:59.377072   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:59.377084   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:59.392626   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:59.392655   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:59.473714   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:59.473741   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:59.473754   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:59.548895   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:59.548931   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:59.589007   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:59.589039   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:02.139243   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:02.152335   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:02.152415   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:02.186942   73900 cri.go:89] found id: ""
	I0930 21:12:02.186980   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.186991   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:02.186999   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:02.187061   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:02.219738   73900 cri.go:89] found id: ""
	I0930 21:12:02.219759   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.219768   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:02.219773   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:02.219820   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:02.253667   73900 cri.go:89] found id: ""
	I0930 21:12:02.253698   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.253707   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:02.253712   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:02.253760   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:02.290078   73900 cri.go:89] found id: ""
	I0930 21:12:02.290105   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.290115   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:02.290122   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:02.290182   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:02.326408   73900 cri.go:89] found id: ""
	I0930 21:12:02.326436   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.326448   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:02.326455   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:02.326509   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:02.360608   73900 cri.go:89] found id: ""
	I0930 21:12:02.360641   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.360649   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:02.360655   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:02.360714   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:02.396140   73900 cri.go:89] found id: ""
	I0930 21:12:02.396166   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.396176   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:02.396182   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:02.396236   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:02.429905   73900 cri.go:89] found id: ""
	I0930 21:12:02.429947   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.429958   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:02.429968   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:02.429986   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:02.506600   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:02.506645   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:02.549325   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:02.549354   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:02.603614   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:02.603659   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:02.618832   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:02.618859   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:02.692491   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:00.070569   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:02.569436   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:01.968240   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:04.468583   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:02.306973   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:04.308182   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:05.193131   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:05.206133   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:05.206192   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:05.238403   73900 cri.go:89] found id: ""
	I0930 21:12:05.238431   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.238439   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:05.238447   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:05.238523   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:05.271261   73900 cri.go:89] found id: ""
	I0930 21:12:05.271290   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.271303   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:05.271310   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:05.271378   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:05.307718   73900 cri.go:89] found id: ""
	I0930 21:12:05.307749   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.307760   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:05.307767   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:05.307832   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:05.341336   73900 cri.go:89] found id: ""
	I0930 21:12:05.341379   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.341390   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:05.341398   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:05.341461   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:05.374998   73900 cri.go:89] found id: ""
	I0930 21:12:05.375024   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.375032   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:05.375037   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:05.375085   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:05.410133   73900 cri.go:89] found id: ""
	I0930 21:12:05.410163   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.410174   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:05.410182   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:05.410248   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:05.446197   73900 cri.go:89] found id: ""
	I0930 21:12:05.446227   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.446238   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:05.446246   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:05.446305   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:05.480638   73900 cri.go:89] found id: ""
	I0930 21:12:05.480667   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.480683   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:05.480691   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:05.480702   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:05.532473   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:05.532512   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:05.547068   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:05.547096   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:05.621444   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:05.621472   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:05.621487   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:05.707712   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:05.707767   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:05.068363   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:07.069531   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:06.969695   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:06.969727   73375 pod_ready.go:82] duration metric: took 4m0.008001407s for pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace to be "Ready" ...
	E0930 21:12:06.969736   73375 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0930 21:12:06.969743   73375 pod_ready.go:39] duration metric: took 4m4.053054405s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:12:06.969757   73375 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:12:06.969781   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:06.969835   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:07.024708   73375 cri.go:89] found id: "249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:07.024730   73375 cri.go:89] found id: ""
	I0930 21:12:07.024737   73375 logs.go:276] 1 containers: [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122]
	I0930 21:12:07.024805   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.029375   73375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:07.029439   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:07.063656   73375 cri.go:89] found id: "e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:07.063684   73375 cri.go:89] found id: ""
	I0930 21:12:07.063695   73375 logs.go:276] 1 containers: [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c]
	I0930 21:12:07.063754   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.068071   73375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:07.068126   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:07.102636   73375 cri.go:89] found id: "d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:07.102665   73375 cri.go:89] found id: ""
	I0930 21:12:07.102675   73375 logs.go:276] 1 containers: [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7]
	I0930 21:12:07.102733   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.106711   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:07.106791   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:07.142676   73375 cri.go:89] found id: "438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:07.142698   73375 cri.go:89] found id: ""
	I0930 21:12:07.142708   73375 logs.go:276] 1 containers: [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c]
	I0930 21:12:07.142766   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.146979   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:07.147041   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:07.189192   73375 cri.go:89] found id: "a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:07.189223   73375 cri.go:89] found id: ""
	I0930 21:12:07.189232   73375 logs.go:276] 1 containers: [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f]
	I0930 21:12:07.189283   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.193408   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:07.193484   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:07.230538   73375 cri.go:89] found id: "1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:07.230562   73375 cri.go:89] found id: ""
	I0930 21:12:07.230571   73375 logs.go:276] 1 containers: [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf]
	I0930 21:12:07.230630   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.235482   73375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:07.235573   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:07.274180   73375 cri.go:89] found id: ""
	I0930 21:12:07.274215   73375 logs.go:276] 0 containers: []
	W0930 21:12:07.274226   73375 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:07.274233   73375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:07.274312   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:07.312851   73375 cri.go:89] found id: "6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:07.312876   73375 cri.go:89] found id: "298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:07.312882   73375 cri.go:89] found id: ""
	I0930 21:12:07.312890   73375 logs.go:276] 2 containers: [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e]
	I0930 21:12:07.312947   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.317386   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.321912   73375 logs.go:123] Gathering logs for kube-proxy [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f] ...
	I0930 21:12:07.321940   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:07.361674   73375 logs.go:123] Gathering logs for storage-provisioner [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55] ...
	I0930 21:12:07.361701   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:07.398555   73375 logs.go:123] Gathering logs for storage-provisioner [298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e] ...
	I0930 21:12:07.398615   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:07.432511   73375 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:07.432540   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:07.919639   73375 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:07.919678   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:07.935038   73375 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:07.935067   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:08.059404   73375 logs.go:123] Gathering logs for kube-apiserver [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122] ...
	I0930 21:12:08.059435   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:08.114569   73375 logs.go:123] Gathering logs for kube-scheduler [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c] ...
	I0930 21:12:08.114605   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:08.153409   73375 logs.go:123] Gathering logs for container status ...
	I0930 21:12:08.153447   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:08.193155   73375 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:08.193187   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:08.260774   73375 logs.go:123] Gathering logs for etcd [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c] ...
	I0930 21:12:08.260814   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:08.351488   73375 logs.go:123] Gathering logs for coredns [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7] ...
	I0930 21:12:08.351519   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:08.387971   73375 logs.go:123] Gathering logs for kube-controller-manager [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf] ...
	I0930 21:12:08.388012   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:06.805971   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:08.807886   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:08.248038   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:08.261409   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:08.261485   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:08.305564   73900 cri.go:89] found id: ""
	I0930 21:12:08.305591   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.305601   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:08.305610   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:08.305669   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:08.347816   73900 cri.go:89] found id: ""
	I0930 21:12:08.347844   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.347852   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:08.347858   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:08.347927   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:08.381662   73900 cri.go:89] found id: ""
	I0930 21:12:08.381695   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.381705   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:08.381712   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:08.381829   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:08.427366   73900 cri.go:89] found id: ""
	I0930 21:12:08.427396   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.427406   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:08.427413   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:08.427476   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:08.463419   73900 cri.go:89] found id: ""
	I0930 21:12:08.463443   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.463451   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:08.463457   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:08.463508   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:08.496999   73900 cri.go:89] found id: ""
	I0930 21:12:08.497023   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.497033   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:08.497040   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:08.497098   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:08.530410   73900 cri.go:89] found id: ""
	I0930 21:12:08.530434   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.530442   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:08.530447   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:08.530495   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:08.563191   73900 cri.go:89] found id: ""
	I0930 21:12:08.563224   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.563235   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:08.563244   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:08.563258   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:08.640305   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:08.640341   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:08.676404   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:08.676431   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:08.729676   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:08.729736   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:08.743282   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:08.743310   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:08.811334   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:11.311643   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:11.329153   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:11.329229   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:11.369804   73900 cri.go:89] found id: ""
	I0930 21:12:11.369829   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.369838   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:11.369843   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:11.369896   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:11.408530   73900 cri.go:89] found id: ""
	I0930 21:12:11.408558   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.408569   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:11.408580   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:11.408663   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:11.446123   73900 cri.go:89] found id: ""
	I0930 21:12:11.446147   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.446155   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:11.446160   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:11.446206   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:11.484019   73900 cri.go:89] found id: ""
	I0930 21:12:11.484044   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.484052   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:11.484057   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:11.484118   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:11.521934   73900 cri.go:89] found id: ""
	I0930 21:12:11.521961   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.521971   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:11.521979   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:11.522042   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:11.561253   73900 cri.go:89] found id: ""
	I0930 21:12:11.561283   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.561293   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:11.561299   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:11.561352   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:11.602610   73900 cri.go:89] found id: ""
	I0930 21:12:11.602637   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.602648   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:11.602655   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:11.602760   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:11.637146   73900 cri.go:89] found id: ""
	I0930 21:12:11.637174   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.637185   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:11.637194   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:11.637208   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:11.707627   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:11.707651   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:11.707668   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:11.786047   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:11.786091   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:11.827128   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:11.827157   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:11.885504   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:11.885542   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:09.569584   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:11.570031   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:14.068184   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:10.950921   73375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:10.967834   73375 api_server.go:72] duration metric: took 4m15.348038807s to wait for apiserver process to appear ...
	I0930 21:12:10.967876   73375 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:12:10.967922   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:10.967990   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:11.006632   73375 cri.go:89] found id: "249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:11.006667   73375 cri.go:89] found id: ""
	I0930 21:12:11.006677   73375 logs.go:276] 1 containers: [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122]
	I0930 21:12:11.006738   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.010931   73375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:11.010994   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:11.045855   73375 cri.go:89] found id: "e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:11.045882   73375 cri.go:89] found id: ""
	I0930 21:12:11.045893   73375 logs.go:276] 1 containers: [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c]
	I0930 21:12:11.045953   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.050058   73375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:11.050134   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:11.090954   73375 cri.go:89] found id: "d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:11.090980   73375 cri.go:89] found id: ""
	I0930 21:12:11.090990   73375 logs.go:276] 1 containers: [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7]
	I0930 21:12:11.091041   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.095073   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:11.095150   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:11.137413   73375 cri.go:89] found id: "438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:11.137448   73375 cri.go:89] found id: ""
	I0930 21:12:11.137458   73375 logs.go:276] 1 containers: [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c]
	I0930 21:12:11.137516   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.141559   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:11.141638   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:11.176921   73375 cri.go:89] found id: "a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:11.176952   73375 cri.go:89] found id: ""
	I0930 21:12:11.176961   73375 logs.go:276] 1 containers: [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f]
	I0930 21:12:11.177010   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.181095   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:11.181158   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:11.215117   73375 cri.go:89] found id: "1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:11.215141   73375 cri.go:89] found id: ""
	I0930 21:12:11.215148   73375 logs.go:276] 1 containers: [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf]
	I0930 21:12:11.215195   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.218947   73375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:11.219003   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:11.253901   73375 cri.go:89] found id: ""
	I0930 21:12:11.253937   73375 logs.go:276] 0 containers: []
	W0930 21:12:11.253948   73375 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:11.253955   73375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:11.254010   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:11.293408   73375 cri.go:89] found id: "6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:11.293434   73375 cri.go:89] found id: "298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:11.293440   73375 cri.go:89] found id: ""
	I0930 21:12:11.293448   73375 logs.go:276] 2 containers: [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e]
	I0930 21:12:11.293562   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.297829   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.302572   73375 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:11.302596   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:11.378000   73375 logs.go:123] Gathering logs for coredns [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7] ...
	I0930 21:12:11.378037   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:11.415382   73375 logs.go:123] Gathering logs for kube-proxy [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f] ...
	I0930 21:12:11.415414   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:11.453703   73375 logs.go:123] Gathering logs for kube-controller-manager [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf] ...
	I0930 21:12:11.453729   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:11.517749   73375 logs.go:123] Gathering logs for storage-provisioner [298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e] ...
	I0930 21:12:11.517780   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:11.556543   73375 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:11.556576   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:12.023270   73375 logs.go:123] Gathering logs for container status ...
	I0930 21:12:12.023310   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:12.071138   73375 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:12.071170   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:12.086915   73375 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:12.086944   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:12.200046   73375 logs.go:123] Gathering logs for kube-apiserver [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122] ...
	I0930 21:12:12.200077   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:12.241447   73375 logs.go:123] Gathering logs for etcd [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c] ...
	I0930 21:12:12.241475   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:12.296574   73375 logs.go:123] Gathering logs for kube-scheduler [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c] ...
	I0930 21:12:12.296607   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:12.341982   73375 logs.go:123] Gathering logs for storage-provisioner [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55] ...
	I0930 21:12:12.342009   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:14.877590   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:12:14.882913   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 200:
	ok
	I0930 21:12:14.884088   73375 api_server.go:141] control plane version: v1.31.1
	I0930 21:12:14.884106   73375 api_server.go:131] duration metric: took 3.916223308s to wait for apiserver health ...
	I0930 21:12:14.884113   73375 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:12:14.884134   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:14.884185   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:14.926932   73375 cri.go:89] found id: "249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:14.926952   73375 cri.go:89] found id: ""
	I0930 21:12:14.926960   73375 logs.go:276] 1 containers: [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122]
	I0930 21:12:14.927003   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:14.931044   73375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:14.931106   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:14.967622   73375 cri.go:89] found id: "e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:14.967645   73375 cri.go:89] found id: ""
	I0930 21:12:14.967652   73375 logs.go:276] 1 containers: [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c]
	I0930 21:12:14.967698   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:14.972152   73375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:14.972221   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:11.307501   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:13.307687   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:14.400848   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:14.413794   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:14.413882   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:14.449799   73900 cri.go:89] found id: ""
	I0930 21:12:14.449830   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.449841   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:14.449849   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:14.449902   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:14.486301   73900 cri.go:89] found id: ""
	I0930 21:12:14.486330   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.486357   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:14.486365   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:14.486427   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:14.520451   73900 cri.go:89] found id: ""
	I0930 21:12:14.520479   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.520487   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:14.520497   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:14.520558   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:14.554056   73900 cri.go:89] found id: ""
	I0930 21:12:14.554095   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.554107   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:14.554114   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:14.554178   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:14.594054   73900 cri.go:89] found id: ""
	I0930 21:12:14.594080   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.594088   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:14.594094   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:14.594142   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:14.630225   73900 cri.go:89] found id: ""
	I0930 21:12:14.630255   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.630278   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:14.630284   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:14.630335   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:14.663006   73900 cri.go:89] found id: ""
	I0930 21:12:14.663043   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.663054   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:14.663061   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:14.663119   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:14.699815   73900 cri.go:89] found id: ""
	I0930 21:12:14.699845   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.699858   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:14.699870   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:14.699886   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:14.751465   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:14.751509   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:14.766401   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:14.766432   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:14.832979   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
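On the v1.20.0 node (pid 73900) no apiserver container is running yet, so the bundled kubectl cannot reach localhost:8443 and "describe nodes" fails with the connection-refused error above; the failure is recorded as a warning and log gathering continues. A sketch of running that command and surfacing the combined output on failure (paths copied from the log, helper name hypothetical):

package diag

import (
	"fmt"
	"os/exec"
)

// describeNodes runs the node's bundled kubectl against the local kubeconfig.
// When the apiserver is down the command exits non-zero and the returned
// error carries the captured output, which is what the warning above prints.
func describeNodes(kubectlPath string) (string, error) {
	cmd := exec.Command("/bin/bash", "-c",
		"sudo "+kubectlPath+" describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("failed describe nodes: %w\noutput:\n%s", err, out)
	}
	return string(out), nil
}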
	I0930 21:12:14.833002   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:14.833016   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:14.918011   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:14.918051   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:17.458886   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:17.471833   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:17.471918   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:17.505109   73900 cri.go:89] found id: ""
	I0930 21:12:17.505135   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.505145   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:17.505151   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:17.505213   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:17.538091   73900 cri.go:89] found id: ""
	I0930 21:12:17.538118   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.538129   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:17.538136   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:17.538308   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:17.571668   73900 cri.go:89] found id: ""
	I0930 21:12:17.571694   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.571705   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:17.571712   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:17.571770   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:17.607391   73900 cri.go:89] found id: ""
	I0930 21:12:17.607431   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.607442   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:17.607452   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:17.607519   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:17.643271   73900 cri.go:89] found id: ""
	I0930 21:12:17.643297   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.643305   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:17.643313   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:17.643382   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:17.676653   73900 cri.go:89] found id: ""
	I0930 21:12:17.676687   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.676698   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:17.676708   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:17.676772   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:17.709570   73900 cri.go:89] found id: ""
	I0930 21:12:17.709602   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.709610   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:17.709615   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:17.709671   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:17.747857   73900 cri.go:89] found id: ""
	I0930 21:12:17.747883   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.747891   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:17.747902   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:17.747915   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:15.010874   73375 cri.go:89] found id: "d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:15.010898   73375 cri.go:89] found id: ""
	I0930 21:12:15.010905   73375 logs.go:276] 1 containers: [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7]
	I0930 21:12:15.010947   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.015490   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:15.015582   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:15.051182   73375 cri.go:89] found id: "438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:15.051210   73375 cri.go:89] found id: ""
	I0930 21:12:15.051220   73375 logs.go:276] 1 containers: [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c]
	I0930 21:12:15.051291   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.055057   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:15.055107   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:15.093126   73375 cri.go:89] found id: "a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:15.093150   73375 cri.go:89] found id: ""
	I0930 21:12:15.093159   73375 logs.go:276] 1 containers: [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f]
	I0930 21:12:15.093214   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.097138   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:15.097200   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:15.131676   73375 cri.go:89] found id: "1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:15.131704   73375 cri.go:89] found id: ""
	I0930 21:12:15.131716   73375 logs.go:276] 1 containers: [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf]
	I0930 21:12:15.131773   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.135550   73375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:15.135620   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:15.170579   73375 cri.go:89] found id: ""
	I0930 21:12:15.170604   73375 logs.go:276] 0 containers: []
	W0930 21:12:15.170612   73375 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:15.170618   73375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:15.170672   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:15.205190   73375 cri.go:89] found id: "6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:15.205216   73375 cri.go:89] found id: "298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:15.205222   73375 cri.go:89] found id: ""
	I0930 21:12:15.205231   73375 logs.go:276] 2 containers: [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e]
	I0930 21:12:15.205287   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.209426   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.212981   73375 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:15.213002   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:15.281543   73375 logs.go:123] Gathering logs for kube-proxy [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f] ...
	I0930 21:12:15.281582   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:15.325855   73375 logs.go:123] Gathering logs for container status ...
	I0930 21:12:15.325895   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:15.367382   73375 logs.go:123] Gathering logs for etcd [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c] ...
	I0930 21:12:15.367429   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:15.441395   73375 logs.go:123] Gathering logs for coredns [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7] ...
	I0930 21:12:15.441432   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:15.482487   73375 logs.go:123] Gathering logs for kube-scheduler [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c] ...
	I0930 21:12:15.482518   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:15.520298   73375 logs.go:123] Gathering logs for kube-controller-manager [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf] ...
	I0930 21:12:15.520335   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:15.572596   73375 logs.go:123] Gathering logs for storage-provisioner [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55] ...
	I0930 21:12:15.572626   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:15.618087   73375 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:15.618120   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:15.634125   73375 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:15.634151   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:15.744355   73375 logs.go:123] Gathering logs for kube-apiserver [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122] ...
	I0930 21:12:15.744390   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:15.799312   73375 logs.go:123] Gathering logs for storage-provisioner [298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e] ...
	I0930 21:12:15.799345   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:15.838934   73375 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:15.838969   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:18.759947   73375 system_pods.go:59] 8 kube-system pods found
	I0930 21:12:18.759976   73375 system_pods.go:61] "coredns-7c65d6cfc9-jg8ph" [46ba2867-485a-4b67-af4b-4de2c607d172] Running
	I0930 21:12:18.759981   73375 system_pods.go:61] "etcd-no-preload-997816" [1def50bb-1f1b-4d25-b797-38d5b782a674] Running
	I0930 21:12:18.759985   73375 system_pods.go:61] "kube-apiserver-no-preload-997816" [67313588-adcb-4d3f-ba8a-4e7a1ea5127b] Running
	I0930 21:12:18.759989   73375 system_pods.go:61] "kube-controller-manager-no-preload-997816" [b471888b-d4e6-4768-a246-f234ffcbf1c6] Running
	I0930 21:12:18.759992   73375 system_pods.go:61] "kube-proxy-klcv8" [133bcd7f-667d-4969-b063-d33e2c8eed0f] Running
	I0930 21:12:18.759995   73375 system_pods.go:61] "kube-scheduler-no-preload-997816" [130a7a05-0889-4562-afc6-bee3ba4970a1] Running
	I0930 21:12:18.760001   73375 system_pods.go:61] "metrics-server-6867b74b74-c2wpn" [2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:12:18.760006   73375 system_pods.go:61] "storage-provisioner" [01617edf-b831-48d3-9002-279b64f6389c] Running
	I0930 21:12:18.760016   73375 system_pods.go:74] duration metric: took 3.875896906s to wait for pod list to return data ...
	I0930 21:12:18.760024   73375 default_sa.go:34] waiting for default service account to be created ...
	I0930 21:12:18.762755   73375 default_sa.go:45] found service account: "default"
	I0930 21:12:18.762777   73375 default_sa.go:55] duration metric: took 2.746721ms for default service account to be created ...
	I0930 21:12:18.762787   73375 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 21:12:18.769060   73375 system_pods.go:86] 8 kube-system pods found
	I0930 21:12:18.769086   73375 system_pods.go:89] "coredns-7c65d6cfc9-jg8ph" [46ba2867-485a-4b67-af4b-4de2c607d172] Running
	I0930 21:12:18.769091   73375 system_pods.go:89] "etcd-no-preload-997816" [1def50bb-1f1b-4d25-b797-38d5b782a674] Running
	I0930 21:12:18.769095   73375 system_pods.go:89] "kube-apiserver-no-preload-997816" [67313588-adcb-4d3f-ba8a-4e7a1ea5127b] Running
	I0930 21:12:18.769099   73375 system_pods.go:89] "kube-controller-manager-no-preload-997816" [b471888b-d4e6-4768-a246-f234ffcbf1c6] Running
	I0930 21:12:18.769104   73375 system_pods.go:89] "kube-proxy-klcv8" [133bcd7f-667d-4969-b063-d33e2c8eed0f] Running
	I0930 21:12:18.769107   73375 system_pods.go:89] "kube-scheduler-no-preload-997816" [130a7a05-0889-4562-afc6-bee3ba4970a1] Running
	I0930 21:12:18.769113   73375 system_pods.go:89] "metrics-server-6867b74b74-c2wpn" [2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:12:18.769129   73375 system_pods.go:89] "storage-provisioner" [01617edf-b831-48d3-9002-279b64f6389c] Running
	I0930 21:12:18.769136   73375 system_pods.go:126] duration metric: took 6.344583ms to wait for k8s-apps to be running ...
	I0930 21:12:18.769144   73375 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 21:12:18.769183   73375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:12:18.785488   73375 system_svc.go:56] duration metric: took 16.335135ms WaitForService to wait for kubelet
	I0930 21:12:18.785544   73375 kubeadm.go:582] duration metric: took 4m23.165751441s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:12:18.785572   73375 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:12:18.789308   73375 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:12:18.789340   73375 node_conditions.go:123] node cpu capacity is 2
	I0930 21:12:18.789356   73375 node_conditions.go:105] duration metric: took 3.778609ms to run NodePressure ...
	I0930 21:12:18.789370   73375 start.go:241] waiting for startup goroutines ...
	I0930 21:12:18.789379   73375 start.go:246] waiting for cluster config update ...
	I0930 21:12:18.789394   73375 start.go:255] writing updated cluster config ...
	I0930 21:12:18.789688   73375 ssh_runner.go:195] Run: rm -f paused
	I0930 21:12:18.837384   73375 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 21:12:18.839699   73375 out.go:177] * Done! kubectl is now configured to use "no-preload-997816" cluster and "default" namespace by default
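At this point the no-preload-997816 start finishes its readiness checks: every kube-system pod except the pending metrics-server is Running, the default service account exists, kubelet is active, and node capacity is reported. A compact sketch of the pod check with client-go (treating Pending as tolerated is an assumption made to match the trace; this is not minikube's system_pods helper):

package verify

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// kubeSystemPodsSettled lists kube-system pods and reports whether every pod
// is Running (or still Pending, as metrics-server is in the trace).
func kubeSystemPodsSettled(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning && p.Status.Phase != corev1.PodPending {
			return false, nil
		}
	}
	return true, nil
}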
	I0930 21:12:16.070108   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:18.569568   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:15.308534   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:15.308581   73707 pod_ready.go:82] duration metric: took 4m0.007893146s for pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace to be "Ready" ...
	E0930 21:12:15.308595   73707 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0930 21:12:15.308605   73707 pod_ready.go:39] duration metric: took 4m2.806797001s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
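The metrics-server pod in default-k8s-diff-port-291511 never reports Ready, so after roughly four minutes the extra wait gives up with "context deadline exceeded" and the flow moves on to waiting for the apiserver process. A sketch of such a deadline-bounded Ready wait (illustrative only, not the pod_ready helper itself; the 2-second poll interval is an assumption):

package verify

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the pod's Ready condition until it is True or the
// context deadline passes, at which point the context error is returned.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}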
	I0930 21:12:15.308621   73707 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:12:15.308657   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:15.308722   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:15.353287   73707 cri.go:89] found id: "f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:15.353348   73707 cri.go:89] found id: ""
	I0930 21:12:15.353359   73707 logs.go:276] 1 containers: [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140]
	I0930 21:12:15.353416   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.357602   73707 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:15.357696   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:15.399289   73707 cri.go:89] found id: "7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:15.399325   73707 cri.go:89] found id: ""
	I0930 21:12:15.399332   73707 logs.go:276] 1 containers: [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711]
	I0930 21:12:15.399377   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.404757   73707 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:15.404832   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:15.454396   73707 cri.go:89] found id: "ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:15.454423   73707 cri.go:89] found id: ""
	I0930 21:12:15.454433   73707 logs.go:276] 1 containers: [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49]
	I0930 21:12:15.454493   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.458660   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:15.458743   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:15.493941   73707 cri.go:89] found id: "0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:15.493971   73707 cri.go:89] found id: ""
	I0930 21:12:15.493982   73707 logs.go:276] 1 containers: [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4]
	I0930 21:12:15.494055   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.498541   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:15.498628   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:15.535354   73707 cri.go:89] found id: "5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:15.535385   73707 cri.go:89] found id: ""
	I0930 21:12:15.535395   73707 logs.go:276] 1 containers: [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8]
	I0930 21:12:15.535454   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.540097   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:15.540168   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:15.583969   73707 cri.go:89] found id: "d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:15.583996   73707 cri.go:89] found id: ""
	I0930 21:12:15.584003   73707 logs.go:276] 1 containers: [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8]
	I0930 21:12:15.584051   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.589193   73707 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:15.589260   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:15.629413   73707 cri.go:89] found id: ""
	I0930 21:12:15.629440   73707 logs.go:276] 0 containers: []
	W0930 21:12:15.629449   73707 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:15.629454   73707 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:15.629506   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:15.670129   73707 cri.go:89] found id: "3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:15.670160   73707 cri.go:89] found id: "1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:15.670166   73707 cri.go:89] found id: ""
	I0930 21:12:15.670175   73707 logs.go:276] 2 containers: [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342]
	I0930 21:12:15.670237   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.674227   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.678252   73707 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:15.678276   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:15.758280   73707 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:15.758319   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:15.778191   73707 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:15.778222   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:15.930379   73707 logs.go:123] Gathering logs for coredns [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49] ...
	I0930 21:12:15.930422   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:15.966732   73707 logs.go:123] Gathering logs for storage-provisioner [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd] ...
	I0930 21:12:15.966759   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:16.004304   73707 logs.go:123] Gathering logs for storage-provisioner [1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342] ...
	I0930 21:12:16.004337   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:16.043705   73707 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:16.043733   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:16.600173   73707 logs.go:123] Gathering logs for container status ...
	I0930 21:12:16.600210   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:16.651837   73707 logs.go:123] Gathering logs for kube-apiserver [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140] ...
	I0930 21:12:16.651868   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:16.695122   73707 logs.go:123] Gathering logs for etcd [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711] ...
	I0930 21:12:16.695155   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:16.737622   73707 logs.go:123] Gathering logs for kube-scheduler [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4] ...
	I0930 21:12:16.737671   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:16.772913   73707 logs.go:123] Gathering logs for kube-proxy [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8] ...
	I0930 21:12:16.772944   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:16.808196   73707 logs.go:123] Gathering logs for kube-controller-manager [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8] ...
	I0930 21:12:16.808224   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:19.368150   73707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:19.385771   73707 api_server.go:72] duration metric: took 4m14.101602019s to wait for apiserver process to appear ...
	I0930 21:12:19.385798   73707 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:12:19.385831   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:19.385889   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:19.421325   73707 cri.go:89] found id: "f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:19.421354   73707 cri.go:89] found id: ""
	I0930 21:12:19.421364   73707 logs.go:276] 1 containers: [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140]
	I0930 21:12:19.421426   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.428045   73707 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:19.428107   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:19.466034   73707 cri.go:89] found id: "7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:19.466054   73707 cri.go:89] found id: ""
	I0930 21:12:19.466061   73707 logs.go:276] 1 containers: [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711]
	I0930 21:12:19.466102   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.470155   73707 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:19.470222   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:19.504774   73707 cri.go:89] found id: "ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:19.504799   73707 cri.go:89] found id: ""
	I0930 21:12:19.504806   73707 logs.go:276] 1 containers: [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49]
	I0930 21:12:19.504869   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.509044   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:19.509134   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:19.544204   73707 cri.go:89] found id: "0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:19.544228   73707 cri.go:89] found id: ""
	I0930 21:12:19.544235   73707 logs.go:276] 1 containers: [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4]
	I0930 21:12:19.544293   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.549103   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:19.549194   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:19.591381   73707 cri.go:89] found id: "5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:19.591416   73707 cri.go:89] found id: ""
	I0930 21:12:19.591425   73707 logs.go:276] 1 containers: [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8]
	I0930 21:12:19.591472   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.595522   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:19.595621   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:19.634816   73707 cri.go:89] found id: "d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:19.634841   73707 cri.go:89] found id: ""
	I0930 21:12:19.634850   73707 logs.go:276] 1 containers: [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8]
	I0930 21:12:19.634894   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.639391   73707 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:19.639450   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:19.675056   73707 cri.go:89] found id: ""
	I0930 21:12:19.675084   73707 logs.go:276] 0 containers: []
	W0930 21:12:19.675095   73707 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:19.675102   73707 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:19.675159   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:19.708641   73707 cri.go:89] found id: "3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:19.708666   73707 cri.go:89] found id: "1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:19.708672   73707 cri.go:89] found id: ""
	I0930 21:12:19.708682   73707 logs.go:276] 2 containers: [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342]
	I0930 21:12:19.708738   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.712636   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.716653   73707 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:19.716680   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:19.785159   73707 logs.go:123] Gathering logs for kube-proxy [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8] ...
	I0930 21:12:19.785203   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:19.823462   73707 logs.go:123] Gathering logs for storage-provisioner [1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342] ...
	I0930 21:12:19.823490   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:19.856776   73707 logs.go:123] Gathering logs for coredns [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49] ...
	I0930 21:12:19.856808   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:19.893919   73707 logs.go:123] Gathering logs for kube-scheduler [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4] ...
	I0930 21:12:19.893948   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:19.930932   73707 logs.go:123] Gathering logs for kube-controller-manager [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8] ...
	I0930 21:12:19.930978   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:19.988120   73707 logs.go:123] Gathering logs for storage-provisioner [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd] ...
	I0930 21:12:19.988164   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:20.027576   73707 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:20.027618   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:20.041523   73707 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:20.041557   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:20.157598   73707 logs.go:123] Gathering logs for kube-apiserver [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140] ...
	I0930 21:12:20.157630   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:20.213353   73707 logs.go:123] Gathering logs for etcd [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711] ...
	I0930 21:12:20.213384   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:20.254502   73707 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:20.254533   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:17.824584   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:17.824623   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:17.862613   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:17.862643   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:17.915954   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:17.915992   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:17.929824   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:17.929853   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:17.999697   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:20.500449   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:20.514042   73900 kubeadm.go:597] duration metric: took 4m1.91059878s to restartPrimaryControlPlane
	W0930 21:12:20.514119   73900 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0930 21:12:20.514158   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0930 21:12:21.675376   73900 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.161176988s)
	I0930 21:12:21.675465   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:12:21.689467   73900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:12:21.698504   73900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:12:21.708418   73900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:12:21.708437   73900 kubeadm.go:157] found existing configuration files:
	
	I0930 21:12:21.708483   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:12:21.716960   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:12:21.717019   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:12:21.727610   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:12:21.736212   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:12:21.736275   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:12:21.745512   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:12:21.754299   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:12:21.754366   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:12:21.763724   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:12:21.772521   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:12:21.772595   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
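Before the v1.20.0 control plane is re-initialised, each kubeconfig under /etc/kubernetes is checked for the expected endpoint https://control-plane.minikube.internal:8443 and removed when the check fails; here the files do not exist at all, so every grep exits non-zero and the following rm -f is a no-op. A sketch of that cleanup loop, with the endpoint taken from the log and the commands run locally:

package kubeadm

import "os/exec"

// cleanStaleKubeconfigs removes any kubeconfig under /etc/kubernetes that
// does not reference the expected control-plane endpoint, so kubeadm init
// regenerates it. Missing files simply fall through to an idempotent rm -f.
func cleanStaleKubeconfigs(endpoint string) {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			_ = exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
}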
	I0930 21:12:21.782980   73900 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 21:12:21.850463   73900 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0930 21:12:21.850558   73900 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 21:12:21.991521   73900 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 21:12:21.991706   73900 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 21:12:21.991849   73900 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0930 21:12:22.174876   73900 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 21:12:22.177037   73900 out.go:235]   - Generating certificates and keys ...
	I0930 21:12:22.177155   73900 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 21:12:22.177253   73900 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 21:12:22.177379   73900 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 21:12:22.178789   73900 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 21:12:22.178860   73900 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 21:12:22.178907   73900 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 21:12:22.178961   73900 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 21:12:22.179017   73900 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 21:12:22.179139   73900 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 21:12:22.179247   73900 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 21:12:22.179310   73900 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 21:12:22.179398   73900 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 21:12:22.253256   73900 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 21:12:22.661237   73900 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 21:12:22.947987   73900 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 21:12:23.170995   73900 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 21:12:23.184583   73900 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 21:12:23.185770   73900 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 21:12:23.185813   73900 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 21:12:23.334769   73900 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
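The cluster is then rebuilt with kubeadm init, ignoring the preflight checks for manifests and data directories that already exist and reusing the certificates already on disk; only the kubeconfig files are rewritten before the kubelet is started and the static-pod manifests are laid down. A sketch of composing that invocation, with the flag list copied from the trace and local execution (instead of the SSH runner) as the simplifying assumption:

package kubeadm

import (
	"os/exec"
	"strings"
)

// initCluster runs kubeadm init with the generated config, ignoring the
// preflight errors listed in the trace for pre-existing files and resources.
func initCluster(binDir, configPath string) ([]byte, error) {
	ignored := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"DirAvailable--var-lib-minikube",
		"DirAvailable--var-lib-minikube-etcd",
		"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
		"FileAvailable--etc-kubernetes-manifests-etcd.yaml",
		"Port-10250", "Swap", "NumCPU", "Mem",
	}
	cmd := exec.Command("/bin/bash", "-c",
		"sudo env PATH=\""+binDir+":$PATH\" kubeadm init --config "+configPath+
			" --ignore-preflight-errors="+strings.Join(ignored, ","))
	return cmd.CombinedOutput()
}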
	I0930 21:12:21.069777   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:23.070328   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:20.696951   73707 logs.go:123] Gathering logs for container status ...
	I0930 21:12:20.696989   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:23.236734   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:12:23.241215   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 200:
	ok
	I0930 21:12:23.242629   73707 api_server.go:141] control plane version: v1.31.1
	I0930 21:12:23.242651   73707 api_server.go:131] duration metric: took 3.856847284s to wait for apiserver health ...
	I0930 21:12:23.242660   73707 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:12:23.242680   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:23.242724   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:23.279601   73707 cri.go:89] found id: "f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:23.279626   73707 cri.go:89] found id: ""
	I0930 21:12:23.279633   73707 logs.go:276] 1 containers: [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140]
	I0930 21:12:23.279692   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.283900   73707 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:23.283977   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:23.320360   73707 cri.go:89] found id: "7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:23.320397   73707 cri.go:89] found id: ""
	I0930 21:12:23.320410   73707 logs.go:276] 1 containers: [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711]
	I0930 21:12:23.320472   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.324745   73707 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:23.324825   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:23.368001   73707 cri.go:89] found id: "ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:23.368024   73707 cri.go:89] found id: ""
	I0930 21:12:23.368034   73707 logs.go:276] 1 containers: [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49]
	I0930 21:12:23.368095   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.372001   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:23.372077   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:23.408203   73707 cri.go:89] found id: "0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:23.408234   73707 cri.go:89] found id: ""
	I0930 21:12:23.408242   73707 logs.go:276] 1 containers: [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4]
	I0930 21:12:23.408299   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.412328   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:23.412397   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:23.462142   73707 cri.go:89] found id: "5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:23.462173   73707 cri.go:89] found id: ""
	I0930 21:12:23.462183   73707 logs.go:276] 1 containers: [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8]
	I0930 21:12:23.462247   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.466257   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:23.466336   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:23.509075   73707 cri.go:89] found id: "d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:23.509098   73707 cri.go:89] found id: ""
	I0930 21:12:23.509109   73707 logs.go:276] 1 containers: [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8]
	I0930 21:12:23.509169   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.513362   73707 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:23.513441   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:23.553711   73707 cri.go:89] found id: ""
	I0930 21:12:23.553738   73707 logs.go:276] 0 containers: []
	W0930 21:12:23.553746   73707 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:23.553752   73707 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:23.553797   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:23.599596   73707 cri.go:89] found id: "3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:23.599629   73707 cri.go:89] found id: "1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:23.599635   73707 cri.go:89] found id: ""
	I0930 21:12:23.599644   73707 logs.go:276] 2 containers: [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342]
	I0930 21:12:23.599699   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.603589   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.607827   73707 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:23.607855   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:23.621046   73707 logs.go:123] Gathering logs for etcd [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711] ...
	I0930 21:12:23.621069   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:23.664703   73707 logs.go:123] Gathering logs for storage-provisioner [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd] ...
	I0930 21:12:23.664735   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:23.700614   73707 logs.go:123] Gathering logs for kube-scheduler [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4] ...
	I0930 21:12:23.700644   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:23.738113   73707 logs.go:123] Gathering logs for kube-proxy [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8] ...
	I0930 21:12:23.738143   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:23.775706   73707 logs.go:123] Gathering logs for kube-controller-manager [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8] ...
	I0930 21:12:23.775733   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:23.840419   73707 logs.go:123] Gathering logs for storage-provisioner [1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342] ...
	I0930 21:12:23.840454   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:23.876827   73707 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:23.876860   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:23.943636   73707 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:23.943675   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:24.052729   73707 logs.go:123] Gathering logs for kube-apiserver [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140] ...
	I0930 21:12:24.052763   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:24.106526   73707 logs.go:123] Gathering logs for coredns [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49] ...
	I0930 21:12:24.106556   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:24.146914   73707 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:24.146941   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:24.527753   73707 logs.go:123] Gathering logs for container status ...
	I0930 21:12:24.527804   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:27.077689   73707 system_pods.go:59] 8 kube-system pods found
	I0930 21:12:27.077721   73707 system_pods.go:61] "coredns-7c65d6cfc9-hdjjq" [5672cd58-4d3f-409e-b279-f4027fe09aea] Running
	I0930 21:12:27.077726   73707 system_pods.go:61] "etcd-default-k8s-diff-port-291511" [228b61a2-a110-4029-96e5-950e44f5290f] Running
	I0930 21:12:27.077731   73707 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-291511" [a6991ee1-6c61-49b5-adb5-fb6175386bfe] Running
	I0930 21:12:27.077739   73707 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-291511" [4ba3f2a2-ac38-4483-bbd0-f21d934d97d1] Running
	I0930 21:12:27.077744   73707 system_pods.go:61] "kube-proxy-kwp22" [87e5295f-3aaa-4222-a61a-942354f79f9b] Running
	I0930 21:12:27.077749   73707 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-291511" [b03fc09c-ddee-4593-9be5-8117892932f5] Running
	I0930 21:12:27.077759   73707 system_pods.go:61] "metrics-server-6867b74b74-txb2j" [6f0ec8d2-5528-4f70-807c-42cbabae23bb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:12:27.077766   73707 system_pods.go:61] "storage-provisioner" [32053345-1ff9-45b1-aa70-e746926b305d] Running
	I0930 21:12:27.077774   73707 system_pods.go:74] duration metric: took 3.835107861s to wait for pod list to return data ...
	I0930 21:12:27.077783   73707 default_sa.go:34] waiting for default service account to be created ...
	I0930 21:12:27.082269   73707 default_sa.go:45] found service account: "default"
	I0930 21:12:27.082292   73707 default_sa.go:55] duration metric: took 4.502111ms for default service account to be created ...
	I0930 21:12:27.082299   73707 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 21:12:27.086738   73707 system_pods.go:86] 8 kube-system pods found
	I0930 21:12:27.086764   73707 system_pods.go:89] "coredns-7c65d6cfc9-hdjjq" [5672cd58-4d3f-409e-b279-f4027fe09aea] Running
	I0930 21:12:27.086770   73707 system_pods.go:89] "etcd-default-k8s-diff-port-291511" [228b61a2-a110-4029-96e5-950e44f5290f] Running
	I0930 21:12:27.086775   73707 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-291511" [a6991ee1-6c61-49b5-adb5-fb6175386bfe] Running
	I0930 21:12:27.086781   73707 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-291511" [4ba3f2a2-ac38-4483-bbd0-f21d934d97d1] Running
	I0930 21:12:27.086784   73707 system_pods.go:89] "kube-proxy-kwp22" [87e5295f-3aaa-4222-a61a-942354f79f9b] Running
	I0930 21:12:27.086788   73707 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-291511" [b03fc09c-ddee-4593-9be5-8117892932f5] Running
	I0930 21:12:27.086796   73707 system_pods.go:89] "metrics-server-6867b74b74-txb2j" [6f0ec8d2-5528-4f70-807c-42cbabae23bb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:12:27.086803   73707 system_pods.go:89] "storage-provisioner" [32053345-1ff9-45b1-aa70-e746926b305d] Running
	I0930 21:12:27.086811   73707 system_pods.go:126] duration metric: took 4.506701ms to wait for k8s-apps to be running ...
	I0930 21:12:27.086820   73707 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 21:12:27.086868   73707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:12:27.102286   73707 system_svc.go:56] duration metric: took 15.455734ms WaitForService to wait for kubelet
	I0930 21:12:27.102325   73707 kubeadm.go:582] duration metric: took 4m21.818162682s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:12:27.102346   73707 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:12:27.105332   73707 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:12:27.105354   73707 node_conditions.go:123] node cpu capacity is 2
	I0930 21:12:27.105364   73707 node_conditions.go:105] duration metric: took 3.013328ms to run NodePressure ...
	I0930 21:12:27.105375   73707 start.go:241] waiting for startup goroutines ...
	I0930 21:12:27.105382   73707 start.go:246] waiting for cluster config update ...
	I0930 21:12:27.105393   73707 start.go:255] writing updated cluster config ...
	I0930 21:12:27.105669   73707 ssh_runner.go:195] Run: rm -f paused
	I0930 21:12:27.156804   73707 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 21:12:27.158887   73707 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-291511" cluster and "default" namespace by default
	I0930 21:12:23.336604   73900 out.go:235]   - Booting up control plane ...
	I0930 21:12:23.336747   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 21:12:23.345737   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 21:12:23.346784   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 21:12:23.347559   73900 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 21:12:23.351009   73900 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0930 21:12:25.568654   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:27.569042   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:29.570978   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:32.069065   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:34.069347   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:36.568228   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:38.569351   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:40.569552   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:43.069456   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:45.569254   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:47.569647   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:49.569997   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:52.069284   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:54.069870   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:54.563572   73256 pod_ready.go:82] duration metric: took 4m0.000782781s for pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace to be "Ready" ...
	E0930 21:12:54.563605   73256 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0930 21:12:54.563620   73256 pod_ready.go:39] duration metric: took 4m9.49309261s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:12:54.563643   73256 kubeadm.go:597] duration metric: took 4m18.399318281s to restartPrimaryControlPlane
	W0930 21:12:54.563698   73256 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0930 21:12:54.563721   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0930 21:13:03.351822   73900 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0930 21:13:03.352632   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:03.352833   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:13:08.353230   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:08.353429   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:13:20.634441   73256 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.070691776s)
	I0930 21:13:20.634529   73256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:13:20.650312   73256 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:13:20.661782   73256 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:13:20.671436   73256 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:13:20.671463   73256 kubeadm.go:157] found existing configuration files:
	
	I0930 21:13:20.671504   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:13:20.681860   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:13:20.681934   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:13:20.692529   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:13:20.701507   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:13:20.701585   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:13:20.711211   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:13:20.721856   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:13:20.721928   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:13:20.733194   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:13:20.743887   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:13:20.743955   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:13:20.753546   73256 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 21:13:20.799739   73256 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 21:13:20.799812   73256 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 21:13:20.906464   73256 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 21:13:20.906569   73256 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 21:13:20.906647   73256 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 21:13:20.919451   73256 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 21:13:20.921440   73256 out.go:235]   - Generating certificates and keys ...
	I0930 21:13:20.921550   73256 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 21:13:20.921645   73256 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 21:13:20.921758   73256 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 21:13:20.921845   73256 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 21:13:20.921945   73256 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 21:13:20.922021   73256 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 21:13:20.922117   73256 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 21:13:20.922190   73256 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 21:13:20.922262   73256 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 21:13:20.922336   73256 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 21:13:20.922370   73256 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 21:13:20.922459   73256 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 21:13:21.079731   73256 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 21:13:21.214199   73256 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 21:13:21.344405   73256 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 21:13:21.605006   73256 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 21:13:21.718432   73256 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 21:13:21.718967   73256 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 21:13:21.723434   73256 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 21:13:18.354150   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:18.354468   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:13:21.725304   73256 out.go:235]   - Booting up control plane ...
	I0930 21:13:21.725435   73256 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 21:13:21.725526   73256 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 21:13:21.725637   73256 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 21:13:21.743582   73256 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 21:13:21.749533   73256 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 21:13:21.749605   73256 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 21:13:21.873716   73256 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 21:13:21.873867   73256 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 21:13:22.375977   73256 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.402537ms
	I0930 21:13:22.376098   73256 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 21:13:27.379510   73256 kubeadm.go:310] [api-check] The API server is healthy after 5.001265494s
	I0930 21:13:27.392047   73256 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 21:13:27.409550   73256 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 21:13:27.447693   73256 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 21:13:27.447896   73256 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-256103 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 21:13:27.462338   73256 kubeadm.go:310] [bootstrap-token] Using token: k5ffj3.6sqmy7prwrlhrg7s
	I0930 21:13:27.463967   73256 out.go:235]   - Configuring RBAC rules ...
	I0930 21:13:27.464076   73256 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 21:13:27.472107   73256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 21:13:27.481172   73256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 21:13:27.485288   73256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 21:13:27.492469   73256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 21:13:27.496822   73256 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 21:13:27.789372   73256 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 21:13:28.210679   73256 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 21:13:28.784869   73256 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 21:13:28.785859   73256 kubeadm.go:310] 
	I0930 21:13:28.785954   73256 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 21:13:28.785967   73256 kubeadm.go:310] 
	I0930 21:13:28.786045   73256 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 21:13:28.786077   73256 kubeadm.go:310] 
	I0930 21:13:28.786121   73256 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 21:13:28.786219   73256 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 21:13:28.786286   73256 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 21:13:28.786304   73256 kubeadm.go:310] 
	I0930 21:13:28.786395   73256 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 21:13:28.786405   73256 kubeadm.go:310] 
	I0930 21:13:28.786464   73256 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 21:13:28.786474   73256 kubeadm.go:310] 
	I0930 21:13:28.786546   73256 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 21:13:28.786658   73256 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 21:13:28.786754   73256 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 21:13:28.786763   73256 kubeadm.go:310] 
	I0930 21:13:28.786870   73256 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 21:13:28.786991   73256 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 21:13:28.787000   73256 kubeadm.go:310] 
	I0930 21:13:28.787122   73256 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k5ffj3.6sqmy7prwrlhrg7s \
	I0930 21:13:28.787240   73256 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a \
	I0930 21:13:28.787274   73256 kubeadm.go:310] 	--control-plane 
	I0930 21:13:28.787290   73256 kubeadm.go:310] 
	I0930 21:13:28.787415   73256 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 21:13:28.787425   73256 kubeadm.go:310] 
	I0930 21:13:28.787547   73256 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k5ffj3.6sqmy7prwrlhrg7s \
	I0930 21:13:28.787713   73256 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a 
	I0930 21:13:28.788805   73256 kubeadm.go:310] W0930 21:13:20.776526    2530 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 21:13:28.789058   73256 kubeadm.go:310] W0930 21:13:20.777323    2530 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 21:13:28.789158   73256 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 21:13:28.789178   73256 cni.go:84] Creating CNI manager for ""
	I0930 21:13:28.789187   73256 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:13:28.791049   73256 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 21:13:28.792381   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:13:28.802872   73256 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 21:13:28.819952   73256 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 21:13:28.820054   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:28.820070   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-256103 minikube.k8s.io/updated_at=2024_09_30T21_13_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022 minikube.k8s.io/name=embed-certs-256103 minikube.k8s.io/primary=true
	I0930 21:13:28.859770   73256 ops.go:34] apiserver oom_adj: -16
	I0930 21:13:29.026274   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:29.526992   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:30.026700   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:30.526962   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:31.027165   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:31.526632   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:32.027019   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:32.526522   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:33.026739   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:33.116028   73256 kubeadm.go:1113] duration metric: took 4.296036786s to wait for elevateKubeSystemPrivileges
	I0930 21:13:33.116067   73256 kubeadm.go:394] duration metric: took 4m57.005787187s to StartCluster
	I0930 21:13:33.116088   73256 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:13:33.116175   73256 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:13:33.117855   73256 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:13:33.118142   73256 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 21:13:33.118263   73256 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 21:13:33.118420   73256 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-256103"
	I0930 21:13:33.118373   73256 config.go:182] Loaded profile config "embed-certs-256103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:13:33.118446   73256 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-256103"
	I0930 21:13:33.118442   73256 addons.go:69] Setting default-storageclass=true in profile "embed-certs-256103"
	W0930 21:13:33.118453   73256 addons.go:243] addon storage-provisioner should already be in state true
	I0930 21:13:33.118464   73256 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-256103"
	I0930 21:13:33.118482   73256 host.go:66] Checking if "embed-certs-256103" exists ...
	I0930 21:13:33.118515   73256 addons.go:69] Setting metrics-server=true in profile "embed-certs-256103"
	I0930 21:13:33.118554   73256 addons.go:234] Setting addon metrics-server=true in "embed-certs-256103"
	W0930 21:13:33.118564   73256 addons.go:243] addon metrics-server should already be in state true
	I0930 21:13:33.118594   73256 host.go:66] Checking if "embed-certs-256103" exists ...
	I0930 21:13:33.118807   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.118840   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.118880   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.118926   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.118941   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.118965   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.120042   73256 out.go:177] * Verifying Kubernetes components...
	I0930 21:13:33.121706   73256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:13:33.136554   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36203
	I0930 21:13:33.137096   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.137304   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44465
	I0930 21:13:33.137664   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.137696   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.137789   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.138013   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.138176   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:13:33.138317   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.138336   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.139163   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37389
	I0930 21:13:33.139176   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.139733   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.139903   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.139955   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.140284   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.140311   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.140780   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.141336   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.141375   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.141814   73256 addons.go:234] Setting addon default-storageclass=true in "embed-certs-256103"
	W0930 21:13:33.141832   73256 addons.go:243] addon default-storageclass should already be in state true
	I0930 21:13:33.141857   73256 host.go:66] Checking if "embed-certs-256103" exists ...
	I0930 21:13:33.142143   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.142177   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.161937   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I0930 21:13:33.162096   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33657
	I0930 21:13:33.162249   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42531
	I0930 21:13:33.162491   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.162536   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.162837   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.163017   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.163028   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.163030   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.163045   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.163254   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.163265   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.163362   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.163417   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.163864   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.163899   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.164101   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.164154   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:13:33.164356   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:13:33.166460   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:13:33.166673   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:13:33.168464   73256 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:13:33.168631   73256 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0930 21:13:33.169822   73256 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:13:33.169840   73256 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 21:13:33.169857   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:13:33.169937   73256 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 21:13:33.169947   73256 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 21:13:33.169963   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:13:33.174613   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.174653   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.175236   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:13:33.175265   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.175372   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:13:33.175405   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.175667   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:13:33.176048   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:13:33.176051   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:13:33.176299   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:13:33.176299   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:13:33.176476   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:13:33.176684   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:13:33.176685   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:13:33.180520   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43015
	I0930 21:13:33.180968   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.181564   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.181588   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.181938   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.182136   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:13:33.183803   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:13:33.184001   73256 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 21:13:33.184017   73256 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 21:13:33.184035   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:13:33.186565   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.186964   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:13:33.186996   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.187311   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:13:33.187481   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:13:33.187797   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:13:33.187937   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:13:33.337289   73256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:13:33.360186   73256 node_ready.go:35] waiting up to 6m0s for node "embed-certs-256103" to be "Ready" ...
	I0930 21:13:33.372799   73256 node_ready.go:49] node "embed-certs-256103" has status "Ready":"True"
	I0930 21:13:33.372828   73256 node_ready.go:38] duration metric: took 12.601736ms for node "embed-certs-256103" to be "Ready" ...
	I0930 21:13:33.372837   73256 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:13:33.379694   73256 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:33.462144   73256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:13:33.500072   73256 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 21:13:33.500102   73256 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0930 21:13:33.524789   73256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 21:13:33.548931   73256 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 21:13:33.548955   73256 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 21:13:33.604655   73256 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:13:33.604682   73256 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 21:13:33.648687   73256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:13:34.533493   73256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.008666954s)
	I0930 21:13:34.533555   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.533566   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.533856   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.533870   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.533884   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.533892   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.533900   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.534108   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.534126   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.534149   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.535651   73256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.073475648s)
	I0930 21:13:34.535695   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.535706   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.535926   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.536001   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.536014   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.536030   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.535981   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.537450   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.537470   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.537480   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.564363   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.564394   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.564715   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.564739   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.968266   73256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.319532564s)
	I0930 21:13:34.968330   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.968350   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.968642   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.968665   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.968674   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.968673   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.968681   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.968944   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.968969   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.968973   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.968979   73256 addons.go:475] Verifying addon metrics-server=true in "embed-certs-256103"
	I0930 21:13:34.970656   73256 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0930 21:13:34.971966   73256 addons.go:510] duration metric: took 1.853709741s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0930 21:13:35.387687   73256 pod_ready.go:103] pod "etcd-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:13:37.388374   73256 pod_ready.go:103] pod "etcd-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:13:39.886425   73256 pod_ready.go:103] pod "etcd-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:13:41.885713   73256 pod_ready.go:93] pod "etcd-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.885737   73256 pod_ready.go:82] duration metric: took 8.506004979s for pod "etcd-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.885746   73256 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.891032   73256 pod_ready.go:93] pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.891052   73256 pod_ready.go:82] duration metric: took 5.300379ms for pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.891061   73256 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.895332   73256 pod_ready.go:93] pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.895349   73256 pod_ready.go:82] duration metric: took 4.282199ms for pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.895357   73256 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-glbsg" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.899518   73256 pod_ready.go:93] pod "kube-proxy-glbsg" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.899556   73256 pod_ready.go:82] duration metric: took 4.191815ms for pod "kube-proxy-glbsg" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.899567   73256 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.904184   73256 pod_ready.go:93] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.904203   73256 pod_ready.go:82] duration metric: took 4.628533ms for pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.904209   73256 pod_ready.go:39] duration metric: took 8.531361398s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:13:41.904221   73256 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:13:41.904262   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:13:41.919570   73256 api_server.go:72] duration metric: took 8.801387692s to wait for apiserver process to appear ...
	I0930 21:13:41.919591   73256 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:13:41.919607   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:13:41.923810   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 200:
	ok
	I0930 21:13:41.924633   73256 api_server.go:141] control plane version: v1.31.1
	I0930 21:13:41.924651   73256 api_server.go:131] duration metric: took 5.054857ms to wait for apiserver health ...
	I0930 21:13:41.924659   73256 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:13:42.086431   73256 system_pods.go:59] 9 kube-system pods found
	I0930 21:13:42.086468   73256 system_pods.go:61] "coredns-7c65d6cfc9-gt5tt" [165faaf0-866c-4097-9bdb-ed58fe8d7395] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:13:42.086480   73256 system_pods.go:61] "coredns-7c65d6cfc9-sgsbn" [c97fdb50-c6a0-4ef8-8c01-ea45ed18b72a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:13:42.086488   73256 system_pods.go:61] "etcd-embed-certs-256103" [6aac0706-7dbd-4655-b261-68877299d81a] Running
	I0930 21:13:42.086494   73256 system_pods.go:61] "kube-apiserver-embed-certs-256103" [6c8e3157-ec97-4a85-8947-ca7541c19b1c] Running
	I0930 21:13:42.086500   73256 system_pods.go:61] "kube-controller-manager-embed-certs-256103" [1e3f76d1-d343-4127-aad9-8a5a8e589a43] Running
	I0930 21:13:42.086505   73256 system_pods.go:61] "kube-proxy-glbsg" [f68e378f-ce0f-4603-bd8e-93334f04f7a7] Running
	I0930 21:13:42.086510   73256 system_pods.go:61] "kube-scheduler-embed-certs-256103" [29f55c6f-9603-4cd2-a798-0ff2362b7607] Running
	I0930 21:13:42.086518   73256 system_pods.go:61] "metrics-server-6867b74b74-5mhkh" [470424ec-bb66-4d62-904d-0d4ad93fa5bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:13:42.086525   73256 system_pods.go:61] "storage-provisioner" [a07a5a12-7420-4b57-b79d-982f4bb48232] Running
	I0930 21:13:42.086538   73256 system_pods.go:74] duration metric: took 161.870121ms to wait for pod list to return data ...
	I0930 21:13:42.086559   73256 default_sa.go:34] waiting for default service account to be created ...
	I0930 21:13:42.284282   73256 default_sa.go:45] found service account: "default"
	I0930 21:13:42.284307   73256 default_sa.go:55] duration metric: took 197.73827ms for default service account to be created ...
	I0930 21:13:42.284316   73256 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 21:13:42.486445   73256 system_pods.go:86] 9 kube-system pods found
	I0930 21:13:42.486478   73256 system_pods.go:89] "coredns-7c65d6cfc9-gt5tt" [165faaf0-866c-4097-9bdb-ed58fe8d7395] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:13:42.486489   73256 system_pods.go:89] "coredns-7c65d6cfc9-sgsbn" [c97fdb50-c6a0-4ef8-8c01-ea45ed18b72a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:13:42.486497   73256 system_pods.go:89] "etcd-embed-certs-256103" [6aac0706-7dbd-4655-b261-68877299d81a] Running
	I0930 21:13:42.486503   73256 system_pods.go:89] "kube-apiserver-embed-certs-256103" [6c8e3157-ec97-4a85-8947-ca7541c19b1c] Running
	I0930 21:13:42.486509   73256 system_pods.go:89] "kube-controller-manager-embed-certs-256103" [1e3f76d1-d343-4127-aad9-8a5a8e589a43] Running
	I0930 21:13:42.486513   73256 system_pods.go:89] "kube-proxy-glbsg" [f68e378f-ce0f-4603-bd8e-93334f04f7a7] Running
	I0930 21:13:42.486518   73256 system_pods.go:89] "kube-scheduler-embed-certs-256103" [29f55c6f-9603-4cd2-a798-0ff2362b7607] Running
	I0930 21:13:42.486526   73256 system_pods.go:89] "metrics-server-6867b74b74-5mhkh" [470424ec-bb66-4d62-904d-0d4ad93fa5bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:13:42.486533   73256 system_pods.go:89] "storage-provisioner" [a07a5a12-7420-4b57-b79d-982f4bb48232] Running
	I0930 21:13:42.486542   73256 system_pods.go:126] duration metric: took 202.220435ms to wait for k8s-apps to be running ...
	I0930 21:13:42.486552   73256 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 21:13:42.486601   73256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:13:42.501286   73256 system_svc.go:56] duration metric: took 14.699273ms WaitForService to wait for kubelet
	I0930 21:13:42.501315   73256 kubeadm.go:582] duration metric: took 9.38313627s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:13:42.501332   73256 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:13:42.685282   73256 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:13:42.685314   73256 node_conditions.go:123] node cpu capacity is 2
	I0930 21:13:42.685326   73256 node_conditions.go:105] duration metric: took 183.989963ms to run NodePressure ...
	I0930 21:13:42.685346   73256 start.go:241] waiting for startup goroutines ...
	I0930 21:13:42.685356   73256 start.go:246] waiting for cluster config update ...
	I0930 21:13:42.685371   73256 start.go:255] writing updated cluster config ...
	I0930 21:13:42.685664   73256 ssh_runner.go:195] Run: rm -f paused
	I0930 21:13:42.734778   73256 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 21:13:42.736658   73256 out.go:177] * Done! kubectl is now configured to use "embed-certs-256103" cluster and "default" namespace by default
	I0930 21:13:38.355123   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:38.355330   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:14:18.357098   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:14:18.357396   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:14:18.357419   73900 kubeadm.go:310] 
	I0930 21:14:18.357473   73900 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0930 21:14:18.357541   73900 kubeadm.go:310] 		timed out waiting for the condition
	I0930 21:14:18.357554   73900 kubeadm.go:310] 
	I0930 21:14:18.357609   73900 kubeadm.go:310] 	This error is likely caused by:
	I0930 21:14:18.357659   73900 kubeadm.go:310] 		- The kubelet is not running
	I0930 21:14:18.357801   73900 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0930 21:14:18.357817   73900 kubeadm.go:310] 
	I0930 21:14:18.357964   73900 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0930 21:14:18.357996   73900 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0930 21:14:18.358028   73900 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0930 21:14:18.358039   73900 kubeadm.go:310] 
	I0930 21:14:18.358174   73900 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0930 21:14:18.358318   73900 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0930 21:14:18.358331   73900 kubeadm.go:310] 
	I0930 21:14:18.358510   73900 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0930 21:14:18.358646   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0930 21:14:18.358764   73900 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0930 21:14:18.358866   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0930 21:14:18.358882   73900 kubeadm.go:310] 
	I0930 21:14:18.359454   73900 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 21:14:18.359595   73900 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0930 21:14:18.359681   73900 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0930 21:14:18.359797   73900 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0930 21:14:18.359841   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0930 21:14:18.820244   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:14:18.834938   73900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:14:18.844779   73900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:14:18.844803   73900 kubeadm.go:157] found existing configuration files:
	
	I0930 21:14:18.844856   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:14:18.853738   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:14:18.853811   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:14:18.863366   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:14:18.872108   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:14:18.872164   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:14:18.881818   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:14:18.890916   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:14:18.890969   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:14:18.900075   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:14:18.908449   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:14:18.908520   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:14:18.917163   73900 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 21:14:18.983181   73900 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0930 21:14:18.983233   73900 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 21:14:19.121356   73900 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 21:14:19.121545   73900 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 21:14:19.121674   73900 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0930 21:14:19.306639   73900 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 21:14:19.309593   73900 out.go:235]   - Generating certificates and keys ...
	I0930 21:14:19.309683   73900 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 21:14:19.309748   73900 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 21:14:19.309870   73900 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 21:14:19.309957   73900 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 21:14:19.310040   73900 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 21:14:19.310119   73900 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 21:14:19.310209   73900 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 21:14:19.310292   73900 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 21:14:19.310404   73900 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 21:14:19.310511   73900 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 21:14:19.310567   73900 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 21:14:19.310654   73900 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 21:14:19.453872   73900 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 21:14:19.621232   73900 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 21:14:19.797694   73900 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 21:14:19.886897   73900 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 21:14:19.909016   73900 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 21:14:19.910536   73900 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 21:14:19.910617   73900 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 21:14:20.052878   73900 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 21:14:20.054739   73900 out.go:235]   - Booting up control plane ...
	I0930 21:14:20.054881   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 21:14:20.068419   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 21:14:20.068512   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 21:14:20.068697   73900 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 21:14:20.072015   73900 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0930 21:15:00.073988   73900 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0930 21:15:00.074795   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:00.075068   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:15:05.075810   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:05.076061   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:15:15.076695   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:15.076928   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:15:35.077652   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:35.077862   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:16:15.076816   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:16:15.077063   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:16:15.077082   73900 kubeadm.go:310] 
	I0930 21:16:15.077136   73900 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0930 21:16:15.077188   73900 kubeadm.go:310] 		timed out waiting for the condition
	I0930 21:16:15.077198   73900 kubeadm.go:310] 
	I0930 21:16:15.077246   73900 kubeadm.go:310] 	This error is likely caused by:
	I0930 21:16:15.077298   73900 kubeadm.go:310] 		- The kubelet is not running
	I0930 21:16:15.077425   73900 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0930 21:16:15.077442   73900 kubeadm.go:310] 
	I0930 21:16:15.077605   73900 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0930 21:16:15.077651   73900 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0930 21:16:15.077710   73900 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0930 21:16:15.077718   73900 kubeadm.go:310] 
	I0930 21:16:15.077851   73900 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0930 21:16:15.077997   73900 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0930 21:16:15.078013   73900 kubeadm.go:310] 
	I0930 21:16:15.078143   73900 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0930 21:16:15.078229   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0930 21:16:15.078309   73900 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0930 21:16:15.078419   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0930 21:16:15.078431   73900 kubeadm.go:310] 
	I0930 21:16:15.079235   73900 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 21:16:15.079365   73900 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0930 21:16:15.079442   73900 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0930 21:16:15.079572   73900 kubeadm.go:394] duration metric: took 7m56.529269567s to StartCluster
	I0930 21:16:15.079639   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:16:15.079713   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:16:15.122057   73900 cri.go:89] found id: ""
	I0930 21:16:15.122086   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.122098   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:16:15.122105   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:16:15.122166   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:16:15.156244   73900 cri.go:89] found id: ""
	I0930 21:16:15.156278   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.156289   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:16:15.156297   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:16:15.156357   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:16:15.188952   73900 cri.go:89] found id: ""
	I0930 21:16:15.188977   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.188989   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:16:15.188996   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:16:15.189058   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:16:15.219400   73900 cri.go:89] found id: ""
	I0930 21:16:15.219427   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.219435   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:16:15.219441   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:16:15.219501   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:16:15.252049   73900 cri.go:89] found id: ""
	I0930 21:16:15.252078   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.252086   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:16:15.252093   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:16:15.252150   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:16:15.286560   73900 cri.go:89] found id: ""
	I0930 21:16:15.286594   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.286605   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:16:15.286614   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:16:15.286679   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:16:15.319140   73900 cri.go:89] found id: ""
	I0930 21:16:15.319178   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.319187   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:16:15.319192   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:16:15.319245   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:16:15.351299   73900 cri.go:89] found id: ""
	I0930 21:16:15.351322   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.351330   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:16:15.351339   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:16:15.351350   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:16:15.402837   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:16:15.402882   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:16:15.417111   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:16:15.417140   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:16:15.492593   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:16:15.492614   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:16:15.492627   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:16:15.621646   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:16:15.621681   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0930 21:16:15.660480   73900 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0930 21:16:15.660528   73900 out.go:270] * 
	W0930 21:16:15.660580   73900 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0930 21:16:15.660595   73900 out.go:270] * 
	W0930 21:16:15.661387   73900 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 21:16:15.665510   73900 out.go:201] 
	W0930 21:16:15.667332   73900 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0930 21:16:15.667373   73900 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0930 21:16:15.667390   73900 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0930 21:16:15.668812   73900 out.go:201] 
	
	
	==> CRI-O <==
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.551875021Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727730977551851054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06a07b0c-4214-4f00-b65b-75cde5f80c81 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.552342475Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=52855196-1ba7-4cb3-bfaf-4882f163fe6f name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.552406118Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=52855196-1ba7-4cb3-bfaf-4882f163fe6f name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.552443174Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=52855196-1ba7-4cb3-bfaf-4882f163fe6f name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.583739284Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e193567-8ef9-4c2f-a38a-2a30feaef99f name=/runtime.v1.RuntimeService/Version
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.583809939Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e193567-8ef9-4c2f-a38a-2a30feaef99f name=/runtime.v1.RuntimeService/Version
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.584698509Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1b9eee33-6711-4f6c-9a9b-6d7cfe2e42e0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.585092951Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727730977585069642,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b9eee33-6711-4f6c-9a9b-6d7cfe2e42e0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.585516447Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a7f7224-2768-494b-beda-b4c2f23aa11e name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.585565111Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a7f7224-2768-494b-beda-b4c2f23aa11e name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.585594540Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3a7f7224-2768-494b-beda-b4c2f23aa11e name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.616836892Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c29c25bc-81b0-46ba-8dbe-2f8f5a9b3029 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.616908858Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c29c25bc-81b0-46ba-8dbe-2f8f5a9b3029 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.617707956Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b9d91252-ab1a-4da7-9c4c-4ff717fb988f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.618143816Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727730977618122703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9d91252-ab1a-4da7-9c4c-4ff717fb988f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.618818764Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be4e7795-109c-42a8-93ad-12924dce7348 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.618878689Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be4e7795-109c-42a8-93ad-12924dce7348 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.618940937Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=be4e7795-109c-42a8-93ad-12924dce7348 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.650224425Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a0d1ebf9-1de2-4576-8722-7751b829c363 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.650299778Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a0d1ebf9-1de2-4576-8722-7751b829c363 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.651755227Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=58dffc0a-ba56-4d82-8442-c8c728cd6d07 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.652153128Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727730977652129650,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=58dffc0a-ba56-4d82-8442-c8c728cd6d07 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.652713079Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6f3e4fde-5610-4384-b8ba-5abb62eb8ad7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.652761538Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6f3e4fde-5610-4384-b8ba-5abb62eb8ad7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:16:17 old-k8s-version-621406 crio[636]: time="2024-09-30 21:16:17.652790738Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6f3e4fde-5610-4384-b8ba-5abb62eb8ad7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep30 21:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055405] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042801] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.194174] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Sep30 21:08] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.574996] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.760000] systemd-fstab-generator[563]: Ignoring "noauto" option for root device
	[  +0.059497] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069559] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.192698] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.144274] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.303445] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +6.753345] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.065939] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.694211] systemd-fstab-generator[1011]: Ignoring "noauto" option for root device
	[ +12.297674] kauditd_printk_skb: 46 callbacks suppressed
	[Sep30 21:12] systemd-fstab-generator[5042]: Ignoring "noauto" option for root device
	[Sep30 21:14] systemd-fstab-generator[5322]: Ignoring "noauto" option for root device
	[  +0.065961] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:16:17 up 8 min,  0 users,  load average: 0.05, 0.08, 0.05
	Linux old-k8s-version-621406 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 30 21:16:14 old-k8s-version-621406 kubelet[5502]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Sep 30 21:16:14 old-k8s-version-621406 kubelet[5502]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc0009b3740, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000994b40, 0x24, 0x0, ...)
	Sep 30 21:16:14 old-k8s-version-621406 kubelet[5502]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Sep 30 21:16:14 old-k8s-version-621406 kubelet[5502]: net.(*Dialer).DialContext(0xc00023db00, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000994b40, 0x24, 0x0, 0x0, 0x0, ...)
	Sep 30 21:16:14 old-k8s-version-621406 kubelet[5502]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Sep 30 21:16:14 old-k8s-version-621406 kubelet[5502]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0008e6080, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000994b40, 0x24, 0x60, 0x7fa8572dffe8, 0x118, ...)
	Sep 30 21:16:14 old-k8s-version-621406 kubelet[5502]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Sep 30 21:16:14 old-k8s-version-621406 kubelet[5502]: net/http.(*Transport).dial(0xc00080d680, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000994b40, 0x24, 0x0, 0x14, 0x5, ...)
	Sep 30 21:16:14 old-k8s-version-621406 kubelet[5502]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Sep 30 21:16:14 old-k8s-version-621406 kubelet[5502]: net/http.(*Transport).dialConn(0xc00080d680, 0x4f7fe00, 0xc000052030, 0x0, 0xc0009ca300, 0x5, 0xc000994b40, 0x24, 0x0, 0xc00098d680, ...)
	Sep 30 21:16:14 old-k8s-version-621406 kubelet[5502]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Sep 30 21:16:14 old-k8s-version-621406 kubelet[5502]: net/http.(*Transport).dialConnFor(0xc00080d680, 0xc0009926e0)
	Sep 30 21:16:14 old-k8s-version-621406 kubelet[5502]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Sep 30 21:16:14 old-k8s-version-621406 kubelet[5502]: created by net/http.(*Transport).queueForDial
	Sep 30 21:16:14 old-k8s-version-621406 kubelet[5502]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Sep 30 21:16:14 old-k8s-version-621406 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 30 21:16:14 old-k8s-version-621406 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 30 21:16:15 old-k8s-version-621406 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Sep 30 21:16:15 old-k8s-version-621406 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 30 21:16:15 old-k8s-version-621406 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 30 21:16:15 old-k8s-version-621406 kubelet[5562]: I0930 21:16:15.597931    5562 server.go:416] Version: v1.20.0
	Sep 30 21:16:15 old-k8s-version-621406 kubelet[5562]: I0930 21:16:15.598326    5562 server.go:837] Client rotation is on, will bootstrap in background
	Sep 30 21:16:15 old-k8s-version-621406 kubelet[5562]: I0930 21:16:15.600466    5562 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 30 21:16:15 old-k8s-version-621406 kubelet[5562]: W0930 21:16:15.601379    5562 manager.go:159] Cannot detect current cgroup on cgroup v2
	Sep 30 21:16:15 old-k8s-version-621406 kubelet[5562]: I0930 21:16:15.601483    5562 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-621406 -n old-k8s-version-621406
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-621406 -n old-k8s-version-621406: exit status 2 (216.972726ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-621406" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (756.51s)

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.21s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-997816 -n no-preload-997816
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-30 21:21:19.380848152 +0000 UTC m=+6200.085604283
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-997816 -n no-preload-997816
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-997816 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-997816 logs -n 25: (2.128688487s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-207733 sudo                                 | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo                                 | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo                                 | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo find                            | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo crio                            | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-207733                                      | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-741890 | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | disable-driver-mounts-741890                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 21:00 UTC |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-256103            | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-997816             | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-997816                                   | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-291511  | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-621406        | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-256103                 | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC | 30 Sep 24 21:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-997816                  | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-997816                                   | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC | 30 Sep 24 21:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-291511       | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:12 UTC |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-621406                              | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:03 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-621406             | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-621406                              | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 21:03:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 21:03:42.750102   73900 out.go:345] Setting OutFile to fd 1 ...
	I0930 21:03:42.750367   73900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:03:42.750377   73900 out.go:358] Setting ErrFile to fd 2...
	I0930 21:03:42.750383   73900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:03:42.750578   73900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 21:03:42.751109   73900 out.go:352] Setting JSON to false
	I0930 21:03:42.752040   73900 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6366,"bootTime":1727723857,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 21:03:42.752140   73900 start.go:139] virtualization: kvm guest
	I0930 21:03:42.754146   73900 out.go:177] * [old-k8s-version-621406] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 21:03:42.755446   73900 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 21:03:42.755456   73900 notify.go:220] Checking for updates...
	I0930 21:03:42.758261   73900 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 21:03:42.759566   73900 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:03:42.760907   73900 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 21:03:42.762342   73900 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 21:03:42.763561   73900 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 21:03:42.765356   73900 config.go:182] Loaded profile config "old-k8s-version-621406": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0930 21:03:42.765773   73900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:03:42.765822   73900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:03:42.780605   73900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45071
	I0930 21:03:42.781022   73900 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:03:42.781550   73900 main.go:141] libmachine: Using API Version  1
	I0930 21:03:42.781583   73900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:03:42.781912   73900 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:03:42.782160   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:03:42.784603   73900 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0930 21:03:42.785760   73900 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 21:03:42.786115   73900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:03:42.786156   73900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:03:42.800937   73900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37359
	I0930 21:03:42.801409   73900 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:03:42.801882   73900 main.go:141] libmachine: Using API Version  1
	I0930 21:03:42.801905   73900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:03:42.802216   73900 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:03:42.802397   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:03:42.838423   73900 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 21:03:42.839832   73900 start.go:297] selected driver: kvm2
	I0930 21:03:42.839847   73900 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:03:42.839953   73900 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 21:03:42.840605   73900 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 21:03:42.840667   73900 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 21:03:42.856119   73900 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 21:03:42.856550   73900 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:03:42.856580   73900 cni.go:84] Creating CNI manager for ""
	I0930 21:03:42.856630   73900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:03:42.856665   73900 start.go:340] cluster config:
	{Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:03:42.856778   73900 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 21:03:42.858732   73900 out.go:177] * Starting "old-k8s-version-621406" primary control-plane node in "old-k8s-version-621406" cluster
	I0930 21:03:42.859876   73900 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 21:03:42.859912   73900 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0930 21:03:42.859929   73900 cache.go:56] Caching tarball of preloaded images
	I0930 21:03:42.860020   73900 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 21:03:42.860031   73900 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0930 21:03:42.860153   73900 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/config.json ...
	I0930 21:03:42.860340   73900 start.go:360] acquireMachinesLock for old-k8s-version-621406: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 21:03:44.619810   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:03:47.691872   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:03:53.771838   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:03:56.843848   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:02.923822   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:05.995871   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:12.075814   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:15.147854   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:21.227790   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:24.299842   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:30.379801   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:33.451787   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:39.531808   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:42.603838   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:48.683904   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:51.755939   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:57.835834   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:00.907789   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:06.987875   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:10.059892   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:16.139832   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:19.211908   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:25.291812   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:28.363915   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:34.443827   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:37.515928   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:43.595824   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:46.667934   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:52.747851   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:55.819883   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:01.899789   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:04.971946   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:11.051812   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:14.123833   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:20.203805   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:23.275875   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:29.355806   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:32.427931   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:38.507837   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:41.579909   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:47.659786   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:50.731827   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:56.811833   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:59.883878   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:07:05.963833   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:07:09.035828   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:07:12.040058   73375 start.go:364] duration metric: took 4m26.951572628s to acquireMachinesLock for "no-preload-997816"
	I0930 21:07:12.040115   73375 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:07:12.040126   73375 fix.go:54] fixHost starting: 
	I0930 21:07:12.040448   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:12.040485   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:12.057054   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37473
	I0930 21:07:12.057624   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:12.058143   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:12.058173   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:12.058523   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:12.058739   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:12.058873   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:12.060479   73375 fix.go:112] recreateIfNeeded on no-preload-997816: state=Stopped err=<nil>
	I0930 21:07:12.060499   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	W0930 21:07:12.060640   73375 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:07:12.062653   73375 out.go:177] * Restarting existing kvm2 VM for "no-preload-997816" ...
	I0930 21:07:12.037683   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:07:12.037732   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:07:12.038031   73256 buildroot.go:166] provisioning hostname "embed-certs-256103"
	I0930 21:07:12.038055   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:07:12.038234   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:07:12.039910   73256 machine.go:96] duration metric: took 4m37.42208497s to provisionDockerMachine
	I0930 21:07:12.039954   73256 fix.go:56] duration metric: took 4m37.444804798s for fixHost
	I0930 21:07:12.039962   73256 start.go:83] releasing machines lock for "embed-certs-256103", held for 4m37.444833727s
	W0930 21:07:12.039989   73256 start.go:714] error starting host: provision: host is not running
	W0930 21:07:12.040104   73256 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0930 21:07:12.040116   73256 start.go:729] Will try again in 5 seconds ...
	I0930 21:07:12.063941   73375 main.go:141] libmachine: (no-preload-997816) Calling .Start
	I0930 21:07:12.064167   73375 main.go:141] libmachine: (no-preload-997816) Ensuring networks are active...
	I0930 21:07:12.065080   73375 main.go:141] libmachine: (no-preload-997816) Ensuring network default is active
	I0930 21:07:12.065489   73375 main.go:141] libmachine: (no-preload-997816) Ensuring network mk-no-preload-997816 is active
	I0930 21:07:12.065993   73375 main.go:141] libmachine: (no-preload-997816) Getting domain xml...
	I0930 21:07:12.066923   73375 main.go:141] libmachine: (no-preload-997816) Creating domain...
	I0930 21:07:13.297091   73375 main.go:141] libmachine: (no-preload-997816) Waiting to get IP...
	I0930 21:07:13.297965   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:13.298386   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:13.298473   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:13.298370   74631 retry.go:31] will retry after 312.032565ms: waiting for machine to come up
	I0930 21:07:13.612088   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:13.612583   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:13.612607   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:13.612519   74631 retry.go:31] will retry after 292.985742ms: waiting for machine to come up
	I0930 21:07:13.907355   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:13.907794   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:13.907817   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:13.907754   74631 retry.go:31] will retry after 451.618632ms: waiting for machine to come up
	I0930 21:07:14.361536   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:14.361990   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:14.362054   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:14.361947   74631 retry.go:31] will retry after 599.246635ms: waiting for machine to come up
	I0930 21:07:14.962861   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:14.963341   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:14.963369   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:14.963294   74631 retry.go:31] will retry after 748.726096ms: waiting for machine to come up
	I0930 21:07:17.040758   73256 start.go:360] acquireMachinesLock for embed-certs-256103: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 21:07:15.713258   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:15.713576   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:15.713601   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:15.713525   74631 retry.go:31] will retry after 907.199669ms: waiting for machine to come up
	I0930 21:07:16.622784   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:16.623275   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:16.623307   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:16.623211   74631 retry.go:31] will retry after 744.978665ms: waiting for machine to come up
	I0930 21:07:17.369735   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:17.370206   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:17.370231   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:17.370154   74631 retry.go:31] will retry after 1.238609703s: waiting for machine to come up
	I0930 21:07:18.610618   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:18.610967   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:18.610989   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:18.610928   74631 retry.go:31] will retry after 1.354775356s: waiting for machine to come up
	I0930 21:07:19.967473   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:19.967892   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:19.967916   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:19.967851   74631 retry.go:31] will retry after 2.26449082s: waiting for machine to come up
	I0930 21:07:22.234066   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:22.234514   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:22.234536   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:22.234474   74631 retry.go:31] will retry after 2.728158374s: waiting for machine to come up
	I0930 21:07:24.966375   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:24.966759   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:24.966782   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:24.966724   74631 retry.go:31] will retry after 3.119117729s: waiting for machine to come up
	I0930 21:07:29.336238   73707 start.go:364] duration metric: took 3m58.92874513s to acquireMachinesLock for "default-k8s-diff-port-291511"
	I0930 21:07:29.336327   73707 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:07:29.336347   73707 fix.go:54] fixHost starting: 
	I0930 21:07:29.336726   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:29.336779   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:29.354404   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45095
	I0930 21:07:29.354848   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:29.355331   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:07:29.355352   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:29.355882   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:29.356081   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:29.356249   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:07:29.358109   73707 fix.go:112] recreateIfNeeded on default-k8s-diff-port-291511: state=Stopped err=<nil>
	I0930 21:07:29.358155   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	W0930 21:07:29.358336   73707 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:07:29.361072   73707 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-291511" ...
	I0930 21:07:28.087153   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.087604   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has current primary IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.087636   73375 main.go:141] libmachine: (no-preload-997816) Found IP for machine: 192.168.61.93
	I0930 21:07:28.087644   73375 main.go:141] libmachine: (no-preload-997816) Reserving static IP address...
	I0930 21:07:28.088047   73375 main.go:141] libmachine: (no-preload-997816) Reserved static IP address: 192.168.61.93
	I0930 21:07:28.088068   73375 main.go:141] libmachine: (no-preload-997816) Waiting for SSH to be available...
	I0930 21:07:28.088090   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "no-preload-997816", mac: "52:54:00:cb:3d:73", ip: "192.168.61.93"} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.088158   73375 main.go:141] libmachine: (no-preload-997816) DBG | skip adding static IP to network mk-no-preload-997816 - found existing host DHCP lease matching {name: "no-preload-997816", mac: "52:54:00:cb:3d:73", ip: "192.168.61.93"}
	I0930 21:07:28.088181   73375 main.go:141] libmachine: (no-preload-997816) DBG | Getting to WaitForSSH function...
	I0930 21:07:28.090195   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.090522   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.090547   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.090722   73375 main.go:141] libmachine: (no-preload-997816) DBG | Using SSH client type: external
	I0930 21:07:28.090739   73375 main.go:141] libmachine: (no-preload-997816) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa (-rw-------)
	I0930 21:07:28.090767   73375 main.go:141] libmachine: (no-preload-997816) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.93 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:07:28.090787   73375 main.go:141] libmachine: (no-preload-997816) DBG | About to run SSH command:
	I0930 21:07:28.090801   73375 main.go:141] libmachine: (no-preload-997816) DBG | exit 0
	I0930 21:07:28.211669   73375 main.go:141] libmachine: (no-preload-997816) DBG | SSH cmd err, output: <nil>: 
	I0930 21:07:28.212073   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetConfigRaw
	I0930 21:07:28.212714   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetIP
	I0930 21:07:28.215442   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.215934   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.215951   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.216186   73375 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/config.json ...
	I0930 21:07:28.216370   73375 machine.go:93] provisionDockerMachine start ...
	I0930 21:07:28.216386   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:28.216575   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.218963   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.219423   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.219455   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.219604   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.219770   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.219948   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.220057   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.220252   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:28.220441   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:28.220452   73375 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:07:28.315814   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:07:28.315853   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetMachineName
	I0930 21:07:28.316131   73375 buildroot.go:166] provisioning hostname "no-preload-997816"
	I0930 21:07:28.316161   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetMachineName
	I0930 21:07:28.316372   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.319253   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.319506   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.319548   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.319711   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.319903   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.320057   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.320182   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.320383   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:28.320592   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:28.320606   73375 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-997816 && echo "no-preload-997816" | sudo tee /etc/hostname
	I0930 21:07:28.433652   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-997816
	
	I0930 21:07:28.433686   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.436989   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.437350   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.437389   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.437611   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.437784   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.437957   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.438075   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.438267   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:28.438487   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:28.438512   73375 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-997816' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-997816/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-997816' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:07:28.544056   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:07:28.544088   73375 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:07:28.544112   73375 buildroot.go:174] setting up certificates
	I0930 21:07:28.544122   73375 provision.go:84] configureAuth start
	I0930 21:07:28.544135   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetMachineName
	I0930 21:07:28.544418   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetIP
	I0930 21:07:28.546960   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.547363   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.547384   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.547570   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.549918   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.550325   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.550353   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.550535   73375 provision.go:143] copyHostCerts
	I0930 21:07:28.550612   73375 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:07:28.550627   73375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:07:28.550711   73375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:07:28.550804   73375 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:07:28.550812   73375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:07:28.550837   73375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:07:28.550893   73375 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:07:28.550900   73375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:07:28.550920   73375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:07:28.550967   73375 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.no-preload-997816 san=[127.0.0.1 192.168.61.93 localhost minikube no-preload-997816]
	I0930 21:07:28.744306   73375 provision.go:177] copyRemoteCerts
	I0930 21:07:28.744364   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:07:28.744386   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.747024   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.747368   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.747401   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.747615   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.747813   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.747973   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.748133   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:28.825616   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0930 21:07:28.849513   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 21:07:28.872666   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:07:28.895673   73375 provision.go:87] duration metric: took 351.536833ms to configureAuth
	I0930 21:07:28.895708   73375 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:07:28.895896   73375 config.go:182] Loaded profile config "no-preload-997816": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:07:28.895975   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.898667   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.899067   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.899098   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.899324   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.899567   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.899703   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.899829   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.899946   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:28.900120   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:28.900134   73375 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:07:29.113855   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:07:29.113877   73375 machine.go:96] duration metric: took 897.495238ms to provisionDockerMachine
	I0930 21:07:29.113887   73375 start.go:293] postStartSetup for "no-preload-997816" (driver="kvm2")
	I0930 21:07:29.113897   73375 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:07:29.113921   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.114220   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:07:29.114254   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:29.117274   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.117619   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.117663   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.117816   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:29.118010   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.118159   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:29.118289   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:29.197962   73375 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:07:29.202135   73375 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:07:29.202166   73375 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:07:29.202237   73375 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:07:29.202321   73375 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:07:29.202406   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:07:29.211693   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:07:29.234503   73375 start.go:296] duration metric: took 120.601484ms for postStartSetup
	I0930 21:07:29.234582   73375 fix.go:56] duration metric: took 17.194433455s for fixHost
	I0930 21:07:29.234610   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:29.237134   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.237544   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.237574   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.237728   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:29.237912   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.238085   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.238199   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:29.238348   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:29.238506   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:29.238515   73375 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:07:29.336092   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730449.310327649
	
	I0930 21:07:29.336114   73375 fix.go:216] guest clock: 1727730449.310327649
	I0930 21:07:29.336123   73375 fix.go:229] Guest: 2024-09-30 21:07:29.310327649 +0000 UTC Remote: 2024-09-30 21:07:29.234588814 +0000 UTC m=+284.288095935 (delta=75.738835ms)
	I0930 21:07:29.336147   73375 fix.go:200] guest clock delta is within tolerance: 75.738835ms
	I0930 21:07:29.336153   73375 start.go:83] releasing machines lock for "no-preload-997816", held for 17.296055752s
	I0930 21:07:29.336194   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.336478   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetIP
	I0930 21:07:29.339488   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.339864   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.339909   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.340070   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.340525   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.340697   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.340800   73375 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:07:29.340836   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:29.340930   73375 ssh_runner.go:195] Run: cat /version.json
	I0930 21:07:29.340955   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:29.343579   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.343941   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.343976   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.344010   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.344228   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:29.344405   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.344441   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.344471   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.344543   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:29.344616   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:29.344689   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:29.344784   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.344966   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:29.345105   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:29.420949   73375 ssh_runner.go:195] Run: systemctl --version
	I0930 21:07:29.465854   73375 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:07:29.616360   73375 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:07:29.624522   73375 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:07:29.624604   73375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:07:29.642176   73375 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 21:07:29.642202   73375 start.go:495] detecting cgroup driver to use...
	I0930 21:07:29.642279   73375 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:07:29.657878   73375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:07:29.674555   73375 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:07:29.674614   73375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:07:29.690953   73375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:07:29.705425   73375 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:07:29.814602   73375 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:07:29.957009   73375 docker.go:233] disabling docker service ...
	I0930 21:07:29.957091   73375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:07:29.971419   73375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:07:29.362775   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Start
	I0930 21:07:29.363023   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Ensuring networks are active...
	I0930 21:07:29.364071   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Ensuring network default is active
	I0930 21:07:29.364456   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Ensuring network mk-default-k8s-diff-port-291511 is active
	I0930 21:07:29.364940   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Getting domain xml...
	I0930 21:07:29.365759   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Creating domain...
	I0930 21:07:29.987509   73375 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:07:30.112952   73375 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:07:30.239945   73375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:07:30.253298   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:07:30.271687   73375 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 21:07:30.271768   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.282267   73375 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:07:30.282339   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.292776   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.303893   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.315002   73375 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:07:30.326410   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.336951   73375 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.356016   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.367847   73375 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:07:30.378650   73375 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:07:30.378703   73375 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:07:30.391768   73375 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 21:07:30.401887   73375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:30.534771   73375 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 21:07:30.622017   73375 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:07:30.622087   73375 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:07:30.627221   73375 start.go:563] Will wait 60s for crictl version
	I0930 21:07:30.627294   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:30.633071   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:07:30.675743   73375 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 21:07:30.675830   73375 ssh_runner.go:195] Run: crio --version
	I0930 21:07:30.703470   73375 ssh_runner.go:195] Run: crio --version
	I0930 21:07:30.732424   73375 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 21:07:30.733714   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetIP
	I0930 21:07:30.737016   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:30.737380   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:30.737421   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:30.737690   73375 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0930 21:07:30.741714   73375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:07:30.754767   73375 kubeadm.go:883] updating cluster {Name:no-preload-997816 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-997816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:07:30.754892   73375 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 21:07:30.754941   73375 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:07:30.794489   73375 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 21:07:30.794516   73375 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0930 21:07:30.794605   73375 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:30.794624   73375 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:30.794653   73375 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:30.794694   73375 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:30.794733   73375 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:30.794691   73375 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:30.794822   73375 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:30.794836   73375 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0930 21:07:30.796508   73375 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:30.796521   73375 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:30.796538   73375 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:30.796543   73375 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:30.796610   73375 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:30.796616   73375 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:30.796611   73375 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0930 21:07:30.796665   73375 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.018683   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0930 21:07:31.028097   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.117252   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.131998   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.136871   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.140418   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.170883   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.171059   73375 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0930 21:07:31.171098   73375 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.171142   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.172908   73375 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0930 21:07:31.172951   73375 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.172994   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.242489   73375 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0930 21:07:31.242541   73375 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.242609   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.246685   73375 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0930 21:07:31.246731   73375 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.246758   73375 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0930 21:07:31.246778   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.246794   73375 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.246837   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.270923   73375 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0930 21:07:31.270971   73375 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.271024   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.271030   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.271100   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.271109   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.271207   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.271269   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.387993   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.388011   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.388044   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.388091   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.388150   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.388230   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.523098   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.523156   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.523300   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.523344   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.523467   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.623696   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0930 21:07:31.623759   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.623778   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0930 21:07:31.623794   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0930 21:07:31.623869   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0930 21:07:31.632927   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0930 21:07:31.633014   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0930 21:07:31.633117   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.633206   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0930 21:07:31.633269   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0930 21:07:31.648925   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0930 21:07:31.648945   73375 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0930 21:07:31.648983   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0930 21:07:31.676886   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0930 21:07:31.676925   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0930 21:07:31.709210   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0930 21:07:31.709287   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0930 21:07:31.709331   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0930 21:07:31.709394   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0930 21:07:31.709330   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0930 21:07:32.112418   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:33.634620   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.985614953s)
	I0930 21:07:33.634656   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0930 21:07:33.634702   73375 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (1.925342294s)
	I0930 21:07:33.634716   73375 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0930 21:07:33.634731   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0930 21:07:33.634771   73375 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.925359685s)
	I0930 21:07:33.634779   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0930 21:07:33.634782   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0930 21:07:33.634853   73375 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.522405881s)
	I0930 21:07:33.634891   73375 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0930 21:07:33.634913   73375 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:33.634961   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:30.643828   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting to get IP...
	I0930 21:07:30.644936   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:30.645382   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:30.645484   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:30.645381   74769 retry.go:31] will retry after 216.832119ms: waiting for machine to come up
	I0930 21:07:30.863953   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:30.864583   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:30.864614   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:30.864518   74769 retry.go:31] will retry after 280.448443ms: waiting for machine to come up
	I0930 21:07:31.147184   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.147792   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.147826   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:31.147728   74769 retry.go:31] will retry after 345.517763ms: waiting for machine to come up
	I0930 21:07:31.495391   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.495819   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.495841   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:31.495786   74769 retry.go:31] will retry after 457.679924ms: waiting for machine to come up
	I0930 21:07:31.955479   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.955943   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.955974   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:31.955897   74769 retry.go:31] will retry after 562.95605ms: waiting for machine to come up
	I0930 21:07:32.520890   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:32.521339   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:32.521368   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:32.521285   74769 retry.go:31] will retry after 743.560182ms: waiting for machine to come up
	I0930 21:07:33.266407   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:33.266914   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:33.266941   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:33.266853   74769 retry.go:31] will retry after 947.444427ms: waiting for machine to come up
	I0930 21:07:34.216195   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:34.216705   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:34.216731   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:34.216659   74769 retry.go:31] will retry after 1.186059526s: waiting for machine to come up
	I0930 21:07:35.714633   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.079826486s)
	I0930 21:07:35.714667   73375 ssh_runner.go:235] Completed: which crictl: (2.079690884s)
	I0930 21:07:35.714721   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:35.714670   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0930 21:07:35.714786   73375 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0930 21:07:35.714821   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0930 21:07:35.753242   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:39.088354   73375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.335055656s)
	I0930 21:07:39.088395   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.373547177s)
	I0930 21:07:39.088422   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0930 21:07:39.088458   73375 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0930 21:07:39.088536   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0930 21:07:39.088459   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:35.404773   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:35.405334   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:35.405359   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:35.405225   74769 retry.go:31] will retry after 1.575803783s: waiting for machine to come up
	I0930 21:07:36.983196   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:36.983730   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:36.983759   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:36.983677   74769 retry.go:31] will retry after 2.020561586s: waiting for machine to come up
	I0930 21:07:39.006915   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:39.007304   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:39.007334   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:39.007269   74769 retry.go:31] will retry after 2.801421878s: waiting for machine to come up
	I0930 21:07:41.074012   73375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.985398095s)
	I0930 21:07:41.074061   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0930 21:07:41.074154   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.985588774s)
	I0930 21:07:41.074183   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0930 21:07:41.074202   73375 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0930 21:07:41.074244   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0930 21:07:41.074166   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0930 21:07:42.972016   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.897745882s)
	I0930 21:07:42.972055   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0930 21:07:42.972083   73375 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.8977868s)
	I0930 21:07:42.972110   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0930 21:07:42.972086   73375 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0930 21:07:42.972155   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0930 21:07:44.835190   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.863005436s)
	I0930 21:07:44.835237   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0930 21:07:44.835263   73375 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0930 21:07:44.835334   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0930 21:07:41.810719   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:41.811099   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:41.811117   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:41.811050   74769 retry.go:31] will retry after 2.703489988s: waiting for machine to come up
	I0930 21:07:44.515949   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:44.516329   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:44.516356   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:44.516276   74769 retry.go:31] will retry after 4.001267434s: waiting for machine to come up
	I0930 21:07:49.889033   73900 start.go:364] duration metric: took 4m7.028659379s to acquireMachinesLock for "old-k8s-version-621406"
	I0930 21:07:49.889104   73900 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:07:49.889111   73900 fix.go:54] fixHost starting: 
	I0930 21:07:49.889542   73900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:49.889600   73900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:49.906767   73900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43385
	I0930 21:07:49.907283   73900 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:49.907856   73900 main.go:141] libmachine: Using API Version  1
	I0930 21:07:49.907889   73900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:49.908203   73900 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:49.908397   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:07:49.908542   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetState
	I0930 21:07:49.910270   73900 fix.go:112] recreateIfNeeded on old-k8s-version-621406: state=Stopped err=<nil>
	I0930 21:07:49.910306   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	W0930 21:07:49.910441   73900 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:07:49.912646   73900 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-621406" ...
	I0930 21:07:45.483728   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0930 21:07:45.483778   73375 cache_images.go:123] Successfully loaded all cached images
	I0930 21:07:45.483785   73375 cache_images.go:92] duration metric: took 14.689240439s to LoadCachedImages
	I0930 21:07:45.483799   73375 kubeadm.go:934] updating node { 192.168.61.93 8443 v1.31.1 crio true true} ...
	I0930 21:07:45.483898   73375 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-997816 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.93
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-997816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
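The kubelet flags above end up in a systemd drop-in rather than in the main unit file; the log copies them to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. A minimal sketch of how to confirm what the kubelet will actually run with, assuming shell access to the VM (e.g. via minikube ssh):

  # Show the merged unit, including the 10-kubeadm.conf drop-in and its ExecStart override
  sudo systemctl cat kubelet
  # Confirm the effective command line after a reload/restart
  sudo systemctl daemon-reload && sudo systemctl restart kubelet
  sudo systemctl show kubelet -p ExecStart --no-pager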
	I0930 21:07:45.483977   73375 ssh_runner.go:195] Run: crio config
	I0930 21:07:45.529537   73375 cni.go:84] Creating CNI manager for ""
	I0930 21:07:45.529558   73375 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:07:45.529567   73375 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:07:45.529591   73375 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.93 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-997816 NodeName:no-preload-997816 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.93"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.93 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 21:07:45.529713   73375 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.93
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-997816"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.93
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.93"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
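The block above is a single multi-document YAML combining InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration; it is written to /var/tmp/minikube/kubeadm.yaml.new and later copied over /var/tmp/minikube/kubeadm.yaml. A rough sketch for inspecting it against kubeadm's own defaults (the local filenames are hypothetical; the profile name is taken from this run):

  # Pull the rendered config off the node
  minikube -p no-preload-997816 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml" > kubeadm-rendered.yaml
  # Print kubeadm's defaults for the same API groups and diff the two
  kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration > kubeadm-defaults.yaml
  diff -u kubeadm-defaults.yaml kubeadm-rendered.yaml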
	I0930 21:07:45.529775   73375 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 21:07:45.540251   73375 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:07:45.540323   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:07:45.549622   73375 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0930 21:07:45.565425   73375 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:07:45.580646   73375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0930 21:07:45.596216   73375 ssh_runner.go:195] Run: grep 192.168.61.93	control-plane.minikube.internal$ /etc/hosts
	I0930 21:07:45.604940   73375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.93	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:07:45.620809   73375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:45.751327   73375 ssh_runner.go:195] Run: sudo systemctl start kubelet
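At this point only the kubelet has been restarted; the control-plane static pods are regenerated later by the kubeadm phases. Had the restart stalled here, the usual checks on the node would look like this:

  sudo systemctl is-active kubelet              # expect "active"
  sudo journalctl -u kubelet --no-pager -n 50   # recent kubelet output if it is crash-looping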
	I0930 21:07:45.768664   73375 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816 for IP: 192.168.61.93
	I0930 21:07:45.768687   73375 certs.go:194] generating shared ca certs ...
	I0930 21:07:45.768702   73375 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:07:45.768896   73375 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:07:45.768953   73375 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:07:45.768967   73375 certs.go:256] generating profile certs ...
	I0930 21:07:45.769081   73375 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/client.key
	I0930 21:07:45.769188   73375 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/apiserver.key.c7192a03
	I0930 21:07:45.769251   73375 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/proxy-client.key
	I0930 21:07:45.769422   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:07:45.769468   73375 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:07:45.769483   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:07:45.769527   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:07:45.769569   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:07:45.769603   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:07:45.769672   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:07:45.770679   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:07:45.809391   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:07:45.837624   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:07:45.878472   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:07:45.909163   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0930 21:07:45.950655   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 21:07:45.974391   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:07:45.997258   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 21:07:46.019976   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:07:46.042828   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:07:46.066625   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:07:46.089639   73375 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:07:46.106202   73375 ssh_runner.go:195] Run: openssl version
	I0930 21:07:46.111810   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:07:46.122379   73375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:07:46.126659   73375 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:07:46.126699   73375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:07:46.132363   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 21:07:46.143074   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:07:46.154060   73375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:07:46.158542   73375 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:07:46.158602   73375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:07:46.164210   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:07:46.175160   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:07:46.186326   73375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:46.190782   73375 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:46.190856   73375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:46.196356   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
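The test-and-link commands above follow OpenSSL's hashed-symlink layout: each PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (51391683.0, 3ec20f2e.0 and b5213941.0 in this run), which is how TLS clients on the node locate the minikube CA. The same scheme for a single certificate, as a sketch (cert.pem stands for any CA certificate):

  HASH=$(openssl x509 -hash -noout -in cert.pem)            # e.g. b5213941
  sudo ln -fs "$(readlink -f cert.pem)" "/etc/ssl/certs/${HASH}.0"
  openssl verify -CApath /etc/ssl/certs cert.pem            # should report cert.pem: OK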
	I0930 21:07:46.206957   73375 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:07:46.211650   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:07:46.217398   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:07:46.223566   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:07:46.230204   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:07:46.236404   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:07:46.242282   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
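Each -checkend 86400 call exits non-zero if the certificate expires within 86400 seconds (24 hours), which is what would trigger regeneration. The same check over the full set from the log, as a small sketch using the standard /var/lib/minikube/certs layout:

  for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
    sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
      || echo "expires within 24h: ${c}"
  done
  for c in server healthcheck-client peer; do
    sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/etcd/${c}.crt" \
      || echo "expires within 24h: etcd/${c}"
  done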
	I0930 21:07:46.248591   73375 kubeadm.go:392] StartCluster: {Name:no-preload-997816 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-997816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:07:46.248686   73375 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:07:46.248731   73375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:07:46.292355   73375 cri.go:89] found id: ""
	I0930 21:07:46.292435   73375 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:07:46.303578   73375 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:07:46.303598   73375 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:07:46.303668   73375 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:07:46.314544   73375 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:07:46.315643   73375 kubeconfig.go:125] found "no-preload-997816" server: "https://192.168.61.93:8443"
	I0930 21:07:46.318243   73375 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:07:46.329751   73375 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.93
	I0930 21:07:46.329781   73375 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:07:46.329791   73375 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:07:46.329837   73375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:07:46.364302   73375 cri.go:89] found id: ""
	I0930 21:07:46.364392   73375 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:07:46.384616   73375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:07:46.395855   73375 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:07:46.395875   73375 kubeadm.go:157] found existing configuration files:
	
	I0930 21:07:46.395915   73375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:07:46.405860   73375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:07:46.405918   73375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:07:46.416618   73375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:07:46.426654   73375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:07:46.426712   73375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:07:46.435880   73375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:07:46.446273   73375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:07:46.446346   73375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:07:46.457099   73375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:07:46.467322   73375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:07:46.467386   73375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:07:46.477809   73375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:07:46.489024   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:46.605127   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:47.509287   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:47.708716   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:47.780830   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
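The restart path re-runs kubeadm phase by phase against the rendered config: certs, kubeconfigs, kubelet bootstrap, control-plane static pod manifests, and local etcd. Run by hand on the node, the same sequence would look roughly like this (paths, version and phase names copied from the log):

  CONF=/var/tmp/minikube/kubeadm.yaml
  BIN=/var/lib/minikube/binaries/v1.31.1
  for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
    sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CONF"
  done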
	I0930 21:07:47.883843   73375 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:07:47.883940   73375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:48.384688   73375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:48.884008   73375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:48.925804   73375 api_server.go:72] duration metric: took 1.041960261s to wait for apiserver process to appear ...
	I0930 21:07:48.925833   73375 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:07:48.925857   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
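The healthz wait is a plain HTTPS poll of the API server. From the node it can be reproduced with curl; a 401/403 only means the endpoint requires credentials, not that the apiserver is down:

  curl -sk https://192.168.61.93:8443/healthz ; echo    # "ok" once the control plane is healthy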
	I0930 21:07:48.521282   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.521838   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Found IP for machine: 192.168.50.2
	I0930 21:07:48.521864   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Reserving static IP address...
	I0930 21:07:48.521876   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has current primary IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.522306   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Reserved static IP address: 192.168.50.2
	I0930 21:07:48.522349   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-291511", mac: "52:54:00:27:46:45", ip: "192.168.50.2"} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.522361   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for SSH to be available...
	I0930 21:07:48.522401   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | skip adding static IP to network mk-default-k8s-diff-port-291511 - found existing host DHCP lease matching {name: "default-k8s-diff-port-291511", mac: "52:54:00:27:46:45", ip: "192.168.50.2"}
	I0930 21:07:48.522427   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Getting to WaitForSSH function...
	I0930 21:07:48.525211   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.525641   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.525667   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.525827   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Using SSH client type: external
	I0930 21:07:48.525854   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa (-rw-------)
	I0930 21:07:48.525883   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:07:48.525900   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | About to run SSH command:
	I0930 21:07:48.525913   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | exit 0
	I0930 21:07:48.655656   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | SSH cmd err, output: <nil>: 
	I0930 21:07:48.656045   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetConfigRaw
	I0930 21:07:48.656789   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetIP
	I0930 21:07:48.659902   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.660358   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.660395   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.660586   73707 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/config.json ...
	I0930 21:07:48.660842   73707 machine.go:93] provisionDockerMachine start ...
	I0930 21:07:48.660866   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:48.661063   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:48.663782   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.664138   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.664165   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.664318   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:48.664567   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.664733   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.664868   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:48.665036   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:48.665283   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:48.665315   73707 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:07:48.776382   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:07:48.776414   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetMachineName
	I0930 21:07:48.776676   73707 buildroot.go:166] provisioning hostname "default-k8s-diff-port-291511"
	I0930 21:07:48.776711   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetMachineName
	I0930 21:07:48.776913   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:48.779952   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.780470   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.780516   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.780594   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:48.780773   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.780925   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.781080   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:48.781253   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:48.781457   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:48.781473   73707 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-291511 && echo "default-k8s-diff-port-291511" | sudo tee /etc/hostname
	I0930 21:07:48.913633   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-291511
	
	I0930 21:07:48.913724   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:48.916869   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.917280   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.917319   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.917501   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:48.917715   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.917882   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.918117   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:48.918296   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:48.918533   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:48.918562   73707 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-291511' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-291511/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-291511' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:07:49.048106   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:07:49.048141   73707 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:07:49.048182   73707 buildroot.go:174] setting up certificates
	I0930 21:07:49.048198   73707 provision.go:84] configureAuth start
	I0930 21:07:49.048212   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetMachineName
	I0930 21:07:49.048498   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetIP
	I0930 21:07:49.051299   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.051665   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.051702   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.051837   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.054211   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.054512   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.054540   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.054691   73707 provision.go:143] copyHostCerts
	I0930 21:07:49.054774   73707 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:07:49.054789   73707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:07:49.054866   73707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:07:49.054982   73707 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:07:49.054994   73707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:07:49.055021   73707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:07:49.055097   73707 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:07:49.055106   73707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:07:49.055130   73707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:07:49.055189   73707 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-291511 san=[127.0.0.1 192.168.50.2 default-k8s-diff-port-291511 localhost minikube]
	I0930 21:07:49.239713   73707 provision.go:177] copyRemoteCerts
	I0930 21:07:49.239771   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:07:49.239796   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.242146   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.242468   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.242500   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.242663   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.242834   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.242982   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.243200   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:07:49.329405   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:07:49.358036   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0930 21:07:49.385742   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 21:07:49.409436   73707 provision.go:87] duration metric: took 361.22398ms to configureAuth
	I0930 21:07:49.409493   73707 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:07:49.409696   73707 config.go:182] Loaded profile config "default-k8s-diff-port-291511": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:07:49.409798   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.412572   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.412935   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.412975   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.413266   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.413476   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.413680   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.413821   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.414009   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:49.414199   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:49.414223   73707 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:07:49.635490   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:07:49.635553   73707 machine.go:96] duration metric: took 974.696002ms to provisionDockerMachine
	I0930 21:07:49.635567   73707 start.go:293] postStartSetup for "default-k8s-diff-port-291511" (driver="kvm2")
	I0930 21:07:49.635580   73707 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:07:49.635603   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.635954   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:07:49.635989   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.638867   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.639304   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.639340   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.639413   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.639631   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.639837   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.639995   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:07:49.728224   73707 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:07:49.732558   73707 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:07:49.732590   73707 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:07:49.732679   73707 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:07:49.732769   73707 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:07:49.732869   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:07:49.742783   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:07:49.766585   73707 start.go:296] duration metric: took 131.002562ms for postStartSetup
	I0930 21:07:49.766629   73707 fix.go:56] duration metric: took 20.430290493s for fixHost
	I0930 21:07:49.766652   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.769724   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.770143   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.770172   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.770461   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.770708   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.770872   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.771099   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.771240   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:49.771616   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:49.771636   73707 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:07:49.888863   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730469.865719956
	
	I0930 21:07:49.888889   73707 fix.go:216] guest clock: 1727730469.865719956
	I0930 21:07:49.888900   73707 fix.go:229] Guest: 2024-09-30 21:07:49.865719956 +0000 UTC Remote: 2024-09-30 21:07:49.76663417 +0000 UTC m=+259.507652750 (delta=99.085786ms)
	I0930 21:07:49.888943   73707 fix.go:200] guest clock delta is within tolerance: 99.085786ms
	I0930 21:07:49.888950   73707 start.go:83] releasing machines lock for "default-k8s-diff-port-291511", held for 20.552679126s
	I0930 21:07:49.888982   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.889242   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetIP
	I0930 21:07:49.892424   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.892817   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.892854   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.893030   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.893601   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.893780   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.893852   73707 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:07:49.893932   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.893934   73707 ssh_runner.go:195] Run: cat /version.json
	I0930 21:07:49.893985   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.896733   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.896843   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.897130   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.897179   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.897216   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.897233   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.897471   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.897478   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.897679   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.897686   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.897825   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.897834   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.897954   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:07:49.898097   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:07:50.022951   73707 ssh_runner.go:195] Run: systemctl --version
	I0930 21:07:50.029177   73707 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:07:50.186430   73707 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:07:50.193205   73707 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:07:50.193277   73707 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:07:50.211330   73707 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 21:07:50.211365   73707 start.go:495] detecting cgroup driver to use...
	I0930 21:07:50.211430   73707 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:07:50.227255   73707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:07:50.241404   73707 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:07:50.241468   73707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:07:50.257879   73707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:07:50.274595   73707 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:07:50.394354   73707 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:07:50.567503   73707 docker.go:233] disabling docker service ...
	I0930 21:07:50.567582   73707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:07:50.584390   73707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:07:50.600920   73707 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:07:50.742682   73707 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:07:50.882835   73707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:07:50.898340   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:07:50.919395   73707 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 21:07:50.919464   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.930773   73707 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:07:50.930846   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.941870   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.952633   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.964281   73707 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:07:50.977410   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.988423   73707 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:51.016091   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:51.027473   73707 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:07:51.037470   73707 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:07:51.037537   73707 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:07:51.056841   73707 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 21:07:51.068163   73707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:51.205357   73707 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 21:07:51.305327   73707 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:07:51.305410   73707 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:07:51.311384   73707 start.go:563] Will wait 60s for crictl version
	I0930 21:07:51.311448   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:07:51.315965   73707 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:07:51.369329   73707 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 21:07:51.369417   73707 ssh_runner.go:195] Run: crio --version
	I0930 21:07:51.399897   73707 ssh_runner.go:195] Run: crio --version
	I0930 21:07:51.431075   73707 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
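The block above is the CRI-O preparation step for default-k8s-diff-port-291511: minikube writes /etc/crictl.yaml, points pause_image and cgroup_manager at registry.k8s.io/pause:3.10 and cgroupfs via sed, enables net.ipv4.ip_unprivileged_port_start=0, restarts crio, and then waits up to 60s for /var/run/crio/crio.sock before probing crictl. A minimal Go sketch of that final wait, under the assumption that a simple stat poll is good enough; this is not the actual start.go code:

// Illustrative wait-for-socket loop, not the start.go implementation.
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the path exists or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is up")
}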
	I0930 21:07:49.914747   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .Start
	I0930 21:07:49.914948   73900 main.go:141] libmachine: (old-k8s-version-621406) Ensuring networks are active...
	I0930 21:07:49.915796   73900 main.go:141] libmachine: (old-k8s-version-621406) Ensuring network default is active
	I0930 21:07:49.916225   73900 main.go:141] libmachine: (old-k8s-version-621406) Ensuring network mk-old-k8s-version-621406 is active
	I0930 21:07:49.916890   73900 main.go:141] libmachine: (old-k8s-version-621406) Getting domain xml...
	I0930 21:07:49.917688   73900 main.go:141] libmachine: (old-k8s-version-621406) Creating domain...
	I0930 21:07:51.277867   73900 main.go:141] libmachine: (old-k8s-version-621406) Waiting to get IP...
	I0930 21:07:51.279001   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:51.279451   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:51.279552   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:51.279437   74917 retry.go:31] will retry after 307.582619ms: waiting for machine to come up
	I0930 21:07:51.589030   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:51.589414   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:51.589445   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:51.589368   74917 retry.go:31] will retry after 370.683214ms: waiting for machine to come up
	I0930 21:07:51.961914   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:51.962474   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:51.962511   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:51.962415   74917 retry.go:31] will retry after 428.703419ms: waiting for machine to come up
	I0930 21:07:52.393154   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:52.393682   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:52.393750   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:52.393673   74917 retry.go:31] will retry after 514.254023ms: waiting for machine to come up
	I0930 21:07:52.334804   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:07:52.334846   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:07:52.334863   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:52.377601   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:07:52.377632   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:07:52.426784   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:52.473771   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:07:52.473811   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:07:52.926391   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:52.945122   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:07:52.945154   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:07:53.426295   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:53.434429   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:07:53.434464   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:07:53.926642   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:53.931501   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 200:
	ok
	I0930 21:07:53.940069   73375 api_server.go:141] control plane version: v1.31.1
	I0930 21:07:53.940104   73375 api_server.go:131] duration metric: took 5.014262318s to wait for apiserver health ...
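The healthz exchange above is the apiserver settling after a restart: anonymous requests are first rejected with 403, then /healthz returns 500 while individual post-start hooks (crd-informer-synced, bootstrap-controller, rbac/bootstrap-roles, and so on) finish, and finally a 200 "ok". A rough Go sketch of such a poll loop, assuming an unauthenticated client that skips TLS verification because the probe runs before any client credentials are in play; it is not api_server.go itself:

// Illustrative healthz poller, not minikube's api_server.go.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	url := "https://192.168.61.93:8443/healthz" // endpoint taken from the log
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The control plane serves a self-signed cert during bring-up, so skip verification here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "apiserver did not become healthy in time")
	os.Exit(1)
}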
	I0930 21:07:53.940115   73375 cni.go:84] Creating CNI manager for ""
	I0930 21:07:53.940123   73375 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:07:53.941879   73375 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 21:07:53.943335   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:07:53.959585   73375 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
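The two lines above create /etc/cni/net.d on the node and copy a 496-byte 1-k8s.conflist into it to configure the bridge CNI. The log does not show the file's contents, so the sketch below only illustrates the general shape of a bridge plus host-local conflist; every field value is an assumption, not the real 1-k8s.conflist:

// Illustrative only: a generic bridge CNI config of the same general shape as
// the 1-k8s.conflist copied above. The actual file contents are not shown in
// the log; every value below is an assumption.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`

func main() {
	// Needs root on the node; shown only to make the file layout concrete.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		log.Fatal(err)
	}
}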
	I0930 21:07:53.996310   73375 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:07:54.010070   73375 system_pods.go:59] 8 kube-system pods found
	I0930 21:07:54.010129   73375 system_pods.go:61] "coredns-7c65d6cfc9-jg8ph" [46ba2867-485a-4b67-af4b-4de2c607d172] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:07:54.010142   73375 system_pods.go:61] "etcd-no-preload-997816" [1def50bb-1f1b-4d25-b797-38d5b782a674] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0930 21:07:54.010157   73375 system_pods.go:61] "kube-apiserver-no-preload-997816" [67313588-adcb-4d3f-ba8a-4e7a1ea5127b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0930 21:07:54.010174   73375 system_pods.go:61] "kube-controller-manager-no-preload-997816" [b471888b-d4e6-4768-a246-f234ffcbf1c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0930 21:07:54.010186   73375 system_pods.go:61] "kube-proxy-klcv8" [133bcd7f-667d-4969-b063-d33e2c8eed0f] Running
	I0930 21:07:54.010200   73375 system_pods.go:61] "kube-scheduler-no-preload-997816" [130a7a05-0889-4562-afc6-bee3ba4970a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0930 21:07:54.010212   73375 system_pods.go:61] "metrics-server-6867b74b74-c2wpn" [2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:07:54.010223   73375 system_pods.go:61] "storage-provisioner" [01617edf-b831-48d3-9002-279b64f6389c] Running
	I0930 21:07:54.010232   73375 system_pods.go:74] duration metric: took 13.897885ms to wait for pod list to return data ...
	I0930 21:07:54.010244   73375 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:07:54.019651   73375 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:07:54.019683   73375 node_conditions.go:123] node cpu capacity is 2
	I0930 21:07:54.019697   73375 node_conditions.go:105] duration metric: took 9.446744ms to run NodePressure ...
	I0930 21:07:54.019719   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:54.314348   73375 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0930 21:07:54.319583   73375 kubeadm.go:739] kubelet initialised
	I0930 21:07:54.319613   73375 kubeadm.go:740] duration metric: took 5.232567ms waiting for restarted kubelet to initialise ...
	I0930 21:07:54.319625   73375 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:07:54.326866   73375 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.333592   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.333628   73375 pod_ready.go:82] duration metric: took 6.72431ms for pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.333640   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.333651   73375 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.340155   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "etcd-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.340194   73375 pod_ready.go:82] duration metric: took 6.533127ms for pod "etcd-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.340208   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "etcd-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.340216   73375 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.346494   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "kube-apiserver-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.346530   73375 pod_ready.go:82] duration metric: took 6.304143ms for pod "kube-apiserver-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.346542   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "kube-apiserver-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.346551   73375 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.403699   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.403731   73375 pod_ready.go:82] duration metric: took 57.168471ms for pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.403743   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.403752   73375 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-klcv8" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.800372   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "kube-proxy-klcv8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.800410   73375 pod_ready.go:82] duration metric: took 396.646883ms for pod "kube-proxy-klcv8" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.800423   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "kube-proxy-klcv8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.800432   73375 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-997816" in "kube-system" namespace to be "Ready" ...
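In the pod_ready.go block above each system pod is retried for up to 4m0s, but the wait is short-circuited with a "skipping!" note because the node no-preload-997816 itself still reports Ready=False. A condensed Go sketch of that kind of readiness probe using client-go; the kubeconfig path is a placeholder and the skip-on-node-not-ready logic is omitted:

// Illustrative pod-readiness poll with client-go, not minikube's pod_ready.go.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; pod name taken from the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-997816", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("pod never became Ready")
}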
	I0930 21:07:51.432761   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetIP
	I0930 21:07:51.436278   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:51.436659   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:51.436700   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:51.436931   73707 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0930 21:07:51.441356   73707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:07:51.454358   73707 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-291511 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-291511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:07:51.454484   73707 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 21:07:51.454547   73707 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:07:51.502072   73707 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 21:07:51.502143   73707 ssh_runner.go:195] Run: which lz4
	I0930 21:07:51.506458   73707 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 21:07:51.510723   73707 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 21:07:51.510756   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 21:07:52.792488   73707 crio.go:462] duration metric: took 1.286075452s to copy over tarball
	I0930 21:07:52.792580   73707 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 21:07:55.207282   73707 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.414661305s)
	I0930 21:07:55.207314   73707 crio.go:469] duration metric: took 2.414793514s to extract the tarball
	I0930 21:07:55.207321   73707 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 21:07:55.244001   73707 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:07:55.287097   73707 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 21:07:55.287124   73707 cache_images.go:84] Images are preloaded, skipping loading
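The preload sequence above runs crictl images --output json, decides the v1.31.1 images are missing, copies the 388599353-byte preloaded tarball over SSH, untars it into /var, and re-checks until all images are preloaded. A small Go sketch of the image check itself; the JSON field names are assumptions about crictl's output format rather than anything taken from this log:

// Illustrative check that an expected image shows up in `crictl images --output json`.
// The JSON field names below are assumptions, not taken from the log.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		log.Fatal(err)
	}
	want := "registry.k8s.io/kube-apiserver:v1.31.1" // the tag the log checks for
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.EqualFold(tag, want) {
				fmt.Println("found", want)
				return
			}
		}
	}
	fmt.Println(want, "not found; preload needed")
}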
	I0930 21:07:55.287133   73707 kubeadm.go:934] updating node { 192.168.50.2 8444 v1.31.1 crio true true} ...
	I0930 21:07:55.287277   73707 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-291511 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-291511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 21:07:55.287384   73707 ssh_runner.go:195] Run: crio config
	I0930 21:07:55.200512   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "kube-scheduler-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:55.200559   73375 pod_ready.go:82] duration metric: took 400.11341ms for pod "kube-scheduler-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:55.200569   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "kube-scheduler-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:55.200577   73375 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:55.601008   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:55.601042   73375 pod_ready.go:82] duration metric: took 400.453601ms for pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:55.601055   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:55.601065   73375 pod_ready.go:39] duration metric: took 1.281429189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:07:55.601086   73375 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 21:07:55.617767   73375 ops.go:34] apiserver oom_adj: -16
	I0930 21:07:55.617791   73375 kubeadm.go:597] duration metric: took 9.314187459s to restartPrimaryControlPlane
	I0930 21:07:55.617803   73375 kubeadm.go:394] duration metric: took 9.369220314s to StartCluster
	I0930 21:07:55.617824   73375 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:07:55.617913   73375 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:07:55.619455   73375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:07:55.619760   73375 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 21:07:55.619842   73375 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 21:07:55.619959   73375 addons.go:69] Setting storage-provisioner=true in profile "no-preload-997816"
	I0930 21:07:55.619984   73375 addons.go:234] Setting addon storage-provisioner=true in "no-preload-997816"
	I0930 21:07:55.619974   73375 addons.go:69] Setting default-storageclass=true in profile "no-preload-997816"
	I0930 21:07:55.620003   73375 addons.go:69] Setting metrics-server=true in profile "no-preload-997816"
	I0930 21:07:55.620009   73375 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-997816"
	I0930 21:07:55.620020   73375 addons.go:234] Setting addon metrics-server=true in "no-preload-997816"
	W0930 21:07:55.620031   73375 addons.go:243] addon metrics-server should already be in state true
	I0930 21:07:55.620050   73375 config.go:182] Loaded profile config "no-preload-997816": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:07:55.620061   73375 host.go:66] Checking if "no-preload-997816" exists ...
	W0930 21:07:55.619994   73375 addons.go:243] addon storage-provisioner should already be in state true
	I0930 21:07:55.620124   73375 host.go:66] Checking if "no-preload-997816" exists ...
	I0930 21:07:55.620420   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.620459   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.620494   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.620535   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.620593   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.620634   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.621682   73375 out.go:177] * Verifying Kubernetes components...
	I0930 21:07:55.623102   73375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:55.643690   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35581
	I0930 21:07:55.643895   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35545
	I0930 21:07:55.644411   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.644553   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.644968   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.644981   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.645072   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.645078   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.645314   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.645502   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.645732   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:55.645777   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.645812   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.649244   73375 addons.go:234] Setting addon default-storageclass=true in "no-preload-997816"
	W0930 21:07:55.649262   73375 addons.go:243] addon default-storageclass should already be in state true
	I0930 21:07:55.649283   73375 host.go:66] Checking if "no-preload-997816" exists ...
	I0930 21:07:55.649524   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.649548   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.671077   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42635
	I0930 21:07:55.671558   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.672193   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.672212   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.672505   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45163
	I0930 21:07:55.672736   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.672808   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44481
	I0930 21:07:55.673354   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.673396   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.673920   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.673926   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.674528   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.674545   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.674974   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.675624   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.675658   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.676078   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.676095   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.676547   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.676724   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:55.679115   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:55.681410   73375 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:55.688953   73375 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:07:55.688981   73375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 21:07:55.689015   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:55.693338   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.693996   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:55.694023   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.694212   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:55.694344   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:55.694444   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:55.694545   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:55.696037   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46075
	I0930 21:07:55.696535   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.697185   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.697207   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.697567   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.697772   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:55.699797   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:55.700998   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I0930 21:07:55.701429   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.702094   73375 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0930 21:07:52.909622   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:52.910169   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:52.910202   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:52.910132   74917 retry.go:31] will retry after 605.019848ms: waiting for machine to come up
	I0930 21:07:53.517276   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:53.517911   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:53.517943   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:53.517858   74917 retry.go:31] will retry after 856.018614ms: waiting for machine to come up
	I0930 21:07:54.376343   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:54.376838   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:54.376862   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:54.376794   74917 retry.go:31] will retry after 740.749778ms: waiting for machine to come up
	I0930 21:07:55.119090   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:55.119631   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:55.119660   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:55.119583   74917 retry.go:31] will retry after 1.444139076s: waiting for machine to come up
	I0930 21:07:56.566261   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:56.566744   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:56.566771   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:56.566695   74917 retry.go:31] will retry after 1.681362023s: waiting for machine to come up
	I0930 21:07:55.703687   73375 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 21:07:55.703709   73375 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 21:07:55.703736   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:55.703788   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.703816   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.704295   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.704553   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:55.707029   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:55.707365   73375 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 21:07:55.707385   73375 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 21:07:55.707408   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:55.708091   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.708606   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:55.708629   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.709024   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:55.709237   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:55.709388   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:55.709573   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:55.711123   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.711607   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:55.711631   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.711987   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:55.712178   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:55.712318   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:55.712469   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:55.888447   73375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:07:55.912060   73375 node_ready.go:35] waiting up to 6m0s for node "no-preload-997816" to be "Ready" ...
	I0930 21:07:56.010903   73375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 21:07:56.012576   73375 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 21:07:56.012601   73375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0930 21:07:56.038592   73375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:07:56.055481   73375 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 21:07:56.055513   73375 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 21:07:56.131820   73375 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:07:56.131844   73375 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 21:07:56.213605   73375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:07:57.078385   73375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.067447636s)
	I0930 21:07:57.078439   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:57.078451   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:57.078770   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:57.078823   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:57.078836   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:57.078845   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:57.078793   73375 main.go:141] libmachine: (no-preload-997816) DBG | Closing plugin on server side
	I0930 21:07:57.079118   73375 main.go:141] libmachine: (no-preload-997816) DBG | Closing plugin on server side
	I0930 21:07:57.079149   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:57.079157   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:57.672706   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:57.672737   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:57.673053   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:57.673072   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:58.301165   73375 node_ready.go:53] node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:59.072488   73375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.858837368s)
	I0930 21:07:59.072565   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:59.072582   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:59.072921   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:59.072986   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:59.073029   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:59.073038   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:59.073221   73375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.034599023s)
	I0930 21:07:59.073271   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:59.073344   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:59.073383   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:59.073397   73375 addons.go:475] Verifying addon metrics-server=true in "no-preload-997816"
	I0930 21:07:59.073347   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:59.073754   73375 main.go:141] libmachine: (no-preload-997816) DBG | Closing plugin on server side
	I0930 21:07:59.073804   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:59.073819   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:59.073834   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:59.073846   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:59.075323   73375 main.go:141] libmachine: (no-preload-997816) DBG | Closing plugin on server side
	I0930 21:07:59.075329   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:59.075353   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:59.077687   73375 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0930 21:07:59.079278   73375 addons.go:510] duration metric: took 3.459453938s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0930 21:07:55.346656   73707 cni.go:84] Creating CNI manager for ""
	I0930 21:07:55.346679   73707 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:07:55.346688   73707 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:07:55.346718   73707 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.2 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-291511 NodeName:default-k8s-diff-port-291511 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 21:07:55.346847   73707 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-291511"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 21:07:55.346903   73707 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 21:07:55.356645   73707 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:07:55.356708   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:07:55.366457   73707 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0930 21:07:55.384639   73707 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:07:55.403208   73707 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0930 21:07:55.421878   73707 ssh_runner.go:195] Run: grep 192.168.50.2	control-plane.minikube.internal$ /etc/hosts
	I0930 21:07:55.425803   73707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:07:55.439370   73707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:55.553575   73707 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:07:55.570754   73707 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511 for IP: 192.168.50.2
	I0930 21:07:55.570787   73707 certs.go:194] generating shared ca certs ...
	I0930 21:07:55.570808   73707 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:07:55.571011   73707 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:07:55.571067   73707 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:07:55.571083   73707 certs.go:256] generating profile certs ...
	I0930 21:07:55.571178   73707 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/client.key
	I0930 21:07:55.571270   73707 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/apiserver.key.2e3224d9
	I0930 21:07:55.571326   73707 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/proxy-client.key
	I0930 21:07:55.571464   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:07:55.571510   73707 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:07:55.571522   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:07:55.571587   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:07:55.571627   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:07:55.571655   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:07:55.571719   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:07:55.572367   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:07:55.606278   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:07:55.645629   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:07:55.690514   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:07:55.737445   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0930 21:07:55.773656   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 21:07:55.804015   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:07:55.830210   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 21:07:55.857601   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:07:55.887765   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:07:55.922053   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:07:55.951040   73707 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:07:55.969579   73707 ssh_runner.go:195] Run: openssl version
	I0930 21:07:55.975576   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:07:55.987255   73707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:07:55.993657   73707 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:07:55.993723   73707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:07:56.001878   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:07:56.017528   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:07:56.030398   73707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:56.035552   73707 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:56.035625   73707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:56.043878   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 21:07:56.055384   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:07:56.066808   73707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:07:56.073099   73707 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:07:56.073164   73707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:07:56.081343   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 21:07:56.096669   73707 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:07:56.102635   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:07:56.110805   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:07:56.118533   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:07:56.125800   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:07:56.133985   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:07:56.142109   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0930 21:07:56.150433   73707 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-291511 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-291511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:07:56.150538   73707 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:07:56.150608   73707 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:07:56.197936   73707 cri.go:89] found id: ""
	I0930 21:07:56.198016   73707 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:07:56.208133   73707 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:07:56.208155   73707 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:07:56.208204   73707 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:07:56.218880   73707 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:07:56.220322   73707 kubeconfig.go:125] found "default-k8s-diff-port-291511" server: "https://192.168.50.2:8444"
	I0930 21:07:56.223557   73707 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:07:56.233844   73707 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.2
	I0930 21:07:56.233876   73707 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:07:56.233889   73707 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:07:56.233970   73707 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:07:56.280042   73707 cri.go:89] found id: ""
	I0930 21:07:56.280129   73707 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:07:56.304291   73707 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:07:56.317987   73707 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:07:56.318012   73707 kubeadm.go:157] found existing configuration files:
	
	I0930 21:07:56.318076   73707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0930 21:07:56.331377   73707 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:07:56.331448   73707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:07:56.342380   73707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0930 21:07:56.354949   73707 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:07:56.355030   73707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:07:56.368385   73707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0930 21:07:56.378798   73707 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:07:56.378883   73707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:07:56.390167   73707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0930 21:07:56.400338   73707 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:07:56.400413   73707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:07:56.410735   73707 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:07:56.426910   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:56.557126   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:57.682738   73707 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.125574645s)
	I0930 21:07:57.682777   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:57.908684   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:57.983925   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:58.088822   73707 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:07:58.088930   73707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:58.589565   73707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:59.089483   73707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:59.110240   73707 api_server.go:72] duration metric: took 1.021416929s to wait for apiserver process to appear ...
	I0930 21:07:59.110279   73707 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:07:59.110328   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:07:59.110843   73707 api_server.go:269] stopped: https://192.168.50.2:8444/healthz: Get "https://192.168.50.2:8444/healthz": dial tcp 192.168.50.2:8444: connect: connection refused
	I0930 21:07:59.611045   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:07:58.250468   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:58.251041   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:58.251062   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:58.250979   74917 retry.go:31] will retry after 2.260492343s: waiting for machine to come up
	I0930 21:08:00.513613   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:00.514129   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:08:00.514194   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:08:00.514117   74917 retry.go:31] will retry after 2.449694064s: waiting for machine to come up
	I0930 21:08:02.200888   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:08:02.200918   73707 api_server.go:103] status: https://192.168.50.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:08:02.200930   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:08:02.240477   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:08:02.240513   73707 api_server.go:103] status: https://192.168.50.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:08:02.611111   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:08:02.615548   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:02.615578   73707 api_server.go:103] status: https://192.168.50.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:03.111216   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:08:03.118078   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:03.118102   73707 api_server.go:103] status: https://192.168.50.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:03.610614   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:08:03.615203   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 200:
	ok
	I0930 21:08:03.621652   73707 api_server.go:141] control plane version: v1.31.1
	I0930 21:08:03.621680   73707 api_server.go:131] duration metric: took 4.511393989s to wait for apiserver health ...
	I0930 21:08:03.621689   73707 cni.go:84] Creating CNI manager for ""
	I0930 21:08:03.621694   73707 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:03.624026   73707 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 21:08:00.416356   73375 node_ready.go:53] node "no-preload-997816" has status "Ready":"False"
	I0930 21:08:02.416469   73375 node_ready.go:53] node "no-preload-997816" has status "Ready":"False"
	I0930 21:08:02.916643   73375 node_ready.go:49] node "no-preload-997816" has status "Ready":"True"
	I0930 21:08:02.916668   73375 node_ready.go:38] duration metric: took 7.004576501s for node "no-preload-997816" to be "Ready" ...
	I0930 21:08:02.916679   73375 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:02.922833   73375 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:02.928873   73375 pod_ready.go:93] pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:02.928895   73375 pod_ready.go:82] duration metric: took 6.034388ms for pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:02.928904   73375 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.934668   73375 pod_ready.go:103] pod "etcd-no-preload-997816" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:03.625416   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:08:03.640241   73707 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 21:08:03.664231   73707 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:08:03.679372   73707 system_pods.go:59] 8 kube-system pods found
	I0930 21:08:03.679409   73707 system_pods.go:61] "coredns-7c65d6cfc9-hdjjq" [5672cd58-4d3f-409e-b279-f4027fe09aea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:08:03.679425   73707 system_pods.go:61] "etcd-default-k8s-diff-port-291511" [228b61a2-a110-4029-96e5-950e44f5290f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0930 21:08:03.679435   73707 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-291511" [a6991ee1-6c61-49b5-adb5-fb6175386bfe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0930 21:08:03.679447   73707 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-291511" [4ba3f2a2-ac38-4483-bbd0-f21d934d97d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0930 21:08:03.679456   73707 system_pods.go:61] "kube-proxy-kwp22" [87e5295f-3aaa-4222-a61a-942354f79f9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0930 21:08:03.679466   73707 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-291511" [b03fc09c-ddee-4593-9be5-8117892932f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0930 21:08:03.679472   73707 system_pods.go:61] "metrics-server-6867b74b74-txb2j" [6f0ec8d2-5528-4f70-807c-42cbabae23bb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:08:03.679482   73707 system_pods.go:61] "storage-provisioner" [32053345-1ff9-45b1-aa70-e746926b305d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0930 21:08:03.679490   73707 system_pods.go:74] duration metric: took 15.234407ms to wait for pod list to return data ...
	I0930 21:08:03.679509   73707 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:08:03.698332   73707 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:08:03.698363   73707 node_conditions.go:123] node cpu capacity is 2
	I0930 21:08:03.698374   73707 node_conditions.go:105] duration metric: took 18.857709ms to run NodePressure ...
	I0930 21:08:03.698394   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:03.968643   73707 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0930 21:08:03.974075   73707 kubeadm.go:739] kubelet initialised
	I0930 21:08:03.974098   73707 kubeadm.go:740] duration metric: took 5.424573ms waiting for restarted kubelet to initialise ...
	I0930 21:08:03.974105   73707 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:03.982157   73707 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:03.989298   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:03.989329   73707 pod_ready.go:82] duration metric: took 7.140381ms for pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:03.989338   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:03.989345   73707 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:03.995739   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:03.995773   73707 pod_ready.go:82] duration metric: took 6.418854ms for pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:03.995787   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:03.995797   73707 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.002071   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.002093   73707 pod_ready.go:82] duration metric: took 6.287919ms for pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:04.002104   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.002110   73707 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.071732   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.071760   73707 pod_ready.go:82] duration metric: took 69.643681ms for pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:04.071771   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.071777   73707 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kwp22" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.468580   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "kube-proxy-kwp22" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.468605   73707 pod_ready.go:82] duration metric: took 396.820558ms for pod "kube-proxy-kwp22" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:04.468614   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "kube-proxy-kwp22" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.468620   73707 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.868042   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.868067   73707 pod_ready.go:82] duration metric: took 399.438278ms for pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:04.868078   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.868085   73707 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:05.267893   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:05.267925   73707 pod_ready.go:82] duration metric: took 399.831615ms for pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:05.267937   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:05.267945   73707 pod_ready.go:39] duration metric: took 1.293832472s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:05.267960   73707 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 21:08:05.282162   73707 ops.go:34] apiserver oom_adj: -16
	I0930 21:08:05.282188   73707 kubeadm.go:597] duration metric: took 9.074027172s to restartPrimaryControlPlane
	I0930 21:08:05.282199   73707 kubeadm.go:394] duration metric: took 9.131777336s to StartCluster
	I0930 21:08:05.282216   73707 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:05.282338   73707 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:08:05.283862   73707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:05.284135   73707 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 21:08:05.284201   73707 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 21:08:05.284287   73707 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-291511"
	I0930 21:08:05.284305   73707 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-291511"
	W0930 21:08:05.284313   73707 addons.go:243] addon storage-provisioner should already be in state true
	I0930 21:08:05.284340   73707 host.go:66] Checking if "default-k8s-diff-port-291511" exists ...
	I0930 21:08:05.284339   73707 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-291511"
	I0930 21:08:05.284385   73707 config.go:182] Loaded profile config "default-k8s-diff-port-291511": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:08:05.284399   73707 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-291511"
	I0930 21:08:05.284359   73707 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-291511"
	I0930 21:08:05.284432   73707 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-291511"
	W0930 21:08:05.284448   73707 addons.go:243] addon metrics-server should already be in state true
	I0930 21:08:05.284486   73707 host.go:66] Checking if "default-k8s-diff-port-291511" exists ...
	I0930 21:08:05.284739   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.284760   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.284784   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.284794   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.284890   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.284931   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.286020   73707 out.go:177] * Verifying Kubernetes components...
	I0930 21:08:05.287268   73707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:05.302045   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39289
	I0930 21:08:05.302587   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.303190   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.303219   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.303631   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.304213   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.304258   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.304484   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41129
	I0930 21:08:05.304676   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39211
	I0930 21:08:05.304884   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.305175   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.305353   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.305377   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.305642   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.305660   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.305724   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.305933   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:08:05.306016   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.306580   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.306623   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.309757   73707 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-291511"
	W0930 21:08:05.309778   73707 addons.go:243] addon default-storageclass should already be in state true
	I0930 21:08:05.309805   73707 host.go:66] Checking if "default-k8s-diff-port-291511" exists ...
	I0930 21:08:05.310163   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.310208   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.320335   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43189
	I0930 21:08:05.320928   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.321496   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.321520   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.321922   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.322082   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:08:05.324111   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:08:05.325867   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42389
	I0930 21:08:05.325879   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37397
	I0930 21:08:05.326252   73707 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0930 21:08:05.326337   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.326280   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.326847   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.326862   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.326982   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.326999   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.327239   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.327313   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.327467   73707 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 21:08:05.327485   73707 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 21:08:05.327507   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:08:05.327597   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:08:05.327778   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.327806   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.329862   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:08:05.331454   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.331654   73707 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:05.331959   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:08:05.331996   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.332184   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:08:05.332355   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:08:05.332577   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:08:05.332699   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:08:05.332956   73707 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:08:05.332972   73707 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 21:08:05.332990   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:08:05.336234   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.336634   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:08:05.336661   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.336885   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:08:05.337134   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:08:05.337271   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:08:05.337447   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:08:05.345334   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34613
	I0930 21:08:05.345908   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.346393   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.346424   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.346749   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.346887   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:08:05.348836   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:08:05.349033   73707 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 21:08:05.349048   73707 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 21:08:05.349067   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:08:05.351835   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.352222   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:08:05.352277   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.352401   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:08:05.352644   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:08:05.352786   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:08:05.352886   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:08:05.475274   73707 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:08:05.496035   73707 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-291511" to be "Ready" ...
	I0930 21:08:05.564715   73707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:08:05.574981   73707 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 21:08:05.575006   73707 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0930 21:08:05.613799   73707 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 21:08:05.613822   73707 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 21:08:05.618503   73707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 21:08:05.689563   73707 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:08:05.689588   73707 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 21:08:05.769327   73707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:08:06.831657   73707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.266911261s)
	I0930 21:08:06.831717   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.831727   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.831735   73707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.213199657s)
	I0930 21:08:06.831780   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.831797   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.832054   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.832071   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.832079   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.832086   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.832146   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.832164   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.832182   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.832195   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.832203   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.832291   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.832305   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.832316   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.832477   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.832483   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.832512   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.838509   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.838534   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.838786   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.838801   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.838806   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.956747   73707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.187373699s)
	I0930 21:08:06.956803   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.956819   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.957097   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.958516   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.958531   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.958542   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.958548   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.958842   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.958863   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.958873   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.958875   73707 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-291511"
	I0930 21:08:06.961299   73707 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
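The sequence above is minikube's addon-enable path: each manifest is copied to the node over SSH (the ssh_runner scp lines), applied with the bundled kubectl against /var/lib/minikube/kubeconfig, and then the kvm2 driver plugin connections are closed. The result can be spot-checked from the host, roughly as follows (a sketch; the metrics-server deployment name and the kube-system namespace are inferred from the manifests being applied, not shown verbatim in this log):

    # confirm the metrics-server addon produced a Deployment in kube-system
    kubectl --context default-k8s-diff-port-291511 -n kube-system get deployment metrics-server

    # list the addons minikube considers enabled for this profile
    minikube -p default-k8s-diff-port-291511 addons list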
	I0930 21:08:02.965767   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:02.966135   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:08:02.966157   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:08:02.966086   74917 retry.go:31] will retry after 2.951226221s: waiting for machine to come up
	I0930 21:08:05.919389   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:05.919894   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:08:05.919937   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:08:05.919827   74917 retry.go:31] will retry after 2.747969391s: waiting for machine to come up
	I0930 21:08:09.916514   73256 start.go:364] duration metric: took 52.875691449s to acquireMachinesLock for "embed-certs-256103"
	I0930 21:08:09.916583   73256 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:08:09.916592   73256 fix.go:54] fixHost starting: 
	I0930 21:08:09.916972   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:09.917000   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:09.935009   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42043
	I0930 21:08:09.935493   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:09.936052   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:08:09.936073   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:09.936443   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:09.936617   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:09.936762   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:08:09.938608   73256 fix.go:112] recreateIfNeeded on embed-certs-256103: state=Stopped err=<nil>
	I0930 21:08:09.938639   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	W0930 21:08:09.938811   73256 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:08:09.940789   73256 out.go:177] * Restarting existing kvm2 VM for "embed-certs-256103" ...
	I0930 21:08:05.936626   73375 pod_ready.go:93] pod "etcd-no-preload-997816" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:05.936660   73375 pod_ready.go:82] duration metric: took 3.007747597s for pod "etcd-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:05.936674   73375 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:05.942154   73375 pod_ready.go:93] pod "kube-apiserver-no-preload-997816" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:05.942196   73375 pod_ready.go:82] duration metric: took 5.502965ms for pod "kube-apiserver-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:05.942209   73375 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.949366   73375 pod_ready.go:93] pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:06.949402   73375 pod_ready.go:82] duration metric: took 1.007183809s for pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.949413   73375 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-klcv8" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.955060   73375 pod_ready.go:93] pod "kube-proxy-klcv8" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:06.955088   73375 pod_ready.go:82] duration metric: took 5.667172ms for pod "kube-proxy-klcv8" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.955100   73375 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.961684   73375 pod_ready.go:93] pod "kube-scheduler-no-preload-997816" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:06.961706   73375 pod_ready.go:82] duration metric: took 6.597856ms for pod "kube-scheduler-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.961718   73375 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:08.967525   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
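The pod_ready lines above poll the Ready condition of each control-plane pod in the kube-system namespace until it is True or the 6m0s budget expires; metrics-server is the one still reporting False here. The equivalent check from the host looks roughly like this (a sketch; pod names are taken from the log lines above):

    # mirror pod_ready.go: block until the pod's Ready condition is True, or time out
    kubectl --context no-preload-997816 -n kube-system wait \
      --for=condition=Ready pod/kube-scheduler-no-preload-997816 --timeout=6m0s

    # inspect the pod that is still not Ready
    kubectl --context no-preload-997816 -n kube-system get pod metrics-server-6867b74b74-c2wpn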
	I0930 21:08:06.962594   73707 addons.go:510] duration metric: took 1.678396512s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0930 21:08:07.499805   73707 node_ready.go:53] node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:09.500771   73707 node_ready.go:53] node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:08.671179   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.671686   73900 main.go:141] libmachine: (old-k8s-version-621406) Found IP for machine: 192.168.72.159
	I0930 21:08:08.671711   73900 main.go:141] libmachine: (old-k8s-version-621406) Reserving static IP address...
	I0930 21:08:08.671729   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has current primary IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.672178   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "old-k8s-version-621406", mac: "52:54:00:9b:e3:ab", ip: "192.168.72.159"} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.672220   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | skip adding static IP to network mk-old-k8s-version-621406 - found existing host DHCP lease matching {name: "old-k8s-version-621406", mac: "52:54:00:9b:e3:ab", ip: "192.168.72.159"}
	I0930 21:08:08.672231   73900 main.go:141] libmachine: (old-k8s-version-621406) Reserved static IP address: 192.168.72.159
	I0930 21:08:08.672246   73900 main.go:141] libmachine: (old-k8s-version-621406) Waiting for SSH to be available...
	I0930 21:08:08.672254   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | Getting to WaitForSSH function...
	I0930 21:08:08.674566   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.674931   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.674969   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.675128   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | Using SSH client type: external
	I0930 21:08:08.675170   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa (-rw-------)
	I0930 21:08:08.675212   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:08:08.675229   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | About to run SSH command:
	I0930 21:08:08.675244   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | exit 0
	I0930 21:08:08.799368   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | SSH cmd err, output: <nil>: 
	I0930 21:08:08.799751   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetConfigRaw
	I0930 21:08:08.800421   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:08.803151   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.803596   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.803620   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.803922   73900 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/config.json ...
	I0930 21:08:08.804195   73900 machine.go:93] provisionDockerMachine start ...
	I0930 21:08:08.804246   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:08.804502   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:08.806822   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.807240   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.807284   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.807521   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:08.807735   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.807890   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.808077   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:08.808239   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:08.808480   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:08.808493   73900 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:08:08.912058   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:08:08.912135   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 21:08:08.912407   73900 buildroot.go:166] provisioning hostname "old-k8s-version-621406"
	I0930 21:08:08.912432   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 21:08:08.912662   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:08.915366   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.915722   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.915750   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.915892   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:08.916107   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.916330   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.916492   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:08.916673   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:08.916932   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:08.916957   73900 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-621406 && echo "old-k8s-version-621406" | sudo tee /etc/hostname
	I0930 21:08:09.034260   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-621406
	
	I0930 21:08:09.034296   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.037149   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.037509   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.037538   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.037799   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.037986   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.038163   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.038327   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.038473   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:09.038695   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:09.038714   73900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-621406' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-621406/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-621406' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:08:09.152190   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:08:09.152228   73900 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:08:09.152255   73900 buildroot.go:174] setting up certificates
	I0930 21:08:09.152275   73900 provision.go:84] configureAuth start
	I0930 21:08:09.152288   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 21:08:09.152577   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:09.155203   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.155589   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.155620   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.155783   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.157964   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.158362   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.158392   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.158520   73900 provision.go:143] copyHostCerts
	I0930 21:08:09.158592   73900 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:08:09.158605   73900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:08:09.158704   73900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:08:09.158851   73900 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:08:09.158864   73900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:08:09.158895   73900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:08:09.158970   73900 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:08:09.158977   73900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:08:09.158996   73900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:08:09.159054   73900 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-621406 san=[127.0.0.1 192.168.72.159 localhost minikube old-k8s-version-621406]
	I0930 21:08:09.301267   73900 provision.go:177] copyRemoteCerts
	I0930 21:08:09.301322   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:08:09.301349   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.304344   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.304766   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.304796   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.304998   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.305187   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.305321   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.305439   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:09.390851   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0930 21:08:09.415712   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 21:08:09.439567   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:08:09.463427   73900 provision.go:87] duration metric: took 311.139024ms to configureAuth
	I0930 21:08:09.463459   73900 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:08:09.463713   73900 config.go:182] Loaded profile config "old-k8s-version-621406": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0930 21:08:09.463809   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.466757   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.467129   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.467160   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.467326   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.467513   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.467694   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.467843   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.468004   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:09.468175   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:09.468190   73900 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:08:09.684657   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:08:09.684684   73900 machine.go:96] duration metric: took 880.473418ms to provisionDockerMachine
	I0930 21:08:09.684698   73900 start.go:293] postStartSetup for "old-k8s-version-621406" (driver="kvm2")
	I0930 21:08:09.684709   73900 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:08:09.684730   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.685075   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:08:09.685114   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.688051   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.688517   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.688542   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.688725   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.688928   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.689070   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.689265   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:09.770572   73900 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:08:09.775149   73900 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:08:09.775181   73900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:08:09.775268   73900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:08:09.775364   73900 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:08:09.775453   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:08:09.784753   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:09.807989   73900 start.go:296] duration metric: took 123.276522ms for postStartSetup
	I0930 21:08:09.808033   73900 fix.go:56] duration metric: took 19.918922935s for fixHost
	I0930 21:08:09.808053   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.811242   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.811656   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.811692   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.811852   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.812064   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.812239   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.812380   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.812522   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:09.812704   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:09.812719   73900 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:08:09.916349   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730489.889323893
	
	I0930 21:08:09.916376   73900 fix.go:216] guest clock: 1727730489.889323893
	I0930 21:08:09.916384   73900 fix.go:229] Guest: 2024-09-30 21:08:09.889323893 +0000 UTC Remote: 2024-09-30 21:08:09.808037625 +0000 UTC m=+267.093327666 (delta=81.286268ms)
	I0930 21:08:09.916403   73900 fix.go:200] guest clock delta is within tolerance: 81.286268ms
	I0930 21:08:09.916408   73900 start.go:83] releasing machines lock for "old-k8s-version-621406", held for 20.027328296s
	I0930 21:08:09.916440   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.916766   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:09.919729   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.920070   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.920105   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.920238   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.920831   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.921050   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.921182   73900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:08:09.921235   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.921328   73900 ssh_runner.go:195] Run: cat /version.json
	I0930 21:08:09.921351   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.924258   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.924650   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.924695   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.924722   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.924805   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.924986   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.925170   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.925176   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.925206   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.925341   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:09.925405   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.925534   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.925698   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.925829   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:10.043500   73900 ssh_runner.go:195] Run: systemctl --version
	I0930 21:08:10.051029   73900 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:08:10.199844   73900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:08:10.206433   73900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:08:10.206519   73900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:08:10.223346   73900 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 21:08:10.223375   73900 start.go:495] detecting cgroup driver to use...
	I0930 21:08:10.223449   73900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:08:10.241056   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:08:10.257197   73900 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:08:10.257261   73900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:08:10.271847   73900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:08:10.287465   73900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:08:10.419248   73900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:08:10.583440   73900 docker.go:233] disabling docker service ...
	I0930 21:08:10.583518   73900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:08:10.599561   73900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:08:10.613321   73900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:08:10.763071   73900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:08:10.891222   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:08:10.906985   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:08:10.927838   73900 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0930 21:08:10.927911   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.940002   73900 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:08:10.940084   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.953143   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.965922   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.985782   73900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:08:11.001825   73900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:08:11.015777   73900 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:08:11.015835   73900 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:08:11.034821   73900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 21:08:11.049855   73900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:11.203755   73900 ssh_runner.go:195] Run: sudo systemctl restart crio
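The block above is the container-runtime switch-over for this profile: the containerd and docker/cri-docker units are stopped or masked, crictl is pointed at the CRI-O socket, the pause image and cgroup driver are rewritten in the CRI-O drop-in with sed, br_netfilter and IPv4 forwarding are enabled, and CRI-O is restarted. The net effect on the node can be verified roughly like this (a sketch reconstructed from the logged commands; the drop-in may contain additional keys):

    # crictl endpoint written via /etc/crictl.yaml
    cat /etc/crictl.yaml            # expect: runtime-endpoint: unix:///var/run/crio/crio.sock

    # keys rewritten (or re-added) in the CRI-O drop-in
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expect: pause_image = "registry.k8s.io/pause:3.2"
    #         cgroup_manager = "cgroupfs"
    #         conmon_cgroup = "pod"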
	I0930 21:08:11.312949   73900 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:08:11.313060   73900 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:08:11.319280   73900 start.go:563] Will wait 60s for crictl version
	I0930 21:08:11.319355   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:11.323826   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:08:11.374934   73900 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 21:08:11.375023   73900 ssh_runner.go:195] Run: crio --version
	I0930 21:08:11.415466   73900 ssh_runner.go:195] Run: crio --version
	I0930 21:08:11.449622   73900 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0930 21:08:11.450773   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:11.454019   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:11.454504   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:11.454534   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:11.454807   73900 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0930 21:08:11.459034   73900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:08:11.473162   73900 kubeadm.go:883] updating cluster {Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:08:11.473294   73900 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 21:08:11.473367   73900 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:11.518200   73900 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0930 21:08:11.518275   73900 ssh_runner.go:195] Run: which lz4
	I0930 21:08:11.522442   73900 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 21:08:11.526704   73900 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 21:08:11.526752   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0930 21:08:09.942356   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Start
	I0930 21:08:09.942591   73256 main.go:141] libmachine: (embed-certs-256103) Ensuring networks are active...
	I0930 21:08:09.943619   73256 main.go:141] libmachine: (embed-certs-256103) Ensuring network default is active
	I0930 21:08:09.944145   73256 main.go:141] libmachine: (embed-certs-256103) Ensuring network mk-embed-certs-256103 is active
	I0930 21:08:09.944659   73256 main.go:141] libmachine: (embed-certs-256103) Getting domain xml...
	I0930 21:08:09.945567   73256 main.go:141] libmachine: (embed-certs-256103) Creating domain...
	I0930 21:08:11.376075   73256 main.go:141] libmachine: (embed-certs-256103) Waiting to get IP...
	I0930 21:08:11.377049   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:11.377588   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:11.377687   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:11.377579   75193 retry.go:31] will retry after 219.057799ms: waiting for machine to come up
	I0930 21:08:11.598062   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:11.598531   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:11.598568   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:11.598491   75193 retry.go:31] will retry after 288.150233ms: waiting for machine to come up
	I0930 21:08:11.887894   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:11.888719   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:11.888749   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:11.888678   75193 retry.go:31] will retry after 422.70153ms: waiting for machine to come up
	I0930 21:08:12.313280   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:12.313761   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:12.313790   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:12.313728   75193 retry.go:31] will retry after 403.507934ms: waiting for machine to come up
	I0930 21:08:12.719305   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:12.719705   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:12.719740   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:12.719683   75193 retry.go:31] will retry after 616.261723ms: waiting for machine to come up
	I0930 21:08:13.337223   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:13.337759   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:13.337809   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:13.337727   75193 retry.go:31] will retry after 715.496762ms: waiting for machine to come up
	I0930 21:08:14.054455   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:14.055118   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:14.055155   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:14.055041   75193 retry.go:31] will retry after 1.12512788s: waiting for machine to come up
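(Editor's note: the "will retry after ..." lines above come from a poll-with-growing-backoff loop while libmachine waits for the VM's DHCP lease. A minimal, self-contained sketch of that pattern, assuming a stand-in probe callback rather than libmachine's real lease lookup:)

    // retryip.go - sketch of a jittered, growing backoff wait, as in the log above.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryUntil polls probe with a jittered, growing delay until it succeeds
    // or the overall deadline passes.
    func retryUntil(deadline, base time.Duration, probe func() (string, bool)) (string, error) {
    	start := time.Now()
    	delay := base
    	for time.Since(start) < deadline {
    		if v, ok := probe(); ok {
    			return v, nil
    		}
    		// vary the sleep, like the 219ms / 288ms / 422ms ... intervals above
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		delay = delay * 3 / 2
    	}
    	return "", errors.New("timed out waiting for an IP")
    }

    func main() {
    	// fake probe: pretend the DHCP lease shows up after ~3 seconds
    	t0 := time.Now()
    	ip, err := retryUntil(30*time.Second, 200*time.Millisecond, func() (string, bool) {
    		if time.Since(t0) > 3*time.Second {
    			return "192.168.39.90", true
    		}
    		return "", false
    	})
    	fmt.Println(ip, err)
    }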
	I0930 21:08:10.970621   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:13.468795   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:11.501276   73707 node_ready.go:53] node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:12.501748   73707 node_ready.go:49] node "default-k8s-diff-port-291511" has status "Ready":"True"
	I0930 21:08:12.501784   73707 node_ready.go:38] duration metric: took 7.005705696s for node "default-k8s-diff-port-291511" to be "Ready" ...
	I0930 21:08:12.501797   73707 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:12.510080   73707 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:12.518496   73707 pod_ready.go:93] pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:12.518522   73707 pod_ready.go:82] duration metric: took 8.414761ms for pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:12.518535   73707 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:14.526615   73707 pod_ready.go:93] pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:14.526653   73707 pod_ready.go:82] duration metric: took 2.00810944s for pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:14.526666   73707 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:14.533536   73707 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:14.533574   73707 pod_ready.go:82] duration metric: took 6.898769ms for pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:14.533596   73707 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.043003   73707 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:15.043034   73707 pod_ready.go:82] duration metric: took 509.429109ms for pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.043048   73707 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kwp22" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.049645   73707 pod_ready.go:93] pod "kube-proxy-kwp22" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:15.049676   73707 pod_ready.go:82] duration metric: took 6.618441ms for pod "kube-proxy-kwp22" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.049688   73707 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
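(Editor's note: each pod_ready.go wait above boils down to polling the pod's Ready condition until it reports "True". The sketch below does the same thing the blunt way, shelling out to kubectl with a jsonpath query instead of minikube's client-go helpers; the context and pod names are placeholders taken from this run.)

    // podready.go - sketch of the per-pod readiness wait logged above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // podReady reports whether the pod's Ready condition is "True".
    func podReady(kubecontext, ns, pod string) (bool, error) {
    	out, err := exec.Command("kubectl", "--context", kubecontext, "-n", ns,
    		"get", "pod", pod,
    		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    	if err != nil {
    		return false, err
    	}
    	return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
    	for {
    		ok, err := podReady("default-k8s-diff-port-291511", "kube-system",
    			"etcd-default-k8s-diff-port-291511")
    		if err == nil && ok {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second) // the log polls on a similar cadence
    	}
    }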
	I0930 21:08:13.134916   73900 crio.go:462] duration metric: took 1.612498859s to copy over tarball
	I0930 21:08:13.135038   73900 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 21:08:16.170053   73900 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.034985922s)
	I0930 21:08:16.170080   73900 crio.go:469] duration metric: took 3.035125251s to extract the tarball
	I0930 21:08:16.170088   73900 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 21:08:16.213559   73900 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:16.249853   73900 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0930 21:08:16.249876   73900 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0930 21:08:16.249943   73900 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:16.249970   73900 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.249987   73900 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.250030   73900 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0930 21:08:16.250031   73900 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.250047   73900 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.250049   73900 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.250083   73900 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.251750   73900 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0930 21:08:16.251771   73900 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.251768   73900 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:16.251750   73900 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.251832   73900 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.251854   73900 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.251891   73900 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.252031   73900 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.456847   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.468006   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0930 21:08:16.516253   73900 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0930 21:08:16.516294   73900 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.516336   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.524699   73900 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0930 21:08:16.524743   73900 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0930 21:08:16.524787   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.525738   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.529669   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 21:08:16.561946   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.569090   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.570589   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.571007   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.581971   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.587609   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.630323   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 21:08:16.711058   73900 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0930 21:08:16.711124   73900 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.711190   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.749473   73900 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0930 21:08:16.749521   73900 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.749585   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.769974   73900 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0930 21:08:16.770016   73900 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.770050   73900 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0930 21:08:16.770075   73900 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0930 21:08:16.770087   73900 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.770104   73900 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.770142   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.770160   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.770064   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.770144   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.788241   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.788292   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 21:08:16.788294   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.788339   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.847727   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0930 21:08:16.847798   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.847894   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.938964   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.939000   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.939053   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0930 21:08:16.939090   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.965556   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.965620   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 21:08:17.020497   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:17.074893   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:17.074950   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:17.090437   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 21:08:17.090489   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0930 21:08:17.090437   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:17.174117   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0930 21:08:17.174183   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0930 21:08:17.185553   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0930 21:08:17.185619   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0930 21:08:17.506064   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:17.650598   73900 cache_images.go:92] duration metric: took 1.400704992s to LoadCachedImages
	W0930 21:08:17.650695   73900 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory (the cached image file is missing on the build host, so the images will be pulled instead)
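(Editor's note: the cache_images.go "needs transfer" decisions above compare the image ID the runtime reports against the pinned ID, and remove the stale copy so the cached tarball can be loaded. A minimal sketch of that decision, using the same podman/crictl commands seen in the log; it is a hypothetical helper, not minikube's actual code.)

    // cacheimages.go - sketch of the "needs transfer" check logged above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // needsTransfer returns true if the runtime is missing the image or has a
    // different ID than the one we expect.
    func needsTransfer(image, wantID string) bool {
    	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
    	if err != nil {
    		return true // not present in the runtime at all -> must be transferred
    	}
    	return strings.TrimSpace(string(out)) != wantID
    }

    // removeStale deletes the mismatched image so a fresh copy can be loaded.
    func removeStale(image string) error {
    	return exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
    }

    func main() {
    	img := "registry.k8s.io/kube-proxy:v1.20.0"
    	if needsTransfer(img, "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc") {
    		fmt.Println(img, "needs transfer; removing stale copy:", removeStale(img))
    	}
    }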
	I0930 21:08:17.650710   73900 kubeadm.go:934] updating node { 192.168.72.159 8443 v1.20.0 crio true true} ...
	I0930 21:08:17.650834   73900 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-621406 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 21:08:17.650922   73900 ssh_runner.go:195] Run: crio config
	I0930 21:08:17.710096   73900 cni.go:84] Creating CNI manager for ""
	I0930 21:08:17.710124   73900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:17.710139   73900 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:08:17.710164   73900 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.159 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-621406 NodeName:old-k8s-version-621406 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0930 21:08:17.710349   73900 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-621406"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 21:08:17.710425   73900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0930 21:08:17.721028   73900 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:08:17.721111   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:08:17.731462   73900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0930 21:08:17.749715   73900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:08:15.182186   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:15.182722   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:15.182751   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:15.182673   75193 retry.go:31] will retry after 1.385891549s: waiting for machine to come up
	I0930 21:08:16.569882   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:16.570365   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:16.570386   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:16.570309   75193 retry.go:31] will retry after 1.417579481s: waiting for machine to come up
	I0930 21:08:17.989161   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:17.989876   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:17.989905   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:17.989818   75193 retry.go:31] will retry after 1.981651916s: waiting for machine to come up
	I0930 21:08:15.471221   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:17.969140   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:19.969688   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:15.300639   73707 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:15.300666   73707 pod_ready.go:82] duration metric: took 250.968899ms for pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.300679   73707 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:17.349449   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:19.809813   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:17.767565   73900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0930 21:08:17.786411   73900 ssh_runner.go:195] Run: grep 192.168.72.159	control-plane.minikube.internal$ /etc/hosts
	I0930 21:08:17.790338   73900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:08:17.803957   73900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:17.948898   73900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:08:17.969102   73900 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406 for IP: 192.168.72.159
	I0930 21:08:17.969133   73900 certs.go:194] generating shared ca certs ...
	I0930 21:08:17.969150   73900 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:17.969338   73900 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:08:17.969387   73900 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:08:17.969400   73900 certs.go:256] generating profile certs ...
	I0930 21:08:17.969543   73900 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/client.key
	I0930 21:08:17.969621   73900 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.key.f3dc5056
	I0930 21:08:17.969674   73900 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.key
	I0930 21:08:17.969833   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:08:17.969875   73900 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:08:17.969886   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:08:17.969926   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:08:17.969961   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:08:17.969999   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:08:17.970055   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:17.970794   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:08:18.007954   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:08:18.041538   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:08:18.077886   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:08:18.118644   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0930 21:08:18.151418   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 21:08:18.199572   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:08:18.235795   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 21:08:18.272729   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:08:18.298727   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:08:18.324074   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:08:18.351209   73900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:08:18.372245   73900 ssh_runner.go:195] Run: openssl version
	I0930 21:08:18.380047   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:08:18.395332   73900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:08:18.401407   73900 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:08:18.401479   73900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:08:18.407744   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 21:08:18.422801   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:08:18.437946   73900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:08:18.443864   73900 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:08:18.443938   73900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:08:18.451554   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:08:18.466856   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:08:18.479324   73900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:18.484321   73900 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:18.484383   73900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:18.490341   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 21:08:18.503117   73900 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:08:18.507986   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:08:18.514974   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:08:18.522140   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:08:18.529366   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:08:18.536056   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:08:18.542787   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
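(Editor's note: the `openssl x509 -checkend 86400` probes above verify that each control-plane certificate remains valid for at least another 24 hours. A pure-Go equivalent sketch using crypto/x509; the certificate path is a placeholder, not a value from this run.)

    // certexpiry.go - sketch of the 24h certificate-validity check logged above.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the PEM certificate at path is still valid d from now.
    func validFor(path string, d time.Duration) (bool, error) {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }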
	I0930 21:08:18.550311   73900 kubeadm.go:392] StartCluster: {Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:08:18.550431   73900 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:08:18.550498   73900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:18.593041   73900 cri.go:89] found id: ""
	I0930 21:08:18.593116   73900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:08:18.603410   73900 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:08:18.603432   73900 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:08:18.603479   73900 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:08:18.614635   73900 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:08:18.615758   73900 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-621406" does not appear in /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:08:18.616488   73900 kubeconfig.go:62] /home/jenkins/minikube-integration/19736-7672/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-621406" cluster setting kubeconfig missing "old-k8s-version-621406" context setting]
	I0930 21:08:18.617394   73900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:18.644144   73900 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:08:18.655764   73900 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.159
	I0930 21:08:18.655806   73900 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:08:18.655819   73900 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:08:18.655877   73900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:18.699283   73900 cri.go:89] found id: ""
	I0930 21:08:18.699376   73900 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:08:18.715248   73900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:08:18.724905   73900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:08:18.724945   73900 kubeadm.go:157] found existing configuration files:
	
	I0930 21:08:18.724990   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:08:18.735611   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:08:18.735682   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:08:18.745604   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:08:18.755199   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:08:18.755261   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:08:18.765450   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:08:18.775187   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:08:18.775268   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:08:18.788080   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:08:18.800668   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:08:18.800727   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:08:18.814084   73900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:08:18.823785   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:18.961698   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.495418   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.713653   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.812667   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.921314   73900 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:08:19.921414   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:20.422349   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:20.922222   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:21.422364   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:21.921493   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:22.421640   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
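(Editor's note: after the kubeadm init phases, api_server.go waits for the apiserver process by re-running the same pgrep every 500ms, which is what the repeated lines above show. A minimal sketch of that loop; the timeout value is an assumption, not taken from this run.)

    // apiserverwait.go - sketch of the "waiting for apiserver process" poll above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverPID returns the pgrep output and whether a matching process exists.
    func apiserverPID() (string, bool) {
    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		return "", false // pgrep exits non-zero when nothing matches
    	}
    	return string(out), true
    }

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		if pid, ok := apiserverPID(); ok {
    			fmt.Println("kube-apiserver up, pid:", pid)
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }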
	I0930 21:08:19.973478   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:19.973916   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:19.973946   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:19.973868   75193 retry.go:31] will retry after 2.33355272s: waiting for machine to come up
	I0930 21:08:22.308828   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:22.309471   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:22.309498   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:22.309367   75193 retry.go:31] will retry after 3.484225075s: waiting for machine to come up
	I0930 21:08:21.970954   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:24.467778   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:22.310464   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:24.806425   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:22.922418   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:23.421851   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:23.921502   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:24.422346   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:24.922000   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:25.422290   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:25.922213   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:26.422100   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:26.922239   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:27.421729   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:25.795265   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:25.795755   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:25.795781   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:25.795707   75193 retry.go:31] will retry after 2.983975719s: waiting for machine to come up
	I0930 21:08:28.780767   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.781201   73256 main.go:141] libmachine: (embed-certs-256103) Found IP for machine: 192.168.39.90
	I0930 21:08:28.781223   73256 main.go:141] libmachine: (embed-certs-256103) Reserving static IP address...
	I0930 21:08:28.781237   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has current primary IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.781655   73256 main.go:141] libmachine: (embed-certs-256103) Reserved static IP address: 192.168.39.90
	I0930 21:08:28.781679   73256 main.go:141] libmachine: (embed-certs-256103) Waiting for SSH to be available...
	I0930 21:08:28.781697   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "embed-certs-256103", mac: "52:54:00:7a:01:01", ip: "192.168.39.90"} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:28.781724   73256 main.go:141] libmachine: (embed-certs-256103) DBG | skip adding static IP to network mk-embed-certs-256103 - found existing host DHCP lease matching {name: "embed-certs-256103", mac: "52:54:00:7a:01:01", ip: "192.168.39.90"}
	I0930 21:08:28.781735   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Getting to WaitForSSH function...
	I0930 21:08:28.784310   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.784703   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:28.784737   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.784861   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Using SSH client type: external
	I0930 21:08:28.784899   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa (-rw-------)
	I0930 21:08:28.784933   73256 main.go:141] libmachine: (embed-certs-256103) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:08:28.784953   73256 main.go:141] libmachine: (embed-certs-256103) DBG | About to run SSH command:
	I0930 21:08:28.784970   73256 main.go:141] libmachine: (embed-certs-256103) DBG | exit 0
	I0930 21:08:28.911300   73256 main.go:141] libmachine: (embed-certs-256103) DBG | SSH cmd err, output: <nil>: 
	I0930 21:08:28.911716   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetConfigRaw
	I0930 21:08:28.912335   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetIP
	I0930 21:08:28.914861   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.915283   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:28.915304   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.915620   73256 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/config.json ...
	I0930 21:08:28.915874   73256 machine.go:93] provisionDockerMachine start ...
	I0930 21:08:28.915902   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:28.916117   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:28.918357   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.918661   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:28.918696   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.918813   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:28.918992   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:28.919143   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:28.919296   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:28.919472   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:28.919680   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:28.919691   73256 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:08:29.032537   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:08:29.032579   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:08:29.032830   73256 buildroot.go:166] provisioning hostname "embed-certs-256103"
	I0930 21:08:29.032857   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:08:29.033039   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.035951   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.036403   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.036435   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.036598   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:29.036795   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.037002   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.037175   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:29.037339   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:29.037538   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:29.037556   73256 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-256103 && echo "embed-certs-256103" | sudo tee /etc/hostname
	I0930 21:08:29.163250   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-256103
	
	I0930 21:08:29.163278   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.165937   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.166260   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.166296   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.166529   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:29.166722   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.166913   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.167055   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:29.167223   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:29.167454   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:29.167477   73256 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-256103' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-256103/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-256103' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:08:29.288197   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:08:29.288236   73256 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:08:29.288292   73256 buildroot.go:174] setting up certificates
	I0930 21:08:29.288307   73256 provision.go:84] configureAuth start
	I0930 21:08:29.288322   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:08:29.288589   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetIP
	I0930 21:08:29.291598   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.292026   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.292059   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.292247   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.294760   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.295144   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.295169   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.295421   73256 provision.go:143] copyHostCerts
	I0930 21:08:29.295497   73256 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:08:29.295510   73256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:08:29.295614   73256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:08:29.295743   73256 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:08:29.295754   73256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:08:29.295782   73256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:08:29.295855   73256 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:08:29.295864   73256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:08:29.295886   73256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:08:29.295948   73256 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.embed-certs-256103 san=[127.0.0.1 192.168.39.90 embed-certs-256103 localhost minikube]
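	The provision.go step above generates a server certificate signed by the minikube CA, with the SANs listed in that log line (127.0.0.1, 192.168.39.90, embed-certs-256103, localhost, minikube). Below is a minimal, hypothetical Go sketch of such a step using only the standard crypto/x509 package; it is not minikube's actual provisioning code, and it assumes the CA key is a PKCS#1-encoded RSA key and hard-codes the paths shown in the log.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA certificate and key (paths taken from the log above).
	caCertPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caCertPEM)
	if caBlock == nil {
		log.Fatal("no PEM block in ca.pem")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	if keyBlock == nil {
		log.Fatal("no PEM block in ca-key.pem")
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumption: PKCS#1 RSA key
	if err != nil {
		log.Fatal(err)
	}

	// Fresh key pair for the server certificate.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-256103"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the provision.go line above.
		DNSNames:    []string{"embed-certs-256103", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.90")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	// Emit the signed server certificate as PEM (server.pem in the log).
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}
```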
	I0930 21:08:26.468058   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:28.468510   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:26.808360   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:29.307500   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:29.742069   73256 provision.go:177] copyRemoteCerts
	I0930 21:08:29.742134   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:08:29.742156   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.745411   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.745805   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.745835   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.746023   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:29.746215   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.746351   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:29.746557   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:08:29.833888   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:08:29.857756   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0930 21:08:29.883087   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 21:08:29.905795   73256 provision.go:87] duration metric: took 617.470984ms to configureAuth
	I0930 21:08:29.905831   73256 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:08:29.906028   73256 config.go:182] Loaded profile config "embed-certs-256103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:08:29.906098   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.908911   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.909307   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.909335   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.909524   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:29.909711   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.909876   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.909996   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:29.910157   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:29.910429   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:29.910454   73256 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:08:30.140191   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:08:30.140217   73256 machine.go:96] duration metric: took 1.224326296s to provisionDockerMachine
	I0930 21:08:30.140227   73256 start.go:293] postStartSetup for "embed-certs-256103" (driver="kvm2")
	I0930 21:08:30.140237   73256 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:08:30.140252   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.140624   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:08:30.140648   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:30.143906   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.144300   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.144339   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.144498   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:30.144695   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.144846   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:30.145052   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:08:30.230069   73256 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:08:30.233845   73256 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:08:30.233868   73256 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:08:30.233948   73256 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:08:30.234050   73256 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:08:30.234168   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:08:30.243066   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:30.266197   73256 start.go:296] duration metric: took 125.955153ms for postStartSetup
	I0930 21:08:30.266234   73256 fix.go:56] duration metric: took 20.349643145s for fixHost
	I0930 21:08:30.266252   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:30.269025   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.269405   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.269433   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.269576   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:30.269784   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.269910   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.270042   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:30.270176   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:30.270380   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:30.270392   73256 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:08:30.380023   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730510.354607586
	
	I0930 21:08:30.380057   73256 fix.go:216] guest clock: 1727730510.354607586
	I0930 21:08:30.380067   73256 fix.go:229] Guest: 2024-09-30 21:08:30.354607586 +0000 UTC Remote: 2024-09-30 21:08:30.266237543 +0000 UTC m=+355.815232104 (delta=88.370043ms)
	I0930 21:08:30.380085   73256 fix.go:200] guest clock delta is within tolerance: 88.370043ms
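	For reference, fix.go's clock comparison above boils down to parsing the guest's `date +%s.%N` output and diffing it against the host-side timestamp. A small self-contained Go sketch follows; the 1s tolerance is an assumption for illustration, not necessarily minikube's actual setting.

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestClockDelta parses `date +%s.%N` output (seconds.nanoseconds) and returns
// the absolute difference from a reference time.
func guestClockDelta(dateOutput string, ref time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(dateOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	d := guest.Sub(ref)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	// Values from the log lines above; reproduces the ~88ms delta (modulo float rounding).
	ref := time.Date(2024, 9, 30, 21, 8, 30, 266237543, time.UTC)
	delta, err := guestClockDelta("1727730510.354607586", ref)
	if err != nil {
		panic(err)
	}
	fmt.Println(delta, "within 1s tolerance:", delta < time.Second)
}
```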
	I0930 21:08:30.380091   73256 start.go:83] releasing machines lock for "embed-certs-256103", held for 20.463544222s
	I0930 21:08:30.380113   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.380429   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetIP
	I0930 21:08:30.382992   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.383349   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.383369   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.383518   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.384071   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.384245   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.384310   73256 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:08:30.384374   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:30.384442   73256 ssh_runner.go:195] Run: cat /version.json
	I0930 21:08:30.384464   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:30.387098   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.387342   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.387413   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.387435   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.387633   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:30.387762   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.387783   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.387828   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.387931   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:30.388003   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:30.388058   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.388159   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:30.388208   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:08:30.388347   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:08:30.510981   73256 ssh_runner.go:195] Run: systemctl --version
	I0930 21:08:30.517215   73256 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:08:30.663491   73256 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:08:30.669568   73256 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:08:30.669652   73256 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:08:30.686640   73256 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 21:08:30.686663   73256 start.go:495] detecting cgroup driver to use...
	I0930 21:08:30.686737   73256 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:08:30.703718   73256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:08:30.718743   73256 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:08:30.718807   73256 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:08:30.733695   73256 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:08:30.748690   73256 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:08:30.878084   73256 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:08:31.040955   73256 docker.go:233] disabling docker service ...
	I0930 21:08:31.041030   73256 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:08:31.055212   73256 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:08:31.067968   73256 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:08:31.185043   73256 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:08:31.300909   73256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:08:31.315167   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:08:31.333483   73256 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 21:08:31.333537   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.343599   73256 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:08:31.343694   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.353739   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.363993   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.375183   73256 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:08:31.385478   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.395632   73256 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.412995   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.423277   73256 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:08:31.433183   73256 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:08:31.433253   73256 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:08:31.446796   73256 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
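	The three commands above form a fallback: if the bridge-netfilter sysctl cannot be read (the module is not loaded yet), load br_netfilter and then enable IPv4 forwarding. A rough Go sketch of that sequence using os/exec, for illustration only; minikube actually runs these commands over SSH.

```go
package main

import (
	"log"
	"os/exec"
)

// ensureNetfilter mirrors the fallback above: probe the sysctl, load the
// module if the probe fails, then turn on IPv4 forwarding.
func ensureNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// /proc/sys/net/bridge only exists once br_netfilter is loaded.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return err
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureNetfilter(); err != nil {
		log.Fatal(err)
	}
}
```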
	I0930 21:08:31.456912   73256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:31.571729   73256 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 21:08:31.663944   73256 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:08:31.664019   73256 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:08:31.669128   73256 start.go:563] Will wait 60s for crictl version
	I0930 21:08:31.669191   73256 ssh_runner.go:195] Run: which crictl
	I0930 21:08:31.672922   73256 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:08:31.709488   73256 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 21:08:31.709596   73256 ssh_runner.go:195] Run: crio --version
	I0930 21:08:31.738743   73256 ssh_runner.go:195] Run: crio --version
	I0930 21:08:31.771638   73256 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 21:08:27.922374   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:28.421993   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:28.921870   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:29.421786   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:29.921804   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:30.421482   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:30.921969   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:31.422241   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:31.922148   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:32.421504   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:31.773186   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetIP
	I0930 21:08:31.776392   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:31.776770   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:31.776810   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:31.777016   73256 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 21:08:31.781212   73256 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:08:31.793839   73256 kubeadm.go:883] updating cluster {Name:embed-certs-256103 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-256103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:08:31.793957   73256 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 21:08:31.794015   73256 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:31.834036   73256 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
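	The crio.go line above decides whether the preload tarball is needed by listing images through `crictl images --output json` and looking for the expected kube-apiserver tag. A hedged Go sketch of that check follows; the JSON field names (images, repoTags) are assumptions about crictl's output format, not verified here.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// crictlImageList models only the fields this sketch needs from
// `crictl images --output json` (field names are an assumption).
type crictlImageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list crictlImageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("images preloaded:", ok)
}
```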
	I0930 21:08:31.834094   73256 ssh_runner.go:195] Run: which lz4
	I0930 21:08:31.837877   73256 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 21:08:31.842038   73256 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 21:08:31.842073   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 21:08:33.150975   73256 crio.go:462] duration metric: took 1.313131374s to copy over tarball
	I0930 21:08:33.151080   73256 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 21:08:30.469523   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:32.469562   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:34.969818   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:31.307560   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:33.308130   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:32.921516   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:33.421576   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:33.922082   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:34.421599   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:34.922178   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:35.422199   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:35.922061   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:36.421860   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:36.921513   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:37.422162   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:35.294750   73256 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.143629494s)
	I0930 21:08:35.294785   73256 crio.go:469] duration metric: took 2.143777794s to extract the tarball
	I0930 21:08:35.294794   73256 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 21:08:35.340151   73256 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:35.385329   73256 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 21:08:35.385359   73256 cache_images.go:84] Images are preloaded, skipping loading
	I0930 21:08:35.385366   73256 kubeadm.go:934] updating node { 192.168.39.90 8443 v1.31.1 crio true true} ...
	I0930 21:08:35.385463   73256 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-256103 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-256103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 21:08:35.385536   73256 ssh_runner.go:195] Run: crio config
	I0930 21:08:35.433043   73256 cni.go:84] Creating CNI manager for ""
	I0930 21:08:35.433072   73256 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:35.433084   73256 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:08:35.433113   73256 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-256103 NodeName:embed-certs-256103 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 21:08:35.433277   73256 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-256103"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 21:08:35.433348   73256 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 21:08:35.443627   73256 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:08:35.443713   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:08:35.453095   73256 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0930 21:08:35.469517   73256 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:08:35.486869   73256 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
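	The kubeadm config printed a few lines above is generated from the kubeadm options struct and is copied here to /var/tmp/minikube/kubeadm.yaml.new. As a rough illustration of that generation step, here is a tiny text/template sketch that renders only a handful of the fields; the template and struct are hypothetical simplifications, not minikube's real template.

```go
package main

import (
	"log"
	"os"
	"text/template"
)

// A deliberately tiny slice of the ClusterConfiguration shown above.
const clusterCfgTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	data := struct {
		ControlPlaneEndpoint, KubernetesVersion, DNSDomain, PodSubnet, ServiceSubnet string
		APIServerPort                                                                int
	}{
		ControlPlaneEndpoint: "control-plane.minikube.internal",
		KubernetesVersion:    "v1.31.1",
		DNSDomain:            "cluster.local",
		PodSubnet:            "10.244.0.0/16",
		ServiceSubnet:        "10.96.0.0/12",
		APIServerPort:        8443,
	}
	tmpl := template.Must(template.New("kubeadm").Parse(clusterCfgTmpl))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		log.Fatal(err)
	}
}
```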
	I0930 21:08:35.504871   73256 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I0930 21:08:35.508507   73256 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:08:35.521994   73256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:35.641971   73256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:08:35.657660   73256 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103 for IP: 192.168.39.90
	I0930 21:08:35.657686   73256 certs.go:194] generating shared ca certs ...
	I0930 21:08:35.657705   73256 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:35.657878   73256 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:08:35.657941   73256 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:08:35.657954   73256 certs.go:256] generating profile certs ...
	I0930 21:08:35.658095   73256 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/client.key
	I0930 21:08:35.658177   73256 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/apiserver.key.52e83f0c
	I0930 21:08:35.658230   73256 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/proxy-client.key
	I0930 21:08:35.658391   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:08:35.658431   73256 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:08:35.658443   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:08:35.658476   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:08:35.658509   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:08:35.658539   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:08:35.658586   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:35.659279   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:08:35.695254   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:08:35.718948   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:08:35.742442   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:08:35.765859   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0930 21:08:35.792019   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 21:08:35.822081   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:08:35.845840   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 21:08:35.871635   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:08:35.896069   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:08:35.921595   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:08:35.946620   73256 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:08:35.963340   73256 ssh_runner.go:195] Run: openssl version
	I0930 21:08:35.970540   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:08:35.982269   73256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:08:35.987494   73256 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:08:35.987646   73256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:08:35.994312   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:08:36.006173   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:08:36.017605   73256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:36.022126   73256 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:36.022190   73256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:36.027806   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 21:08:36.038388   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:08:36.048818   73256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:08:36.053230   73256 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:08:36.053296   73256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:08:36.058713   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 21:08:36.070806   73256 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:08:36.075521   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:08:36.081310   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:08:36.086935   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:08:36.092990   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:08:36.098783   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:08:36.104354   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
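	The run of openssl invocations above is an expiry check: `-checkend 86400` asks whether each control-plane certificate expires within the next 24 hours. An equivalent check can be sketched with Go's crypto/x509 instead of shelling out; this is illustrative only, not minikube's code.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within the
// given window (24h mirrors `openssl x509 -checkend 86400`).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```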
	I0930 21:08:36.110289   73256 kubeadm.go:392] StartCluster: {Name:embed-certs-256103 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-256103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:08:36.110411   73256 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:08:36.110495   73256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:36.153770   73256 cri.go:89] found id: ""
	I0930 21:08:36.153852   73256 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:08:36.164301   73256 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:08:36.164320   73256 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:08:36.164363   73256 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:08:36.173860   73256 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:08:36.174950   73256 kubeconfig.go:125] found "embed-certs-256103" server: "https://192.168.39.90:8443"
	I0930 21:08:36.177584   73256 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:08:36.186946   73256 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.90
	I0930 21:08:36.186984   73256 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:08:36.186998   73256 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:08:36.187045   73256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:36.223259   73256 cri.go:89] found id: ""
	I0930 21:08:36.223328   73256 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:08:36.239321   73256 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:08:36.248508   73256 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:08:36.248528   73256 kubeadm.go:157] found existing configuration files:
	
	I0930 21:08:36.248571   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:08:36.257483   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:08:36.257537   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:08:36.266792   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:08:36.275626   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:08:36.275697   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:08:36.285000   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:08:36.293923   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:08:36.293977   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:08:36.303990   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:08:36.313104   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:08:36.313158   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:08:36.322423   73256 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:08:36.332005   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:36.457666   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:37.309316   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:37.533114   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:37.602999   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:37.692027   73256 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:08:37.692117   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.192813   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.692777   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.192862   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:37.469941   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:39.506753   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:35.311295   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:37.806923   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:39.808338   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:37.921497   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.422360   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.922305   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.422480   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.922279   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.422089   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.922021   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:41.421727   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:41.921519   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:42.422193   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.692193   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.192178   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.209649   73256 api_server.go:72] duration metric: took 2.517618424s to wait for apiserver process to appear ...
	I0930 21:08:40.209676   73256 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:08:40.209699   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:43.034828   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:08:43.034857   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:08:43.034871   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:43.080073   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:08:43.080107   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:08:43.210448   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:43.217768   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:43.217799   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:43.710066   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:43.722379   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:43.722428   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:44.209939   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:44.219468   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:44.219500   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:44.709767   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:44.714130   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 200:
	ok
	I0930 21:08:44.720194   73256 api_server.go:141] control plane version: v1.31.1
	I0930 21:08:44.720221   73256 api_server.go:131] duration metric: took 4.510539442s to wait for apiserver health ...
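(Editor's note on the healthz progression above: 403 while anonymous access is still forbidden, then 500 while the rbac/bootstrap-roles post-start hook is pending, then 200 once bootstrap completes is the normal startup sequence for a restarted apiserver. A hypothetical Go poll loop in the same spirit, not minikube's implementation; TLS verification is skipped here on the assumption that the apiserver serving cert is self-signed, and the URL is the one from the log.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. Non-200 codes such as 403 and 500 mean "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The serving certificate is not in the local trust store in this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.90:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}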
	I0930 21:08:44.720230   73256 cni.go:84] Creating CNI manager for ""
	I0930 21:08:44.720236   73256 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:44.721740   73256 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 21:08:41.968377   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:44.469477   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:41.808473   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:43.808575   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:42.922495   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:43.422250   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:43.922413   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:44.421962   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:44.921682   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:45.422144   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:45.922206   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:46.422020   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:46.921960   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:47.422296   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:44.722947   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:08:44.733426   73256 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 21:08:44.750426   73256 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:08:44.761259   73256 system_pods.go:59] 8 kube-system pods found
	I0930 21:08:44.761303   73256 system_pods.go:61] "coredns-7c65d6cfc9-h6cl2" [548e3751-edc9-4232-87c2-2e64769ba332] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:08:44.761314   73256 system_pods.go:61] "etcd-embed-certs-256103" [6eef2e96-d4bf-4dd6-bd5c-bfb05c306182] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0930 21:08:44.761326   73256 system_pods.go:61] "kube-apiserver-embed-certs-256103" [81c02a52-aca7-4b9c-b7b1-680d27f48d40] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0930 21:08:44.761335   73256 system_pods.go:61] "kube-controller-manager-embed-certs-256103" [752f0966-7718-4523-8ba6-affd41bc956e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0930 21:08:44.761346   73256 system_pods.go:61] "kube-proxy-fqvg2" [284a63a1-d624-4bf3-8509-14ff0845f3a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0930 21:08:44.761354   73256 system_pods.go:61] "kube-scheduler-embed-certs-256103" [6158a51d-82ae-490a-96d3-c0e61a3485f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0930 21:08:44.761363   73256 system_pods.go:61] "metrics-server-6867b74b74-hkp9m" [8774a772-bb72-4419-96fd-50ca5f48a5b6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:08:44.761374   73256 system_pods.go:61] "storage-provisioner" [9649e71d-cd21-4846-bf66-1c5b469500ba] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0930 21:08:44.761385   73256 system_pods.go:74] duration metric: took 10.935916ms to wait for pod list to return data ...
	I0930 21:08:44.761397   73256 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:08:44.771745   73256 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:08:44.771777   73256 node_conditions.go:123] node cpu capacity is 2
	I0930 21:08:44.771789   73256 node_conditions.go:105] duration metric: took 10.386814ms to run NodePressure ...
	I0930 21:08:44.771810   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:45.064019   73256 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0930 21:08:45.070479   73256 kubeadm.go:739] kubelet initialised
	I0930 21:08:45.070508   73256 kubeadm.go:740] duration metric: took 6.461143ms waiting for restarted kubelet to initialise ...
	I0930 21:08:45.070517   73256 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:45.074627   73256 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-h6cl2" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.080873   73256 pod_ready.go:98] node "embed-certs-256103" hosting pod "coredns-7c65d6cfc9-h6cl2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.080897   73256 pod_ready.go:82] duration metric: took 6.244301ms for pod "coredns-7c65d6cfc9-h6cl2" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:45.080906   73256 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-256103" hosting pod "coredns-7c65d6cfc9-h6cl2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.080912   73256 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.086787   73256 pod_ready.go:98] node "embed-certs-256103" hosting pod "etcd-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.086818   73256 pod_ready.go:82] duration metric: took 5.898265ms for pod "etcd-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:45.086829   73256 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-256103" hosting pod "etcd-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.086837   73256 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.092860   73256 pod_ready.go:98] node "embed-certs-256103" hosting pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.092892   73256 pod_ready.go:82] duration metric: took 6.044766ms for pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:45.092904   73256 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-256103" hosting pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.092912   73256 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.154246   73256 pod_ready.go:98] node "embed-certs-256103" hosting pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.154271   73256 pod_ready.go:82] duration metric: took 61.348653ms for pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:45.154281   73256 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-256103" hosting pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.154287   73256 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fqvg2" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.554606   73256 pod_ready.go:93] pod "kube-proxy-fqvg2" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:45.554630   73256 pod_ready.go:82] duration metric: took 400.335084ms for pod "kube-proxy-fqvg2" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.554639   73256 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
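(Editor's note on the pod_ready waits above: a pod counts as "Ready" only when its Ready condition is True, and the waits are skipped with an error while the hosting node itself is not Ready. A rough client-go sketch of one such wait; the pod name comes from the log, while the kubeconfig path and the use of client-go are assumptions of this sketch, not how minikube does it internally.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod has condition Ready=True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-scheduler-embed-certs-256103", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}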
	I0930 21:08:47.559998   73256 pod_ready.go:103] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:46.968101   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:48.968649   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:46.307946   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:48.806624   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:47.921903   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:48.422535   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:48.921484   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:49.421909   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:49.922117   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:50.421606   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:50.921728   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:51.421600   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:51.921716   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:52.421873   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:49.561176   73256 pod_ready.go:103] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:51.562227   73256 pod_ready.go:103] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:54.060692   73256 pod_ready.go:103] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:51.467375   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:53.473247   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:50.807821   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:53.307163   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:52.922106   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:53.421968   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:53.921496   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:54.421866   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:54.921995   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:55.421476   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:55.922106   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:56.421660   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:56.922489   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:57.422291   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:54.562740   73256 pod_ready.go:93] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:54.562765   73256 pod_ready.go:82] duration metric: took 9.008120147s for pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:54.562775   73256 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:56.570517   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:59.070065   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:55.969724   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:58.467585   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:55.807669   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:58.305837   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:57.921737   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:58.421968   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:58.922007   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:59.422173   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:59.921803   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:00.421596   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:00.922123   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:01.422186   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:01.921898   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:02.421894   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:01.070940   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:03.569053   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:00.469160   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:02.968692   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:00.308195   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:02.807474   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:04.808710   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:02.922329   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:03.421922   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:03.922360   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:04.421875   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:04.922544   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:05.421939   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:05.921693   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:06.422056   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:06.921627   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:07.422125   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:06.070166   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:08.568945   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:05.467300   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:07.469409   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:09.968053   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:07.306237   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:09.306644   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:07.921687   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:08.421694   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:08.922234   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:09.421817   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:09.921704   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:10.422030   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:10.921597   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:11.421700   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:11.922301   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:12.421567   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:10.569444   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:13.069582   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:11.970180   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:14.469440   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:11.307287   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:13.307376   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:12.922171   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:13.422423   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:13.921941   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:14.422494   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:14.922454   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:15.421776   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:15.922567   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:16.421713   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:16.922449   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:17.421644   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:15.569398   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:18.069177   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:16.968663   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:19.468171   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:15.808689   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:18.307774   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:17.922098   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:18.421993   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:18.922084   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:19.421717   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:19.922095   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:19.922178   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:19.962975   73900 cri.go:89] found id: ""
	I0930 21:09:19.963002   73900 logs.go:276] 0 containers: []
	W0930 21:09:19.963014   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:19.963020   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:19.963073   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:19.999741   73900 cri.go:89] found id: ""
	I0930 21:09:19.999769   73900 logs.go:276] 0 containers: []
	W0930 21:09:19.999777   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:19.999782   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:19.999840   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:20.035818   73900 cri.go:89] found id: ""
	I0930 21:09:20.035844   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.035856   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:20.035863   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:20.035924   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:20.072005   73900 cri.go:89] found id: ""
	I0930 21:09:20.072032   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.072042   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:20.072048   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:20.072110   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:20.108229   73900 cri.go:89] found id: ""
	I0930 21:09:20.108258   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.108314   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:20.108325   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:20.108383   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:20.141331   73900 cri.go:89] found id: ""
	I0930 21:09:20.141388   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.141398   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:20.141406   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:20.141466   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:20.175133   73900 cri.go:89] found id: ""
	I0930 21:09:20.175161   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.175169   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:20.175175   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:20.175223   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:20.210529   73900 cri.go:89] found id: ""
	I0930 21:09:20.210566   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.210578   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:20.210594   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:20.210608   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:20.261055   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:20.261095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:20.274212   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:20.274239   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:20.406215   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:20.406246   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:20.406282   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:20.481758   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:20.481794   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
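(Editor's note on the block above: because pgrep never finds a kube-apiserver process for this v1.20.0 cluster, minikube lists CRI containers per control-plane component and, finding none, falls back to gathering kubelet, dmesg, and CRI-O logs; `kubectl describe nodes` fails since nothing answers on localhost:8443. A rough Go sketch of that component scan, run locally with exec.Command as a stand-in for the SSH runner; the component list mirrors the log above.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	missing := 0
	for _, name := range components {
		// List all containers (any state) whose name matches the component.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			missing++
			continue
		}
		fmt.Printf("%s: %d container(s)\n", name, len(ids))
	}
	if missing == len(components) {
		// Nothing is running; fall back to host-level diagnostics, as in the report above.
		fmt.Println("collect: journalctl -u kubelet, dmesg, journalctl -u crio, crictl ps -a")
	}
}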
	I0930 21:09:20.069672   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:22.569421   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:21.468616   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:23.468820   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:20.309317   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:22.807149   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:24.807293   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:23.019687   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:23.033394   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:23.033450   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:23.078558   73900 cri.go:89] found id: ""
	I0930 21:09:23.078592   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.078604   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:23.078611   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:23.078673   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:23.117833   73900 cri.go:89] found id: ""
	I0930 21:09:23.117860   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.117868   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:23.117875   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:23.117931   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:23.157299   73900 cri.go:89] found id: ""
	I0930 21:09:23.157337   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.157359   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:23.157367   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:23.157438   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:23.196545   73900 cri.go:89] found id: ""
	I0930 21:09:23.196570   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.196579   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:23.196586   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:23.196644   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:23.229359   73900 cri.go:89] found id: ""
	I0930 21:09:23.229390   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.229401   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:23.229409   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:23.229471   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:23.264847   73900 cri.go:89] found id: ""
	I0930 21:09:23.264881   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.264893   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:23.264900   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:23.264962   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:23.298657   73900 cri.go:89] found id: ""
	I0930 21:09:23.298687   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.298695   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:23.298701   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:23.298750   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:23.333787   73900 cri.go:89] found id: ""
	I0930 21:09:23.333816   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.333826   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:23.333836   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:23.333851   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:23.386311   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:23.386347   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:23.400096   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:23.400129   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:23.481724   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:23.481748   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:23.481780   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:23.561080   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:23.561119   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:26.122460   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:26.136409   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:26.136495   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:26.170785   73900 cri.go:89] found id: ""
	I0930 21:09:26.170818   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.170832   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:26.170866   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:26.170945   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:26.205211   73900 cri.go:89] found id: ""
	I0930 21:09:26.205265   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.205275   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:26.205281   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:26.205335   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:26.239242   73900 cri.go:89] found id: ""
	I0930 21:09:26.239276   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.239285   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:26.239291   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:26.239337   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:26.272908   73900 cri.go:89] found id: ""
	I0930 21:09:26.272932   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.272940   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:26.272946   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:26.272993   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:26.311599   73900 cri.go:89] found id: ""
	I0930 21:09:26.311625   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.311632   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:26.311639   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:26.311684   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:26.345719   73900 cri.go:89] found id: ""
	I0930 21:09:26.345746   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.345754   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:26.345760   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:26.345816   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:26.383513   73900 cri.go:89] found id: ""
	I0930 21:09:26.383562   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.383572   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:26.383578   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:26.383637   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:26.418533   73900 cri.go:89] found id: ""
	I0930 21:09:26.418565   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.418574   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:26.418584   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:26.418594   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:26.456635   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:26.456660   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:26.507639   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:26.507686   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:26.521069   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:26.521095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:26.594745   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:26.594768   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:26.594781   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:24.569626   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:26.570133   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:29.069071   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:25.968851   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:27.974091   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:26.808336   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:29.308328   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:29.180142   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:29.194730   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:29.194785   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:29.234054   73900 cri.go:89] found id: ""
	I0930 21:09:29.234094   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.234103   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:29.234109   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:29.234156   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:29.280869   73900 cri.go:89] found id: ""
	I0930 21:09:29.280896   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.280907   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:29.280914   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:29.280988   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:29.348376   73900 cri.go:89] found id: ""
	I0930 21:09:29.348406   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.348417   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:29.348424   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:29.348491   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:29.404218   73900 cri.go:89] found id: ""
	I0930 21:09:29.404251   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.404261   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:29.404268   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:29.404344   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:29.449029   73900 cri.go:89] found id: ""
	I0930 21:09:29.449053   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.449061   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:29.449066   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:29.449127   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:29.484917   73900 cri.go:89] found id: ""
	I0930 21:09:29.484939   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.484948   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:29.484954   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:29.485002   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:29.517150   73900 cri.go:89] found id: ""
	I0930 21:09:29.517177   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.517185   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:29.517191   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:29.517259   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:29.550410   73900 cri.go:89] found id: ""
	I0930 21:09:29.550443   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.550452   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:29.550461   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:29.550472   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:29.601757   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:29.601803   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:29.616266   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:29.616299   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:29.686206   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:29.686228   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:29.686240   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:29.761765   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:29.761810   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:32.299199   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:32.315047   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:32.315125   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:32.349784   73900 cri.go:89] found id: ""
	I0930 21:09:32.349810   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.349819   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:32.349824   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:32.349871   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:32.385887   73900 cri.go:89] found id: ""
	I0930 21:09:32.385916   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.385927   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:32.385935   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:32.385994   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:32.421746   73900 cri.go:89] found id: ""
	I0930 21:09:32.421776   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.421789   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:32.421796   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:32.421856   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:32.459361   73900 cri.go:89] found id: ""
	I0930 21:09:32.459391   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.459404   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:32.459411   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:32.459470   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:32.495919   73900 cri.go:89] found id: ""
	I0930 21:09:32.495947   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.495960   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:32.495966   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:32.496025   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:32.533626   73900 cri.go:89] found id: ""
	I0930 21:09:32.533652   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.533663   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:32.533670   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:32.533729   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:32.567577   73900 cri.go:89] found id: ""
	I0930 21:09:32.567610   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.567623   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:32.567630   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:32.567687   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:32.604949   73900 cri.go:89] found id: ""
	I0930 21:09:32.604981   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.604991   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:32.605001   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:32.605014   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:32.656781   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:32.656822   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:32.670116   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:32.670144   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:32.736712   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:32.736736   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:32.736751   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:31.070228   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:33.569488   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:30.469162   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:32.469874   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:34.967596   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:31.807682   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:33.807723   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:32.813502   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:32.813556   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:35.354372   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:35.369226   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:35.369303   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:35.408374   73900 cri.go:89] found id: ""
	I0930 21:09:35.408402   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.408414   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:35.408421   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:35.408481   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:35.442390   73900 cri.go:89] found id: ""
	I0930 21:09:35.442432   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.442440   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:35.442445   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:35.442524   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:35.479624   73900 cri.go:89] found id: ""
	I0930 21:09:35.479651   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.479659   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:35.479664   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:35.479711   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:35.518580   73900 cri.go:89] found id: ""
	I0930 21:09:35.518609   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.518617   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:35.518623   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:35.518675   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:35.553547   73900 cri.go:89] found id: ""
	I0930 21:09:35.553582   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.553590   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:35.553604   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:35.553669   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:35.596444   73900 cri.go:89] found id: ""
	I0930 21:09:35.596476   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.596487   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:35.596495   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:35.596583   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:35.634232   73900 cri.go:89] found id: ""
	I0930 21:09:35.634259   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.634268   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:35.634274   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:35.634322   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:35.669637   73900 cri.go:89] found id: ""
	I0930 21:09:35.669672   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.669683   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:35.669694   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:35.669706   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:35.719433   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:35.719469   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:35.733383   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:35.733415   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:35.811860   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:35.811887   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:35.811913   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:35.896206   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:35.896272   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:35.569694   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:37.570548   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:36.968789   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:38.968959   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:35.814006   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:38.306676   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:38.435999   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:38.450091   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:38.450152   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:38.489127   73900 cri.go:89] found id: ""
	I0930 21:09:38.489153   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.489161   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:38.489166   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:38.489221   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:38.520760   73900 cri.go:89] found id: ""
	I0930 21:09:38.520783   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.520792   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:38.520798   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:38.520847   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:38.556279   73900 cri.go:89] found id: ""
	I0930 21:09:38.556306   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.556315   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:38.556319   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:38.556379   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:38.590804   73900 cri.go:89] found id: ""
	I0930 21:09:38.590827   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.590834   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:38.590840   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:38.590906   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:38.624765   73900 cri.go:89] found id: ""
	I0930 21:09:38.624792   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.624800   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:38.624805   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:38.624857   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:38.660587   73900 cri.go:89] found id: ""
	I0930 21:09:38.660614   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.660625   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:38.660635   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:38.660702   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:38.693314   73900 cri.go:89] found id: ""
	I0930 21:09:38.693352   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.693362   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:38.693371   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:38.693441   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:38.729163   73900 cri.go:89] found id: ""
	I0930 21:09:38.729197   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.729212   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:38.729223   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:38.729235   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:38.780787   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:38.780828   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:38.794983   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:38.795009   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:38.861886   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:38.861911   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:38.861926   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:38.936958   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:38.936994   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:41.479891   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:41.493041   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:41.493106   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:41.528855   73900 cri.go:89] found id: ""
	I0930 21:09:41.528889   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.528900   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:41.528906   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:41.528967   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:41.565193   73900 cri.go:89] found id: ""
	I0930 21:09:41.565216   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.565224   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:41.565230   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:41.565289   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:41.599503   73900 cri.go:89] found id: ""
	I0930 21:09:41.599538   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.599547   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:41.599553   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:41.599611   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:41.636623   73900 cri.go:89] found id: ""
	I0930 21:09:41.636651   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.636663   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:41.636671   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:41.636728   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:41.671727   73900 cri.go:89] found id: ""
	I0930 21:09:41.671753   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.671760   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:41.671765   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:41.671819   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:41.705499   73900 cri.go:89] found id: ""
	I0930 21:09:41.705533   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.705543   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:41.705549   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:41.705602   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:41.738262   73900 cri.go:89] found id: ""
	I0930 21:09:41.738285   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.738292   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:41.738297   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:41.738351   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:41.774232   73900 cri.go:89] found id: ""
	I0930 21:09:41.774261   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.774269   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:41.774277   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:41.774288   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:41.826060   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:41.826093   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:41.839308   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:41.839335   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:41.908599   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:41.908626   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:41.908640   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:41.986337   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:41.986375   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:40.069900   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:42.070035   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:41.469908   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:43.968111   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:40.307200   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:42.308356   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:44.807663   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:44.527015   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:44.539973   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:44.540036   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:44.575985   73900 cri.go:89] found id: ""
	I0930 21:09:44.576012   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.576021   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:44.576027   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:44.576076   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:44.612693   73900 cri.go:89] found id: ""
	I0930 21:09:44.612724   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.612736   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:44.612743   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:44.612809   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:44.646515   73900 cri.go:89] found id: ""
	I0930 21:09:44.646544   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.646555   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:44.646562   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:44.646623   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:44.679980   73900 cri.go:89] found id: ""
	I0930 21:09:44.680011   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.680022   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:44.680030   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:44.680089   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:44.714078   73900 cri.go:89] found id: ""
	I0930 21:09:44.714117   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.714128   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:44.714135   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:44.714193   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:44.748491   73900 cri.go:89] found id: ""
	I0930 21:09:44.748521   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.748531   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:44.748539   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:44.748618   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:44.780902   73900 cri.go:89] found id: ""
	I0930 21:09:44.780936   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.780947   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:44.780955   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:44.781013   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:44.817944   73900 cri.go:89] found id: ""
	I0930 21:09:44.817999   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.818011   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:44.818022   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:44.818038   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:44.873896   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:44.873926   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:44.887829   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:44.887858   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:44.957562   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:44.957584   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:44.957598   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:45.037892   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:45.037934   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:47.583013   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:47.595799   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:47.595870   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:47.630348   73900 cri.go:89] found id: ""
	I0930 21:09:47.630377   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.630385   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:47.630391   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:47.630444   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:47.663416   73900 cri.go:89] found id: ""
	I0930 21:09:47.663440   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.663448   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:47.663454   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:47.663500   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:47.700145   73900 cri.go:89] found id: ""
	I0930 21:09:47.700174   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.700184   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:47.700192   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:47.700253   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:47.732539   73900 cri.go:89] found id: ""
	I0930 21:09:47.732567   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.732577   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:47.732583   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:47.732637   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:44.569951   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:46.570501   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:48.574018   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:45.971063   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:48.468661   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:47.307709   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:49.806843   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:47.764470   73900 cri.go:89] found id: ""
	I0930 21:09:47.764493   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.764501   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:47.764507   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:47.764553   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:47.802365   73900 cri.go:89] found id: ""
	I0930 21:09:47.802393   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.802403   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:47.802411   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:47.802468   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:47.836504   73900 cri.go:89] found id: ""
	I0930 21:09:47.836531   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.836542   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:47.836549   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:47.836611   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:47.870315   73900 cri.go:89] found id: ""
	I0930 21:09:47.870338   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.870351   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:47.870359   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:47.870370   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:47.919974   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:47.920011   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:47.934157   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:47.934190   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:48.003046   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:48.003072   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:48.003085   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:48.084947   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:48.084985   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:50.624791   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:50.638118   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:50.638196   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:50.672448   73900 cri.go:89] found id: ""
	I0930 21:09:50.672479   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.672488   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:50.672503   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:50.672557   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:50.706057   73900 cri.go:89] found id: ""
	I0930 21:09:50.706080   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.706088   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:50.706093   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:50.706142   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:50.738101   73900 cri.go:89] found id: ""
	I0930 21:09:50.738126   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.738134   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:50.738140   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:50.738207   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:50.772483   73900 cri.go:89] found id: ""
	I0930 21:09:50.772508   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.772516   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:50.772522   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:50.772581   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:50.805169   73900 cri.go:89] found id: ""
	I0930 21:09:50.805200   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.805211   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:50.805220   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:50.805276   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:50.842144   73900 cri.go:89] found id: ""
	I0930 21:09:50.842168   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.842176   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:50.842182   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:50.842236   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:50.875512   73900 cri.go:89] found id: ""
	I0930 21:09:50.875563   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.875575   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:50.875582   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:50.875643   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:50.909549   73900 cri.go:89] found id: ""
	I0930 21:09:50.909580   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.909591   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:50.909599   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:50.909610   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:50.962064   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:50.962098   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:50.976979   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:50.977012   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:51.053784   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:51.053815   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:51.053833   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:51.130939   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:51.130975   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:51.069919   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:53.568708   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:50.468737   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:52.968935   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:52.306733   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:54.306875   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:53.667675   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:53.680381   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:53.680449   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:53.712759   73900 cri.go:89] found id: ""
	I0930 21:09:53.712791   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.712800   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:53.712807   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:53.712871   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:53.748958   73900 cri.go:89] found id: ""
	I0930 21:09:53.748990   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.749002   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:53.749009   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:53.749078   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:53.783243   73900 cri.go:89] found id: ""
	I0930 21:09:53.783272   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.783282   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:53.783289   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:53.783382   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:53.823848   73900 cri.go:89] found id: ""
	I0930 21:09:53.823875   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.823883   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:53.823890   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:53.823941   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:53.865607   73900 cri.go:89] found id: ""
	I0930 21:09:53.865635   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.865643   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:53.865648   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:53.865693   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:53.900888   73900 cri.go:89] found id: ""
	I0930 21:09:53.900912   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.900920   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:53.900926   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:53.900985   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:53.933688   73900 cri.go:89] found id: ""
	I0930 21:09:53.933717   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.933728   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:53.933736   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:53.933798   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:53.968702   73900 cri.go:89] found id: ""
	I0930 21:09:53.968731   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.968740   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:53.968749   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:53.968760   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:54.021588   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:54.021626   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:54.036681   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:54.036719   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:54.112189   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:54.112209   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:54.112223   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:54.185028   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:54.185085   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:56.725146   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:56.739358   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:56.739421   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:56.779278   73900 cri.go:89] found id: ""
	I0930 21:09:56.779313   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.779322   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:56.779329   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:56.779377   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:56.815972   73900 cri.go:89] found id: ""
	I0930 21:09:56.816000   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.816011   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:56.816018   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:56.816084   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:56.849425   73900 cri.go:89] found id: ""
	I0930 21:09:56.849458   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.849471   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:56.849478   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:56.849542   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:56.885483   73900 cri.go:89] found id: ""
	I0930 21:09:56.885510   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.885520   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:56.885527   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:56.885586   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:56.917832   73900 cri.go:89] found id: ""
	I0930 21:09:56.917862   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.917872   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:56.917879   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:56.917932   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:56.951613   73900 cri.go:89] found id: ""
	I0930 21:09:56.951643   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.951654   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:56.951664   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:56.951726   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:56.987577   73900 cri.go:89] found id: ""
	I0930 21:09:56.987608   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.987620   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:56.987628   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:56.987691   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:57.024871   73900 cri.go:89] found id: ""
	I0930 21:09:57.024903   73900 logs.go:276] 0 containers: []
	W0930 21:09:57.024912   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:57.024920   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:57.024935   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:57.038279   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:57.038309   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:57.111955   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:57.111985   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:57.111998   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:57.193719   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:57.193755   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:57.230058   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:57.230085   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:55.568928   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:58.069462   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:55.467583   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:57.968380   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:59.969131   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:56.807753   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:58.808055   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:59.780762   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:59.794210   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:59.794277   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:59.828258   73900 cri.go:89] found id: ""
	I0930 21:09:59.828287   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.828298   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:59.828306   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:59.828369   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:59.868295   73900 cri.go:89] found id: ""
	I0930 21:09:59.868331   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.868353   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:59.868363   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:59.868437   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:59.900298   73900 cri.go:89] found id: ""
	I0930 21:09:59.900326   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.900337   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:59.900343   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:59.900403   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:59.934081   73900 cri.go:89] found id: ""
	I0930 21:09:59.934108   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.934120   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:59.934127   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:59.934183   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:59.970564   73900 cri.go:89] found id: ""
	I0930 21:09:59.970592   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.970600   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:59.970605   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:59.970652   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:00.006215   73900 cri.go:89] found id: ""
	I0930 21:10:00.006249   73900 logs.go:276] 0 containers: []
	W0930 21:10:00.006259   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:00.006270   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:00.006348   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:00.040106   73900 cri.go:89] found id: ""
	I0930 21:10:00.040135   73900 logs.go:276] 0 containers: []
	W0930 21:10:00.040144   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:00.040150   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:00.040202   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:00.079310   73900 cri.go:89] found id: ""
	I0930 21:10:00.079345   73900 logs.go:276] 0 containers: []
	W0930 21:10:00.079354   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:00.079365   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:00.079378   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:00.161243   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:00.161284   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:00.198911   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:00.198941   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:00.247697   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:00.247735   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:00.260905   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:00.260933   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:00.332502   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:00.569218   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:02.569371   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:02.468439   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:04.968585   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:00.808753   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:03.306574   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:02.833204   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:02.846807   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:02.846893   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:02.882386   73900 cri.go:89] found id: ""
	I0930 21:10:02.882420   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.882431   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:02.882439   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:02.882504   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:02.918589   73900 cri.go:89] found id: ""
	I0930 21:10:02.918617   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.918633   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:02.918642   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:02.918722   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:02.952758   73900 cri.go:89] found id: ""
	I0930 21:10:02.952789   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.952799   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:02.952806   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:02.952871   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:02.991406   73900 cri.go:89] found id: ""
	I0930 21:10:02.991439   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.991448   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:02.991454   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:02.991511   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:03.030075   73900 cri.go:89] found id: ""
	I0930 21:10:03.030104   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.030112   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:03.030121   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:03.030172   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:03.063630   73900 cri.go:89] found id: ""
	I0930 21:10:03.063654   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.063662   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:03.063668   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:03.063718   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:03.098607   73900 cri.go:89] found id: ""
	I0930 21:10:03.098636   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.098644   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:03.098649   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:03.098702   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:03.133161   73900 cri.go:89] found id: ""
	I0930 21:10:03.133189   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.133198   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:03.133206   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:03.133217   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:03.211046   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:03.211083   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:03.252585   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:03.252615   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:03.307019   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:03.307049   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:03.320781   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:03.320811   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:03.408645   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:05.909638   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:05.922674   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:05.922744   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:05.955264   73900 cri.go:89] found id: ""
	I0930 21:10:05.955305   73900 logs.go:276] 0 containers: []
	W0930 21:10:05.955318   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:05.955326   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:05.955378   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:05.991055   73900 cri.go:89] found id: ""
	I0930 21:10:05.991100   73900 logs.go:276] 0 containers: []
	W0930 21:10:05.991122   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:05.991130   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:05.991194   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:06.025725   73900 cri.go:89] found id: ""
	I0930 21:10:06.025755   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.025766   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:06.025773   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:06.025832   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:06.067700   73900 cri.go:89] found id: ""
	I0930 21:10:06.067726   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.067736   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:06.067743   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:06.067801   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:06.102729   73900 cri.go:89] found id: ""
	I0930 21:10:06.102760   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.102771   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:06.102784   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:06.102845   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:06.137120   73900 cri.go:89] found id: ""
	I0930 21:10:06.137148   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.137159   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:06.137164   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:06.137215   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:06.169985   73900 cri.go:89] found id: ""
	I0930 21:10:06.170014   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.170023   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:06.170029   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:06.170082   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:06.206928   73900 cri.go:89] found id: ""
	I0930 21:10:06.206951   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.206959   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:06.206967   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:06.206977   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:06.258835   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:06.258870   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:06.273527   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:06.273556   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:06.351335   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:06.351359   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:06.351373   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:06.423412   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:06.423450   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:04.569756   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:07.069437   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:09.074024   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:06.969500   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:09.471298   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:05.807932   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:08.306749   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:08.968986   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:08.984075   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:08.984139   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:09.016815   73900 cri.go:89] found id: ""
	I0930 21:10:09.016847   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.016858   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:09.016864   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:09.016928   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:09.051603   73900 cri.go:89] found id: ""
	I0930 21:10:09.051626   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.051633   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:09.051639   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:09.051693   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:09.088820   73900 cri.go:89] found id: ""
	I0930 21:10:09.088856   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.088870   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:09.088884   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:09.088949   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:09.124032   73900 cri.go:89] found id: ""
	I0930 21:10:09.124064   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.124076   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:09.124083   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:09.124140   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:09.177129   73900 cri.go:89] found id: ""
	I0930 21:10:09.177161   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.177172   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:09.177178   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:09.177228   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:09.211490   73900 cri.go:89] found id: ""
	I0930 21:10:09.211513   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.211521   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:09.211540   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:09.211605   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:09.252187   73900 cri.go:89] found id: ""
	I0930 21:10:09.252211   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.252221   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:09.252229   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:09.252289   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:09.286970   73900 cri.go:89] found id: ""
	I0930 21:10:09.287004   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.287012   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:09.287020   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:09.287031   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:09.369387   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:09.369410   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:09.369422   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:09.450685   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:09.450733   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:09.491302   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:09.491331   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:09.540183   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:09.540219   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:12.054793   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:12.068635   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:12.068717   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:12.103118   73900 cri.go:89] found id: ""
	I0930 21:10:12.103140   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.103149   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:12.103154   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:12.103219   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:12.137992   73900 cri.go:89] found id: ""
	I0930 21:10:12.138020   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.138031   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:12.138040   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:12.138103   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:12.175559   73900 cri.go:89] found id: ""
	I0930 21:10:12.175591   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.175609   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:12.175616   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:12.175678   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:12.209630   73900 cri.go:89] found id: ""
	I0930 21:10:12.209655   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.209666   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:12.209672   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:12.209735   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:12.245844   73900 cri.go:89] found id: ""
	I0930 21:10:12.245879   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.245891   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:12.245901   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:12.245961   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:12.280385   73900 cri.go:89] found id: ""
	I0930 21:10:12.280412   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.280420   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:12.280426   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:12.280484   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:12.315424   73900 cri.go:89] found id: ""
	I0930 21:10:12.315453   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.315463   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:12.315473   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:12.315566   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:12.349223   73900 cri.go:89] found id: ""
	I0930 21:10:12.349251   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.349270   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:12.349279   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:12.349291   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:12.362360   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:12.362397   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:12.432060   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:12.432084   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:12.432101   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:12.506059   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:12.506096   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:12.541319   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:12.541348   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:11.568740   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:13.569690   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:11.968234   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:13.968634   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:10.306903   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:12.307072   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:14.807562   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:15.098852   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:15.111919   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:15.112001   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:15.149174   73900 cri.go:89] found id: ""
	I0930 21:10:15.149206   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.149216   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:15.149223   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:15.149286   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:15.187283   73900 cri.go:89] found id: ""
	I0930 21:10:15.187316   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.187326   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:15.187333   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:15.187392   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:15.223896   73900 cri.go:89] found id: ""
	I0930 21:10:15.223922   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.223933   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:15.223940   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:15.224000   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:15.260530   73900 cri.go:89] found id: ""
	I0930 21:10:15.260559   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.260567   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:15.260573   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:15.260634   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:15.296319   73900 cri.go:89] found id: ""
	I0930 21:10:15.296346   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.296357   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:15.296363   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:15.296425   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:15.333785   73900 cri.go:89] found id: ""
	I0930 21:10:15.333830   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.333843   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:15.333856   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:15.333932   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:15.368235   73900 cri.go:89] found id: ""
	I0930 21:10:15.368268   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.368280   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:15.368288   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:15.368354   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:15.408155   73900 cri.go:89] found id: ""
	I0930 21:10:15.408184   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.408192   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:15.408200   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:15.408210   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:15.462018   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:15.462058   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:15.477345   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:15.477376   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:15.558398   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:15.558423   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:15.558442   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:15.662269   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:15.662311   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:15.569988   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:18.069056   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:16.467859   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:18.468764   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:17.307469   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:19.809316   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:18.199477   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:18.213235   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:18.213320   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:18.250379   73900 cri.go:89] found id: ""
	I0930 21:10:18.250409   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.250418   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:18.250424   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:18.250515   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:18.283381   73900 cri.go:89] found id: ""
	I0930 21:10:18.283407   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.283416   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:18.283422   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:18.283482   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:18.321601   73900 cri.go:89] found id: ""
	I0930 21:10:18.321635   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.321646   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:18.321659   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:18.321720   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:18.354210   73900 cri.go:89] found id: ""
	I0930 21:10:18.354242   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.354254   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:18.354262   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:18.354330   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:18.391982   73900 cri.go:89] found id: ""
	I0930 21:10:18.392019   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.392029   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:18.392035   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:18.392150   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:18.428826   73900 cri.go:89] found id: ""
	I0930 21:10:18.428851   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.428862   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:18.428870   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:18.428927   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:18.465841   73900 cri.go:89] found id: ""
	I0930 21:10:18.465868   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.465878   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:18.465887   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:18.465934   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:18.502747   73900 cri.go:89] found id: ""
	I0930 21:10:18.502775   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.502783   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:18.502793   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:18.502807   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:18.558025   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:18.558064   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:18.572356   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:18.572383   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:18.642994   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:18.643020   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:18.643033   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:18.722804   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:18.722845   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:21.262790   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:21.276427   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:21.276510   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:21.323245   73900 cri.go:89] found id: ""
	I0930 21:10:21.323274   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.323284   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:21.323291   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:21.323377   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:21.381684   73900 cri.go:89] found id: ""
	I0930 21:10:21.381725   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.381736   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:21.381744   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:21.381813   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:21.428818   73900 cri.go:89] found id: ""
	I0930 21:10:21.428841   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.428849   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:21.428854   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:21.428901   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:21.462906   73900 cri.go:89] found id: ""
	I0930 21:10:21.462935   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.462944   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:21.462949   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:21.462995   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:21.502417   73900 cri.go:89] found id: ""
	I0930 21:10:21.502452   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.502464   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:21.502471   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:21.502535   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:21.540004   73900 cri.go:89] found id: ""
	I0930 21:10:21.540037   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.540048   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:21.540056   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:21.540105   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:21.574898   73900 cri.go:89] found id: ""
	I0930 21:10:21.574929   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.574937   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:21.574942   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:21.574999   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:21.609438   73900 cri.go:89] found id: ""
	I0930 21:10:21.609465   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.609473   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:21.609496   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:21.609524   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:21.646651   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:21.646679   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:21.702406   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:21.702451   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:21.716226   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:21.716260   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:21.790089   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:21.790115   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:21.790128   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:20.070823   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:22.568856   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:20.968069   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:22.968208   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:22.307376   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:24.808780   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:24.368291   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:24.381517   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:24.381588   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:24.416535   73900 cri.go:89] found id: ""
	I0930 21:10:24.416559   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.416570   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:24.416577   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:24.416635   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:24.454444   73900 cri.go:89] found id: ""
	I0930 21:10:24.454472   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.454480   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:24.454485   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:24.454537   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:24.492334   73900 cri.go:89] found id: ""
	I0930 21:10:24.492359   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.492367   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:24.492373   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:24.492419   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:24.527590   73900 cri.go:89] found id: ""
	I0930 21:10:24.527622   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.527633   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:24.527642   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:24.527708   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:24.564819   73900 cri.go:89] found id: ""
	I0930 21:10:24.564844   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.564853   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:24.564858   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:24.564915   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:24.599367   73900 cri.go:89] found id: ""
	I0930 21:10:24.599390   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.599398   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:24.599403   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:24.599450   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:24.636738   73900 cri.go:89] found id: ""
	I0930 21:10:24.636767   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.636778   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:24.636785   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:24.636845   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:24.669607   73900 cri.go:89] found id: ""
	I0930 21:10:24.669640   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.669651   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:24.669663   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:24.669680   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:24.722662   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:24.722696   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:24.736150   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:24.736179   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:24.812022   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:24.812053   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:24.812069   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:24.891291   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:24.891330   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:27.430595   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:27.443990   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:27.444054   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:27.480204   73900 cri.go:89] found id: ""
	I0930 21:10:27.480230   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.480237   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:27.480243   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:27.480297   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:27.516959   73900 cri.go:89] found id: ""
	I0930 21:10:27.516982   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.516989   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:27.516995   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:27.517041   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:27.549717   73900 cri.go:89] found id: ""
	I0930 21:10:27.549745   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.549758   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:27.549769   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:27.549821   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:27.584512   73900 cri.go:89] found id: ""
	I0930 21:10:27.584539   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.584549   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:27.584560   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:27.584619   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:27.623551   73900 cri.go:89] found id: ""
	I0930 21:10:27.623586   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.623603   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:27.623612   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:27.623679   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:27.662453   73900 cri.go:89] found id: ""
	I0930 21:10:27.662478   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.662486   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:27.662493   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:27.662554   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:27.695665   73900 cri.go:89] found id: ""
	I0930 21:10:27.695693   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.695701   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:27.695707   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:27.695765   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:27.729090   73900 cri.go:89] found id: ""
	I0930 21:10:27.729129   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.729137   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:27.729146   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:27.729155   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:24.570129   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:26.572751   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:29.069340   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:25.468598   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:27.469443   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:29.970417   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:27.307766   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:29.806538   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:27.816186   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:27.816230   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:27.854451   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:27.854485   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:27.905674   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:27.905709   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:27.918889   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:27.918917   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:27.989739   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:30.490514   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:30.502735   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:30.502810   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:30.535874   73900 cri.go:89] found id: ""
	I0930 21:10:30.535902   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.535914   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:30.535922   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:30.535989   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:30.570603   73900 cri.go:89] found id: ""
	I0930 21:10:30.570627   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.570634   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:30.570643   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:30.570689   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:30.605225   73900 cri.go:89] found id: ""
	I0930 21:10:30.605255   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.605266   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:30.605273   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:30.605333   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:30.640810   73900 cri.go:89] found id: ""
	I0930 21:10:30.640839   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.640849   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:30.640857   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:30.640914   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:30.673101   73900 cri.go:89] found id: ""
	I0930 21:10:30.673129   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.673137   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:30.673142   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:30.673189   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:30.704332   73900 cri.go:89] found id: ""
	I0930 21:10:30.704356   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.704366   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:30.704373   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:30.704440   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:30.738463   73900 cri.go:89] found id: ""
	I0930 21:10:30.738494   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.738506   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:30.738516   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:30.738579   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:30.772115   73900 cri.go:89] found id: ""
	I0930 21:10:30.772153   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.772164   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:30.772175   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:30.772193   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:30.850683   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:30.850707   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:30.850720   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:30.930674   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:30.930718   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:30.975781   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:30.975819   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:31.030566   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:31.030613   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:31.070216   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:33.568935   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:32.468224   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:34.968557   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:31.807408   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:33.807669   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:33.544354   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:33.557613   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:33.557692   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:33.594372   73900 cri.go:89] found id: ""
	I0930 21:10:33.594394   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.594401   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:33.594406   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:33.594455   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:33.632026   73900 cri.go:89] found id: ""
	I0930 21:10:33.632048   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.632056   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:33.632061   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:33.632113   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:33.666168   73900 cri.go:89] found id: ""
	I0930 21:10:33.666201   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.666213   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:33.666219   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:33.666269   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:33.697772   73900 cri.go:89] found id: ""
	I0930 21:10:33.697801   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.697810   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:33.697816   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:33.697864   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:33.732821   73900 cri.go:89] found id: ""
	I0930 21:10:33.732851   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.732862   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:33.732869   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:33.732952   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:33.770646   73900 cri.go:89] found id: ""
	I0930 21:10:33.770682   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.770693   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:33.770701   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:33.770756   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:33.804803   73900 cri.go:89] found id: ""
	I0930 21:10:33.804831   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.804842   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:33.804848   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:33.804921   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:33.838455   73900 cri.go:89] found id: ""
	I0930 21:10:33.838484   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.838495   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:33.838505   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:33.838523   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:33.879785   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:33.879812   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:33.934586   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:33.934623   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:33.948250   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:33.948293   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:34.023021   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:34.023054   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:34.023069   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:36.604173   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:36.616668   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:36.616735   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:36.650716   73900 cri.go:89] found id: ""
	I0930 21:10:36.650748   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.650757   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:36.650767   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:36.650833   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:36.685705   73900 cri.go:89] found id: ""
	I0930 21:10:36.685739   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.685751   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:36.685758   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:36.685819   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:36.719895   73900 cri.go:89] found id: ""
	I0930 21:10:36.719922   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.719932   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:36.719939   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:36.720006   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:36.753123   73900 cri.go:89] found id: ""
	I0930 21:10:36.753148   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.753159   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:36.753166   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:36.753231   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:36.790023   73900 cri.go:89] found id: ""
	I0930 21:10:36.790054   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.790066   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:36.790073   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:36.790135   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:36.825280   73900 cri.go:89] found id: ""
	I0930 21:10:36.825314   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.825324   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:36.825343   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:36.825411   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:36.859028   73900 cri.go:89] found id: ""
	I0930 21:10:36.859053   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.859060   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:36.859066   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:36.859125   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:36.894952   73900 cri.go:89] found id: ""
	I0930 21:10:36.894980   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.894988   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:36.894996   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:36.895010   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:36.968214   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:36.968241   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:36.968256   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:37.047866   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:37.047903   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:37.088671   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:37.088705   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:37.144014   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:37.144058   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:36.068920   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:38.069544   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:36.969475   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:39.469207   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:35.808654   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:38.306701   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:39.657874   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:39.671042   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:39.671100   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:39.706210   73900 cri.go:89] found id: ""
	I0930 21:10:39.706235   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.706243   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:39.706248   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:39.706295   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:39.743194   73900 cri.go:89] found id: ""
	I0930 21:10:39.743218   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.743226   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:39.743232   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:39.743280   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:39.780681   73900 cri.go:89] found id: ""
	I0930 21:10:39.780707   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.780715   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:39.780720   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:39.780774   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:39.815841   73900 cri.go:89] found id: ""
	I0930 21:10:39.815865   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.815874   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:39.815879   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:39.815933   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:39.849497   73900 cri.go:89] found id: ""
	I0930 21:10:39.849523   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.849534   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:39.849541   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:39.849603   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:39.883476   73900 cri.go:89] found id: ""
	I0930 21:10:39.883507   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.883519   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:39.883562   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:39.883633   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:39.918300   73900 cri.go:89] found id: ""
	I0930 21:10:39.918329   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.918338   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:39.918343   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:39.918392   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:39.955751   73900 cri.go:89] found id: ""
	I0930 21:10:39.955780   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.955788   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:39.955795   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:39.955807   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:40.010994   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:40.011035   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:40.025992   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:40.026022   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:40.097709   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:40.097731   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:40.097748   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:40.176790   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:40.176824   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:42.713838   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:42.729806   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:42.729885   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:40.070503   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:42.568444   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:41.968357   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:44.469223   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:40.308072   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:42.807489   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:42.765449   73900 cri.go:89] found id: ""
	I0930 21:10:42.765483   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.765491   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:42.765498   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:42.765555   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:42.802556   73900 cri.go:89] found id: ""
	I0930 21:10:42.802584   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.802604   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:42.802612   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:42.802693   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:42.836537   73900 cri.go:89] found id: ""
	I0930 21:10:42.836568   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.836585   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:42.836598   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:42.836662   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:42.870475   73900 cri.go:89] found id: ""
	I0930 21:10:42.870503   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.870511   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:42.870526   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:42.870589   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:42.907061   73900 cri.go:89] found id: ""
	I0930 21:10:42.907090   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.907098   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:42.907103   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:42.907153   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:42.941607   73900 cri.go:89] found id: ""
	I0930 21:10:42.941632   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.941640   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:42.941646   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:42.941701   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:42.977073   73900 cri.go:89] found id: ""
	I0930 21:10:42.977097   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.977105   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:42.977111   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:42.977159   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:43.010838   73900 cri.go:89] found id: ""
	I0930 21:10:43.010859   73900 logs.go:276] 0 containers: []
	W0930 21:10:43.010867   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:43.010875   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:43.010886   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:43.061264   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:43.061299   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:43.075917   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:43.075950   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:43.137088   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:43.137111   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:43.137126   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:43.219393   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:43.219440   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:45.761752   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:45.775864   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:45.775942   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:45.810693   73900 cri.go:89] found id: ""
	I0930 21:10:45.810724   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.810734   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:45.810740   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:45.810797   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:45.848360   73900 cri.go:89] found id: ""
	I0930 21:10:45.848399   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.848410   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:45.848418   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:45.848475   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:45.885504   73900 cri.go:89] found id: ""
	I0930 21:10:45.885550   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.885560   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:45.885565   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:45.885616   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:45.919747   73900 cri.go:89] found id: ""
	I0930 21:10:45.919776   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.919784   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:45.919789   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:45.919843   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:45.953787   73900 cri.go:89] found id: ""
	I0930 21:10:45.953820   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.953831   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:45.953839   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:45.953893   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:45.990145   73900 cri.go:89] found id: ""
	I0930 21:10:45.990174   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.990184   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:45.990192   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:45.990253   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:46.023359   73900 cri.go:89] found id: ""
	I0930 21:10:46.023383   73900 logs.go:276] 0 containers: []
	W0930 21:10:46.023391   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:46.023396   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:46.023447   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:46.057460   73900 cri.go:89] found id: ""
	I0930 21:10:46.057493   73900 logs.go:276] 0 containers: []
	W0930 21:10:46.057504   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:46.057514   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:46.057533   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:46.097082   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:46.097109   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:46.147921   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:46.147960   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:46.161204   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:46.161232   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:46.224308   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:46.224336   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:46.224351   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:44.568918   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:46.569353   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:48.569656   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:46.967674   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:48.967998   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:45.306917   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:47.806333   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:49.807846   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:48.805668   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:48.818569   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:48.818663   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:48.856783   73900 cri.go:89] found id: ""
	I0930 21:10:48.856815   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.856827   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:48.856834   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:48.856896   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:48.889185   73900 cri.go:89] found id: ""
	I0930 21:10:48.889217   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.889229   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:48.889236   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:48.889306   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:48.922013   73900 cri.go:89] found id: ""
	I0930 21:10:48.922041   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.922050   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:48.922055   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:48.922107   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:48.956818   73900 cri.go:89] found id: ""
	I0930 21:10:48.956848   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.956858   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:48.956866   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:48.956929   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:48.994942   73900 cri.go:89] found id: ""
	I0930 21:10:48.994975   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.994985   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:48.994991   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:48.995052   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:49.031448   73900 cri.go:89] found id: ""
	I0930 21:10:49.031479   73900 logs.go:276] 0 containers: []
	W0930 21:10:49.031491   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:49.031500   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:49.031583   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:49.066570   73900 cri.go:89] found id: ""
	I0930 21:10:49.066600   73900 logs.go:276] 0 containers: []
	W0930 21:10:49.066608   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:49.066613   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:49.066658   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:49.100952   73900 cri.go:89] found id: ""
	I0930 21:10:49.100981   73900 logs.go:276] 0 containers: []
	W0930 21:10:49.100992   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:49.101000   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:49.101010   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:49.176423   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:49.176458   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:49.212358   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:49.212387   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:49.263177   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:49.263227   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:49.275940   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:49.275969   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:49.346915   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:51.847761   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:51.860571   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:51.860646   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:51.894863   73900 cri.go:89] found id: ""
	I0930 21:10:51.894896   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.894906   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:51.894914   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:51.894978   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:51.927977   73900 cri.go:89] found id: ""
	I0930 21:10:51.928007   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.928018   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:51.928025   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:51.928083   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:51.962894   73900 cri.go:89] found id: ""
	I0930 21:10:51.962924   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.962933   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:51.962940   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:51.962999   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:51.998453   73900 cri.go:89] found id: ""
	I0930 21:10:51.998482   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.998493   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:51.998500   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:51.998562   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:52.033039   73900 cri.go:89] found id: ""
	I0930 21:10:52.033066   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.033075   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:52.033080   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:52.033139   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:52.067222   73900 cri.go:89] found id: ""
	I0930 21:10:52.067254   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.067267   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:52.067274   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:52.067341   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:52.102414   73900 cri.go:89] found id: ""
	I0930 21:10:52.102439   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.102448   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:52.102453   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:52.102498   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:52.135175   73900 cri.go:89] found id: ""
	I0930 21:10:52.135204   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.135214   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:52.135225   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:52.135239   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:52.185736   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:52.185779   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:52.198756   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:52.198792   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:52.264816   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:52.264847   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:52.264859   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:52.347189   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:52.347229   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:50.569765   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:53.068745   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:50.968885   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:52.970855   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:52.307245   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:54.308516   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:54.887502   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:54.900067   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:54.900153   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:54.939214   73900 cri.go:89] found id: ""
	I0930 21:10:54.939241   73900 logs.go:276] 0 containers: []
	W0930 21:10:54.939249   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:54.939259   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:54.939313   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:54.973451   73900 cri.go:89] found id: ""
	I0930 21:10:54.973475   73900 logs.go:276] 0 containers: []
	W0930 21:10:54.973483   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:54.973488   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:54.973541   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:55.007815   73900 cri.go:89] found id: ""
	I0930 21:10:55.007841   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.007850   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:55.007855   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:55.007914   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:55.040861   73900 cri.go:89] found id: ""
	I0930 21:10:55.040891   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.040899   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:55.040905   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:55.040957   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:55.076053   73900 cri.go:89] found id: ""
	I0930 21:10:55.076086   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.076098   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:55.076111   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:55.076172   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:55.108768   73900 cri.go:89] found id: ""
	I0930 21:10:55.108797   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.108807   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:55.108814   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:55.108879   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:55.155283   73900 cri.go:89] found id: ""
	I0930 21:10:55.155316   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.155331   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:55.155338   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:55.155398   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:55.189370   73900 cri.go:89] found id: ""
	I0930 21:10:55.189399   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.189408   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:55.189416   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:55.189432   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:55.243067   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:55.243101   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:55.257021   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:55.257051   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:55.329381   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:55.329408   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:55.329423   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:55.405691   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:55.405762   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:55.069901   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:57.568914   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:55.468489   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:57.977733   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:56.806381   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:58.806880   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:57.957380   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:57.971160   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:57.971245   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:58.004401   73900 cri.go:89] found id: ""
	I0930 21:10:58.004446   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.004457   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:58.004465   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:58.004524   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:58.038954   73900 cri.go:89] found id: ""
	I0930 21:10:58.038978   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.038986   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:58.038991   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:58.039036   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:58.072801   73900 cri.go:89] found id: ""
	I0930 21:10:58.072830   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.072842   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:58.072849   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:58.072909   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:58.104908   73900 cri.go:89] found id: ""
	I0930 21:10:58.104936   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.104946   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:58.104953   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:58.105014   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:58.139693   73900 cri.go:89] found id: ""
	I0930 21:10:58.139725   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.139735   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:58.139741   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:58.139795   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:58.174149   73900 cri.go:89] found id: ""
	I0930 21:10:58.174180   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.174192   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:58.174199   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:58.174275   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:58.206067   73900 cri.go:89] found id: ""
	I0930 21:10:58.206094   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.206105   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:58.206112   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:58.206167   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:58.240613   73900 cri.go:89] found id: ""
	I0930 21:10:58.240645   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.240653   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:58.240661   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:58.240674   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:58.306061   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:58.306086   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:58.306100   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:58.386030   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:58.386073   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:58.425526   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:58.425562   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:58.483364   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:58.483409   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:00.998086   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:01.011934   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:01.012015   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:01.047923   73900 cri.go:89] found id: ""
	I0930 21:11:01.047951   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.047960   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:01.047966   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:01.048024   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:01.082126   73900 cri.go:89] found id: ""
	I0930 21:11:01.082159   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.082170   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:01.082176   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:01.082224   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:01.117746   73900 cri.go:89] found id: ""
	I0930 21:11:01.117775   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.117787   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:01.117794   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:01.117853   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:01.153034   73900 cri.go:89] found id: ""
	I0930 21:11:01.153059   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.153067   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:01.153072   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:01.153128   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:01.188102   73900 cri.go:89] found id: ""
	I0930 21:11:01.188125   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.188133   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:01.188139   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:01.188193   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:01.222120   73900 cri.go:89] found id: ""
	I0930 21:11:01.222147   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.222155   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:01.222161   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:01.222215   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:01.258899   73900 cri.go:89] found id: ""
	I0930 21:11:01.258929   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.258941   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:01.258949   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:01.259008   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:01.295473   73900 cri.go:89] found id: ""
	I0930 21:11:01.295504   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.295512   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:01.295521   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:01.295551   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:01.349134   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:01.349181   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:01.363113   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:01.363147   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:01.436589   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:01.436609   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:01.436622   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:01.516384   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:01.516420   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:00.069406   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:02.568203   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:00.468104   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:02.968911   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:00.807318   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:03.307184   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:04.075114   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:04.089300   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:04.089375   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:04.124385   73900 cri.go:89] found id: ""
	I0930 21:11:04.124411   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.124419   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:04.124425   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:04.124491   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:04.158326   73900 cri.go:89] found id: ""
	I0930 21:11:04.158359   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.158367   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:04.158372   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:04.158419   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:04.193477   73900 cri.go:89] found id: ""
	I0930 21:11:04.193507   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.193516   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:04.193521   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:04.193577   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:04.231697   73900 cri.go:89] found id: ""
	I0930 21:11:04.231723   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.231731   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:04.231737   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:04.231805   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:04.265879   73900 cri.go:89] found id: ""
	I0930 21:11:04.265903   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.265910   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:04.265915   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:04.265960   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:04.301382   73900 cri.go:89] found id: ""
	I0930 21:11:04.301421   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.301432   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:04.301440   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:04.301505   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:04.337496   73900 cri.go:89] found id: ""
	I0930 21:11:04.337521   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.337529   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:04.337534   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:04.337584   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:04.372631   73900 cri.go:89] found id: ""
	I0930 21:11:04.372665   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.372677   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:04.372700   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:04.372715   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:04.385279   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:04.385311   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:04.456700   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:04.456721   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:04.456732   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:04.537892   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:04.537933   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:04.574919   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:04.574947   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:07.128733   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:07.142625   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:07.142687   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:07.177450   73900 cri.go:89] found id: ""
	I0930 21:11:07.177475   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.177483   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:07.177488   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:07.177536   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:07.210158   73900 cri.go:89] found id: ""
	I0930 21:11:07.210184   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.210192   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:07.210197   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:07.210256   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:07.242623   73900 cri.go:89] found id: ""
	I0930 21:11:07.242648   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.242656   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:07.242661   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:07.242705   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:07.277779   73900 cri.go:89] found id: ""
	I0930 21:11:07.277810   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.277821   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:07.277827   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:07.277881   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:07.316232   73900 cri.go:89] found id: ""
	I0930 21:11:07.316257   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.316263   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:07.316269   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:07.316326   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:07.360277   73900 cri.go:89] found id: ""
	I0930 21:11:07.360311   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.360322   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:07.360329   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:07.360391   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:07.412146   73900 cri.go:89] found id: ""
	I0930 21:11:07.412171   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.412181   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:07.412187   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:07.412247   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:07.447179   73900 cri.go:89] found id: ""
	I0930 21:11:07.447209   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.447217   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:07.447225   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:07.447235   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:07.496304   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:07.496340   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:07.510332   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:07.510373   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:07.581335   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:07.581375   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:07.581393   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:07.664522   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:07.664558   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:04.568787   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:07.069201   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:09.070583   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:05.468251   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:07.970913   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:05.308084   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:07.807712   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:10.201145   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:10.213605   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:10.213663   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:10.247875   73900 cri.go:89] found id: ""
	I0930 21:11:10.247904   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.247913   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:10.247918   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:10.247966   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:10.280855   73900 cri.go:89] found id: ""
	I0930 21:11:10.280889   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.280900   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:10.280907   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:10.280967   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:10.315638   73900 cri.go:89] found id: ""
	I0930 21:11:10.315661   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.315669   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:10.315675   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:10.315722   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:10.357059   73900 cri.go:89] found id: ""
	I0930 21:11:10.357086   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.357094   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:10.357100   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:10.357154   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:10.389969   73900 cri.go:89] found id: ""
	I0930 21:11:10.389997   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.390004   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:10.390009   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:10.390060   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:10.424424   73900 cri.go:89] found id: ""
	I0930 21:11:10.424454   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.424463   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:10.424469   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:10.424533   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:10.457608   73900 cri.go:89] found id: ""
	I0930 21:11:10.457638   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.457650   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:10.457657   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:10.457712   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:10.490215   73900 cri.go:89] found id: ""
	I0930 21:11:10.490244   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.490253   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:10.490263   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:10.490278   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:10.554787   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:10.554814   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:10.554829   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:10.632428   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:10.632464   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:10.671018   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:10.671054   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:10.721187   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:10.721228   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:11.568643   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:13.568765   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:10.469296   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:12.968274   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:10.307487   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:12.307960   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:14.808087   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:13.234687   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:13.250680   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:13.250778   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:13.312468   73900 cri.go:89] found id: ""
	I0930 21:11:13.312499   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.312509   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:13.312516   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:13.312578   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:13.367051   73900 cri.go:89] found id: ""
	I0930 21:11:13.367073   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.367084   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:13.367091   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:13.367149   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:13.403019   73900 cri.go:89] found id: ""
	I0930 21:11:13.403055   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.403066   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:13.403074   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:13.403135   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:13.436942   73900 cri.go:89] found id: ""
	I0930 21:11:13.436967   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.436975   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:13.436981   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:13.437047   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:13.470491   73900 cri.go:89] found id: ""
	I0930 21:11:13.470515   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.470523   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:13.470528   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:13.470619   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:13.504078   73900 cri.go:89] found id: ""
	I0930 21:11:13.504112   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.504121   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:13.504127   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:13.504201   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:13.536245   73900 cri.go:89] found id: ""
	I0930 21:11:13.536271   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.536292   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:13.536297   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:13.536357   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:13.570794   73900 cri.go:89] found id: ""
	I0930 21:11:13.570817   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.570827   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:13.570836   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:13.570850   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:13.647919   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:13.647941   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:13.647956   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:13.726113   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:13.726150   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:13.767916   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:13.767942   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:13.826362   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:13.826402   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:16.341252   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:16.354259   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:16.354344   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:16.388627   73900 cri.go:89] found id: ""
	I0930 21:11:16.388650   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.388658   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:16.388663   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:16.388714   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:16.424848   73900 cri.go:89] found id: ""
	I0930 21:11:16.424871   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.424878   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:16.424883   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:16.424941   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:16.460604   73900 cri.go:89] found id: ""
	I0930 21:11:16.460626   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.460635   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:16.460640   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:16.460688   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:16.495908   73900 cri.go:89] found id: ""
	I0930 21:11:16.495932   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.495940   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:16.495946   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:16.496000   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:16.531758   73900 cri.go:89] found id: ""
	I0930 21:11:16.531782   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.531790   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:16.531796   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:16.531853   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:16.566756   73900 cri.go:89] found id: ""
	I0930 21:11:16.566782   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.566792   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:16.566799   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:16.566864   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:16.601978   73900 cri.go:89] found id: ""
	I0930 21:11:16.602005   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.602012   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:16.602022   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:16.602081   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:16.636009   73900 cri.go:89] found id: ""
	I0930 21:11:16.636044   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.636056   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:16.636066   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:16.636079   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:16.688750   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:16.688786   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:16.702364   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:16.702404   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:16.767119   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:16.767175   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:16.767188   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:16.842052   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:16.842095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:15.571440   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:18.068441   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:15.469030   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:17.970779   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:17.307424   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:19.807193   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:19.380570   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:19.394687   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:19.394816   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:19.427087   73900 cri.go:89] found id: ""
	I0930 21:11:19.427116   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.427124   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:19.427129   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:19.427178   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:19.461074   73900 cri.go:89] found id: ""
	I0930 21:11:19.461098   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.461108   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:19.461122   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:19.461183   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:19.494850   73900 cri.go:89] found id: ""
	I0930 21:11:19.494872   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.494880   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:19.494885   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:19.494943   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:19.533448   73900 cri.go:89] found id: ""
	I0930 21:11:19.533480   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.533493   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:19.533500   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:19.533562   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:19.569250   73900 cri.go:89] found id: ""
	I0930 21:11:19.569280   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.569291   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:19.569298   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:19.569383   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:19.603182   73900 cri.go:89] found id: ""
	I0930 21:11:19.603206   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.603213   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:19.603219   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:19.603268   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:19.637411   73900 cri.go:89] found id: ""
	I0930 21:11:19.637433   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.637441   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:19.637447   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:19.637500   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:19.672789   73900 cri.go:89] found id: ""
	I0930 21:11:19.672821   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.672831   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:19.672841   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:19.672854   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:19.755002   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:19.755039   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:19.796499   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:19.796536   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:19.847235   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:19.847272   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:19.861007   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:19.861032   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:19.931214   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:22.431506   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:22.446129   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:22.446199   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:22.484093   73900 cri.go:89] found id: ""
	I0930 21:11:22.484119   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.484126   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:22.484132   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:22.484183   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:22.516949   73900 cri.go:89] found id: ""
	I0930 21:11:22.516986   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.516994   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:22.517001   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:22.517056   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:22.550848   73900 cri.go:89] found id: ""
	I0930 21:11:22.550883   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.550898   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:22.550906   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:22.550966   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:22.586459   73900 cri.go:89] found id: ""
	I0930 21:11:22.586490   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.586498   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:22.586505   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:22.586627   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:22.620538   73900 cri.go:89] found id: ""
	I0930 21:11:22.620566   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.620578   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:22.620586   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:22.620651   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:22.658256   73900 cri.go:89] found id: ""
	I0930 21:11:22.658279   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.658287   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:22.658292   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:22.658352   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:22.690316   73900 cri.go:89] found id: ""
	I0930 21:11:22.690349   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.690365   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:22.690371   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:22.690431   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:22.724234   73900 cri.go:89] found id: ""
	I0930 21:11:22.724264   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.724275   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:22.724285   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:22.724299   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:20.570198   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:23.072974   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:20.468122   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:22.968686   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:22.307398   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:24.806972   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:22.777460   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:22.777503   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:22.790850   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:22.790879   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:22.866058   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:22.866079   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:22.866095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:22.947447   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:22.947488   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:25.486733   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:25.499906   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:25.499976   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:25.533819   73900 cri.go:89] found id: ""
	I0930 21:11:25.533842   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.533850   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:25.533857   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:25.533906   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:25.568037   73900 cri.go:89] found id: ""
	I0930 21:11:25.568059   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.568066   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:25.568071   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:25.568129   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:25.601784   73900 cri.go:89] found id: ""
	I0930 21:11:25.601811   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.601819   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:25.601824   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:25.601876   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:25.638048   73900 cri.go:89] found id: ""
	I0930 21:11:25.638070   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.638078   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:25.638084   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:25.638140   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:25.669946   73900 cri.go:89] found id: ""
	I0930 21:11:25.669968   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.669976   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:25.669981   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:25.670028   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:25.701928   73900 cri.go:89] found id: ""
	I0930 21:11:25.701953   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.701961   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:25.701967   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:25.702025   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:25.744295   73900 cri.go:89] found id: ""
	I0930 21:11:25.744327   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.744335   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:25.744341   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:25.744398   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:25.780175   73900 cri.go:89] found id: ""
	I0930 21:11:25.780205   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.780213   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:25.780221   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:25.780232   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:25.828774   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:25.828812   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:25.842624   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:25.842649   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:25.916408   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:25.916451   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:25.916469   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:25.997896   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:25.997932   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:25.570148   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:28.068628   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:25.467356   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:27.467782   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:29.467936   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:27.306939   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:29.807156   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:28.540994   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:28.553841   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:28.553904   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:28.588718   73900 cri.go:89] found id: ""
	I0930 21:11:28.588745   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.588754   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:28.588763   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:28.588809   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:28.636210   73900 cri.go:89] found id: ""
	I0930 21:11:28.636237   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.636245   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:28.636250   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:28.636312   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:28.668714   73900 cri.go:89] found id: ""
	I0930 21:11:28.668743   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.668751   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:28.668757   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:28.668804   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:28.700413   73900 cri.go:89] found id: ""
	I0930 21:11:28.700449   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.700462   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:28.700469   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:28.700522   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:28.733409   73900 cri.go:89] found id: ""
	I0930 21:11:28.733433   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.733441   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:28.733446   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:28.733494   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:28.766917   73900 cri.go:89] found id: ""
	I0930 21:11:28.766957   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.766970   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:28.766979   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:28.767046   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:28.801759   73900 cri.go:89] found id: ""
	I0930 21:11:28.801788   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.801798   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:28.801805   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:28.801851   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:28.840724   73900 cri.go:89] found id: ""
	I0930 21:11:28.840761   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.840770   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:28.840790   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:28.840805   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:28.854426   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:28.854465   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:28.926650   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:28.926675   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:28.926690   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:29.005513   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:29.005569   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:29.047077   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:29.047102   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:31.603193   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:31.615563   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:31.615631   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:31.647656   73900 cri.go:89] found id: ""
	I0930 21:11:31.647685   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.647693   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:31.647699   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:31.647748   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:31.680004   73900 cri.go:89] found id: ""
	I0930 21:11:31.680037   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.680048   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:31.680056   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:31.680120   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:31.712562   73900 cri.go:89] found id: ""
	I0930 21:11:31.712588   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.712596   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:31.712602   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:31.712650   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:31.747692   73900 cri.go:89] found id: ""
	I0930 21:11:31.747724   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.747732   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:31.747738   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:31.747803   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:31.781441   73900 cri.go:89] found id: ""
	I0930 21:11:31.781464   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.781472   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:31.781478   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:31.781532   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:31.822227   73900 cri.go:89] found id: ""
	I0930 21:11:31.822252   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.822259   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:31.822265   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:31.822322   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:31.856531   73900 cri.go:89] found id: ""
	I0930 21:11:31.856555   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.856563   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:31.856568   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:31.856631   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:31.894562   73900 cri.go:89] found id: ""
	I0930 21:11:31.894585   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.894593   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:31.894602   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:31.894618   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:31.946233   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:31.946271   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:31.960713   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:31.960744   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:32.036479   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:32.036497   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:32.036509   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:32.111442   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:32.111477   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:30.068975   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:32.069794   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:31.468374   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:33.468986   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:31.809169   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:34.307372   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
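	(Note: the interleaved pod_ready lines come from the other clusters under test (PIDs 73256, 73375, 73707), each polling a metrics-server pod that keeps reporting Ready=False. A hedged kubectl equivalent of that readiness check — pod name and namespace taken from the log, the jsonpath form and <profile> placeholder are illustrative, not minikube's own code — would be:

	    # prints "True" once the pod's Ready condition is met; "False" while it is not
	    kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-hkp9m \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	)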
	I0930 21:11:34.651545   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:34.664058   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:34.664121   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:34.697506   73900 cri.go:89] found id: ""
	I0930 21:11:34.697530   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.697539   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:34.697545   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:34.697599   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:34.730297   73900 cri.go:89] found id: ""
	I0930 21:11:34.730326   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.730334   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:34.730339   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:34.730390   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:34.762251   73900 cri.go:89] found id: ""
	I0930 21:11:34.762278   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.762286   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:34.762291   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:34.762358   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:34.803028   73900 cri.go:89] found id: ""
	I0930 21:11:34.803058   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.803068   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:34.803074   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:34.803122   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:34.840063   73900 cri.go:89] found id: ""
	I0930 21:11:34.840097   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.840110   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:34.840118   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:34.840192   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:34.878641   73900 cri.go:89] found id: ""
	I0930 21:11:34.878675   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.878686   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:34.878693   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:34.878745   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:34.910799   73900 cri.go:89] found id: ""
	I0930 21:11:34.910823   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.910830   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:34.910837   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:34.910899   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:34.947748   73900 cri.go:89] found id: ""
	I0930 21:11:34.947782   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.947795   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:34.947806   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:34.947821   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:35.026490   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:35.026514   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:35.026529   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:35.115504   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:35.115559   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:35.158629   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:35.158659   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:35.211011   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:35.211052   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:37.726260   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:37.739137   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:37.739222   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:34.568166   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:36.569720   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:39.069371   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:35.968574   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:38.467872   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:36.807057   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:38.807376   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:37.779980   73900 cri.go:89] found id: ""
	I0930 21:11:37.780009   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.780018   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:37.780024   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:37.780076   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:37.813936   73900 cri.go:89] found id: ""
	I0930 21:11:37.813961   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.813969   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:37.813975   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:37.814021   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:37.851150   73900 cri.go:89] found id: ""
	I0930 21:11:37.851176   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.851186   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:37.851193   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:37.851256   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:37.891855   73900 cri.go:89] found id: ""
	I0930 21:11:37.891881   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.891889   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:37.891894   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:37.891943   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:37.929234   73900 cri.go:89] found id: ""
	I0930 21:11:37.929269   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.929281   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:37.929288   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:37.929359   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:37.962350   73900 cri.go:89] found id: ""
	I0930 21:11:37.962378   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.962386   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:37.962391   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:37.962441   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:37.996727   73900 cri.go:89] found id: ""
	I0930 21:11:37.996752   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.996760   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:37.996765   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:37.996819   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:38.029959   73900 cri.go:89] found id: ""
	I0930 21:11:38.029991   73900 logs.go:276] 0 containers: []
	W0930 21:11:38.029999   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:38.030008   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:38.030019   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:38.079836   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:38.079875   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:38.093208   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:38.093236   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:38.168839   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:38.168862   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:38.168873   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:38.244747   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:38.244783   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:40.788841   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:40.802419   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:40.802491   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:40.837138   73900 cri.go:89] found id: ""
	I0930 21:11:40.837175   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.837186   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:40.837193   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:40.837255   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:40.870947   73900 cri.go:89] found id: ""
	I0930 21:11:40.870977   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.870987   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:40.870993   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:40.871040   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:40.905004   73900 cri.go:89] found id: ""
	I0930 21:11:40.905033   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.905046   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:40.905053   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:40.905104   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:40.936909   73900 cri.go:89] found id: ""
	I0930 21:11:40.936937   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.936945   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:40.936952   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:40.937015   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:40.972601   73900 cri.go:89] found id: ""
	I0930 21:11:40.972630   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.972641   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:40.972646   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:40.972704   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:41.007539   73900 cri.go:89] found id: ""
	I0930 21:11:41.007583   73900 logs.go:276] 0 containers: []
	W0930 21:11:41.007594   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:41.007602   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:41.007661   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:41.042049   73900 cri.go:89] found id: ""
	I0930 21:11:41.042075   73900 logs.go:276] 0 containers: []
	W0930 21:11:41.042084   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:41.042091   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:41.042153   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:41.075313   73900 cri.go:89] found id: ""
	I0930 21:11:41.075398   73900 logs.go:276] 0 containers: []
	W0930 21:11:41.075414   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:41.075424   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:41.075440   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:41.128683   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:41.128726   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:41.142533   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:41.142560   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:41.210149   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:41.210176   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:41.210191   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:41.286547   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:41.286590   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:41.070042   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:43.570819   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:40.969912   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:43.468434   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:40.808294   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:43.307628   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:43.828902   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:43.842047   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:43.842127   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:43.876147   73900 cri.go:89] found id: ""
	I0930 21:11:43.876177   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.876187   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:43.876194   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:43.876287   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:43.916351   73900 cri.go:89] found id: ""
	I0930 21:11:43.916383   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.916394   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:43.916404   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:43.916457   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:43.948853   73900 cri.go:89] found id: ""
	I0930 21:11:43.948883   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.948894   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:43.948900   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:43.948967   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:43.983525   73900 cri.go:89] found id: ""
	I0930 21:11:43.983577   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.983589   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:43.983597   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:43.983656   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:44.021560   73900 cri.go:89] found id: ""
	I0930 21:11:44.021594   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.021606   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:44.021614   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:44.021684   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:44.057307   73900 cri.go:89] found id: ""
	I0930 21:11:44.057342   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.057353   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:44.057361   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:44.057418   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:44.091120   73900 cri.go:89] found id: ""
	I0930 21:11:44.091145   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.091155   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:44.091162   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:44.091223   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:44.125781   73900 cri.go:89] found id: ""
	I0930 21:11:44.125808   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.125817   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:44.125827   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:44.125842   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:44.138699   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:44.138726   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:44.208976   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:44.209009   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:44.209026   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:44.285552   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:44.285593   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:44.323412   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:44.323449   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:46.875210   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:46.888532   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:46.888596   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:46.921260   73900 cri.go:89] found id: ""
	I0930 21:11:46.921285   73900 logs.go:276] 0 containers: []
	W0930 21:11:46.921293   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:46.921299   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:46.921357   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:46.954645   73900 cri.go:89] found id: ""
	I0930 21:11:46.954675   73900 logs.go:276] 0 containers: []
	W0930 21:11:46.954683   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:46.954688   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:46.954749   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:46.988424   73900 cri.go:89] found id: ""
	I0930 21:11:46.988457   73900 logs.go:276] 0 containers: []
	W0930 21:11:46.988468   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:46.988475   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:46.988535   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:47.022635   73900 cri.go:89] found id: ""
	I0930 21:11:47.022664   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.022675   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:47.022682   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:47.022744   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:47.056497   73900 cri.go:89] found id: ""
	I0930 21:11:47.056523   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.056530   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:47.056536   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:47.056595   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:47.094983   73900 cri.go:89] found id: ""
	I0930 21:11:47.095011   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.095021   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:47.095028   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:47.095097   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:47.147567   73900 cri.go:89] found id: ""
	I0930 21:11:47.147595   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.147606   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:47.147613   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:47.147692   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:47.184878   73900 cri.go:89] found id: ""
	I0930 21:11:47.184908   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.184919   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:47.184930   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:47.184943   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:47.258581   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:47.258615   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:47.303068   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:47.303100   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:47.358749   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:47.358789   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:47.372492   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:47.372531   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:47.443984   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:46.069421   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:48.569013   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:45.968422   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:47.968876   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:45.808341   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:48.306627   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:49.944644   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:49.958045   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:49.958124   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:49.993053   73900 cri.go:89] found id: ""
	I0930 21:11:49.993088   73900 logs.go:276] 0 containers: []
	W0930 21:11:49.993100   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:49.993107   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:49.993168   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:50.026171   73900 cri.go:89] found id: ""
	I0930 21:11:50.026197   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.026205   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:50.026210   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:50.026269   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:50.060462   73900 cri.go:89] found id: ""
	I0930 21:11:50.060492   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.060502   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:50.060509   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:50.060567   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:50.095385   73900 cri.go:89] found id: ""
	I0930 21:11:50.095414   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.095425   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:50.095432   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:50.095507   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:50.127275   73900 cri.go:89] found id: ""
	I0930 21:11:50.127300   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.127308   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:50.127318   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:50.127378   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:50.159810   73900 cri.go:89] found id: ""
	I0930 21:11:50.159836   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.159845   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:50.159850   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:50.159906   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:50.191651   73900 cri.go:89] found id: ""
	I0930 21:11:50.191684   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.191695   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:50.191702   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:50.191774   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:50.225772   73900 cri.go:89] found id: ""
	I0930 21:11:50.225799   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.225809   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:50.225819   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:50.225837   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:50.310189   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:50.310223   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:50.348934   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:50.348965   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:50.400666   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:50.400703   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:50.415810   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:50.415843   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:50.483773   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
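	(Note: every describe-nodes attempt fails the same way: nothing is listening on localhost:8443, which is consistent with the empty kube-apiserver container listings above. An illustrative check from the guest — assuming curl is present on the VM; this is not part of the test itself — would be:

	    # expect a connection-refused failure while the apiserver is down
	    curl -ks https://localhost:8443/healthz || echo "apiserver not reachable"
	    # and the apiserver container listing stays empty, matching the log
	    sudo crictl ps -a --name=kube-apiserver
	)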
	I0930 21:11:51.069928   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:53.070065   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:50.469516   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:52.968367   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:54.968624   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:50.307903   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:52.807610   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:52.984701   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:52.997669   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:52.997745   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:53.034012   73900 cri.go:89] found id: ""
	I0930 21:11:53.034044   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.034055   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:53.034063   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:53.034121   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:53.068192   73900 cri.go:89] found id: ""
	I0930 21:11:53.068215   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.068222   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:53.068228   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:53.068285   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:53.104683   73900 cri.go:89] found id: ""
	I0930 21:11:53.104710   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.104719   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:53.104724   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:53.104778   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:53.138713   73900 cri.go:89] found id: ""
	I0930 21:11:53.138745   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.138753   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:53.138759   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:53.138814   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:53.173955   73900 cri.go:89] found id: ""
	I0930 21:11:53.173982   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.173994   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:53.174001   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:53.174060   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:53.205942   73900 cri.go:89] found id: ""
	I0930 21:11:53.205970   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.205980   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:53.205987   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:53.206052   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:53.241739   73900 cri.go:89] found id: ""
	I0930 21:11:53.241767   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.241776   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:53.241782   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:53.241832   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:53.275328   73900 cri.go:89] found id: ""
	I0930 21:11:53.275363   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.275372   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:53.275381   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:53.275397   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:53.313732   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:53.313761   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:53.364974   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:53.365011   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:53.377970   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:53.377999   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:53.445341   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:53.445370   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:53.445388   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:56.025958   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:56.038367   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:56.038434   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:56.074721   73900 cri.go:89] found id: ""
	I0930 21:11:56.074756   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.074767   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:56.074781   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:56.074846   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:56.111491   73900 cri.go:89] found id: ""
	I0930 21:11:56.111525   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.111550   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:56.111572   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:56.111626   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:56.145660   73900 cri.go:89] found id: ""
	I0930 21:11:56.145690   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.145701   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:56.145708   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:56.145769   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:56.180865   73900 cri.go:89] found id: ""
	I0930 21:11:56.180891   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.180901   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:56.180908   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:56.180971   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:56.213681   73900 cri.go:89] found id: ""
	I0930 21:11:56.213707   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.213716   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:56.213721   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:56.213772   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:56.246683   73900 cri.go:89] found id: ""
	I0930 21:11:56.246711   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.246719   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:56.246724   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:56.246774   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:56.279651   73900 cri.go:89] found id: ""
	I0930 21:11:56.279679   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.279687   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:56.279692   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:56.279746   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:56.316701   73900 cri.go:89] found id: ""
	I0930 21:11:56.316727   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.316735   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:56.316743   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:56.316753   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:56.329879   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:56.329905   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:56.399919   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:56.399949   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:56.399964   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:56.480200   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:56.480237   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:56.517755   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:56.517782   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:55.568782   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:58.068718   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:57.468492   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:59.968123   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:55.307809   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:57.308095   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:59.807355   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:59.070677   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:59.085884   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:59.085956   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:59.119580   73900 cri.go:89] found id: ""
	I0930 21:11:59.119606   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.119615   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:59.119621   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:59.119667   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:59.152087   73900 cri.go:89] found id: ""
	I0930 21:11:59.152111   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.152120   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:59.152127   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:59.152172   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:59.186177   73900 cri.go:89] found id: ""
	I0930 21:11:59.186205   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.186213   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:59.186220   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:59.186276   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:59.218800   73900 cri.go:89] found id: ""
	I0930 21:11:59.218821   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.218829   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:59.218835   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:59.218893   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:59.254335   73900 cri.go:89] found id: ""
	I0930 21:11:59.254361   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.254372   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:59.254378   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:59.254432   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:59.292406   73900 cri.go:89] found id: ""
	I0930 21:11:59.292441   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.292453   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:59.292460   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:59.292522   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:59.333352   73900 cri.go:89] found id: ""
	I0930 21:11:59.333388   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.333399   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:59.333406   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:59.333481   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:59.377031   73900 cri.go:89] found id: ""
	I0930 21:11:59.377056   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.377064   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:59.377072   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:59.377084   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:59.392626   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:59.392655   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:59.473714   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:59.473741   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:59.473754   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:59.548895   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:59.548931   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:59.589007   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:59.589039   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:02.139243   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:02.152335   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:02.152415   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:02.186942   73900 cri.go:89] found id: ""
	I0930 21:12:02.186980   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.186991   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:02.186999   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:02.187061   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:02.219738   73900 cri.go:89] found id: ""
	I0930 21:12:02.219759   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.219768   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:02.219773   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:02.219820   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:02.253667   73900 cri.go:89] found id: ""
	I0930 21:12:02.253698   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.253707   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:02.253712   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:02.253760   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:02.290078   73900 cri.go:89] found id: ""
	I0930 21:12:02.290105   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.290115   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:02.290122   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:02.290182   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:02.326408   73900 cri.go:89] found id: ""
	I0930 21:12:02.326436   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.326448   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:02.326455   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:02.326509   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:02.360608   73900 cri.go:89] found id: ""
	I0930 21:12:02.360641   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.360649   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:02.360655   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:02.360714   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:02.396140   73900 cri.go:89] found id: ""
	I0930 21:12:02.396166   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.396176   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:02.396182   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:02.396236   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:02.429905   73900 cri.go:89] found id: ""
	I0930 21:12:02.429947   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.429958   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:02.429968   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:02.429986   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:02.506600   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:02.506645   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:02.549325   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:02.549354   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:02.603614   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:02.603659   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:02.618832   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:02.618859   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:02.692491   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:00.070569   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:02.569436   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:01.968240   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:04.468583   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:02.306973   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:04.308182   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:05.193131   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:05.206133   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:05.206192   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:05.238403   73900 cri.go:89] found id: ""
	I0930 21:12:05.238431   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.238439   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:05.238447   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:05.238523   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:05.271261   73900 cri.go:89] found id: ""
	I0930 21:12:05.271290   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.271303   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:05.271310   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:05.271378   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:05.307718   73900 cri.go:89] found id: ""
	I0930 21:12:05.307749   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.307760   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:05.307767   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:05.307832   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:05.341336   73900 cri.go:89] found id: ""
	I0930 21:12:05.341379   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.341390   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:05.341398   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:05.341461   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:05.374998   73900 cri.go:89] found id: ""
	I0930 21:12:05.375024   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.375032   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:05.375037   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:05.375085   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:05.410133   73900 cri.go:89] found id: ""
	I0930 21:12:05.410163   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.410174   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:05.410182   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:05.410248   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:05.446197   73900 cri.go:89] found id: ""
	I0930 21:12:05.446227   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.446238   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:05.446246   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:05.446305   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:05.480638   73900 cri.go:89] found id: ""
	I0930 21:12:05.480667   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.480683   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:05.480691   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:05.480702   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:05.532473   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:05.532512   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:05.547068   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:05.547096   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:05.621444   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:05.621472   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:05.621487   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:05.707712   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:05.707767   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:05.068363   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:07.069531   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:06.969695   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:06.969727   73375 pod_ready.go:82] duration metric: took 4m0.008001407s for pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace to be "Ready" ...
	E0930 21:12:06.969736   73375 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0930 21:12:06.969743   73375 pod_ready.go:39] duration metric: took 4m4.053054405s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:12:06.969757   73375 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:12:06.969781   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:06.969835   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:07.024708   73375 cri.go:89] found id: "249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:07.024730   73375 cri.go:89] found id: ""
	I0930 21:12:07.024737   73375 logs.go:276] 1 containers: [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122]
	I0930 21:12:07.024805   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.029375   73375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:07.029439   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:07.063656   73375 cri.go:89] found id: "e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:07.063684   73375 cri.go:89] found id: ""
	I0930 21:12:07.063695   73375 logs.go:276] 1 containers: [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c]
	I0930 21:12:07.063754   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.068071   73375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:07.068126   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:07.102636   73375 cri.go:89] found id: "d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:07.102665   73375 cri.go:89] found id: ""
	I0930 21:12:07.102675   73375 logs.go:276] 1 containers: [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7]
	I0930 21:12:07.102733   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.106711   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:07.106791   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:07.142676   73375 cri.go:89] found id: "438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:07.142698   73375 cri.go:89] found id: ""
	I0930 21:12:07.142708   73375 logs.go:276] 1 containers: [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c]
	I0930 21:12:07.142766   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.146979   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:07.147041   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:07.189192   73375 cri.go:89] found id: "a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:07.189223   73375 cri.go:89] found id: ""
	I0930 21:12:07.189232   73375 logs.go:276] 1 containers: [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f]
	I0930 21:12:07.189283   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.193408   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:07.193484   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:07.230538   73375 cri.go:89] found id: "1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:07.230562   73375 cri.go:89] found id: ""
	I0930 21:12:07.230571   73375 logs.go:276] 1 containers: [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf]
	I0930 21:12:07.230630   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.235482   73375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:07.235573   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:07.274180   73375 cri.go:89] found id: ""
	I0930 21:12:07.274215   73375 logs.go:276] 0 containers: []
	W0930 21:12:07.274226   73375 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:07.274233   73375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:07.274312   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:07.312851   73375 cri.go:89] found id: "6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:07.312876   73375 cri.go:89] found id: "298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:07.312882   73375 cri.go:89] found id: ""
	I0930 21:12:07.312890   73375 logs.go:276] 2 containers: [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e]
	I0930 21:12:07.312947   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.317386   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.321912   73375 logs.go:123] Gathering logs for kube-proxy [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f] ...
	I0930 21:12:07.321940   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:07.361674   73375 logs.go:123] Gathering logs for storage-provisioner [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55] ...
	I0930 21:12:07.361701   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:07.398555   73375 logs.go:123] Gathering logs for storage-provisioner [298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e] ...
	I0930 21:12:07.398615   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:07.432511   73375 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:07.432540   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:07.919639   73375 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:07.919678   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:07.935038   73375 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:07.935067   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:08.059404   73375 logs.go:123] Gathering logs for kube-apiserver [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122] ...
	I0930 21:12:08.059435   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:08.114569   73375 logs.go:123] Gathering logs for kube-scheduler [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c] ...
	I0930 21:12:08.114605   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:08.153409   73375 logs.go:123] Gathering logs for container status ...
	I0930 21:12:08.153447   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:08.193155   73375 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:08.193187   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:08.260774   73375 logs.go:123] Gathering logs for etcd [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c] ...
	I0930 21:12:08.260814   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:08.351488   73375 logs.go:123] Gathering logs for coredns [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7] ...
	I0930 21:12:08.351519   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:08.387971   73375 logs.go:123] Gathering logs for kube-controller-manager [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf] ...
	I0930 21:12:08.388012   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:06.805971   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:08.807886   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:08.248038   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:08.261409   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:08.261485   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:08.305564   73900 cri.go:89] found id: ""
	I0930 21:12:08.305591   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.305601   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:08.305610   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:08.305669   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:08.347816   73900 cri.go:89] found id: ""
	I0930 21:12:08.347844   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.347852   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:08.347858   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:08.347927   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:08.381662   73900 cri.go:89] found id: ""
	I0930 21:12:08.381695   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.381705   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:08.381712   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:08.381829   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:08.427366   73900 cri.go:89] found id: ""
	I0930 21:12:08.427396   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.427406   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:08.427413   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:08.427476   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:08.463419   73900 cri.go:89] found id: ""
	I0930 21:12:08.463443   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.463451   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:08.463457   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:08.463508   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:08.496999   73900 cri.go:89] found id: ""
	I0930 21:12:08.497023   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.497033   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:08.497040   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:08.497098   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:08.530410   73900 cri.go:89] found id: ""
	I0930 21:12:08.530434   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.530442   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:08.530447   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:08.530495   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:08.563191   73900 cri.go:89] found id: ""
	I0930 21:12:08.563224   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.563235   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:08.563244   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:08.563258   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:08.640305   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:08.640341   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:08.676404   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:08.676431   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:08.729676   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:08.729736   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:08.743282   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:08.743310   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:08.811334   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:11.311643   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:11.329153   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:11.329229   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:11.369804   73900 cri.go:89] found id: ""
	I0930 21:12:11.369829   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.369838   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:11.369843   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:11.369896   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:11.408530   73900 cri.go:89] found id: ""
	I0930 21:12:11.408558   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.408569   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:11.408580   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:11.408663   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:11.446123   73900 cri.go:89] found id: ""
	I0930 21:12:11.446147   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.446155   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:11.446160   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:11.446206   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:11.484019   73900 cri.go:89] found id: ""
	I0930 21:12:11.484044   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.484052   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:11.484057   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:11.484118   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:11.521934   73900 cri.go:89] found id: ""
	I0930 21:12:11.521961   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.521971   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:11.521979   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:11.522042   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:11.561253   73900 cri.go:89] found id: ""
	I0930 21:12:11.561283   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.561293   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:11.561299   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:11.561352   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:11.602610   73900 cri.go:89] found id: ""
	I0930 21:12:11.602637   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.602648   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:11.602655   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:11.602760   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:11.637146   73900 cri.go:89] found id: ""
	I0930 21:12:11.637174   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.637185   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:11.637194   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:11.637208   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:11.707627   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:11.707651   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:11.707668   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:11.786047   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:11.786091   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:11.827128   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:11.827157   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:11.885504   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:11.885542   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:09.569584   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:11.570031   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:14.068184   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:10.950921   73375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:10.967834   73375 api_server.go:72] duration metric: took 4m15.348038807s to wait for apiserver process to appear ...
	I0930 21:12:10.967876   73375 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:12:10.967922   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:10.967990   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:11.006632   73375 cri.go:89] found id: "249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:11.006667   73375 cri.go:89] found id: ""
	I0930 21:12:11.006677   73375 logs.go:276] 1 containers: [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122]
	I0930 21:12:11.006738   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.010931   73375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:11.010994   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:11.045855   73375 cri.go:89] found id: "e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:11.045882   73375 cri.go:89] found id: ""
	I0930 21:12:11.045893   73375 logs.go:276] 1 containers: [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c]
	I0930 21:12:11.045953   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.050058   73375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:11.050134   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:11.090954   73375 cri.go:89] found id: "d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:11.090980   73375 cri.go:89] found id: ""
	I0930 21:12:11.090990   73375 logs.go:276] 1 containers: [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7]
	I0930 21:12:11.091041   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.095073   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:11.095150   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:11.137413   73375 cri.go:89] found id: "438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:11.137448   73375 cri.go:89] found id: ""
	I0930 21:12:11.137458   73375 logs.go:276] 1 containers: [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c]
	I0930 21:12:11.137516   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.141559   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:11.141638   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:11.176921   73375 cri.go:89] found id: "a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:11.176952   73375 cri.go:89] found id: ""
	I0930 21:12:11.176961   73375 logs.go:276] 1 containers: [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f]
	I0930 21:12:11.177010   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.181095   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:11.181158   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:11.215117   73375 cri.go:89] found id: "1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:11.215141   73375 cri.go:89] found id: ""
	I0930 21:12:11.215148   73375 logs.go:276] 1 containers: [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf]
	I0930 21:12:11.215195   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.218947   73375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:11.219003   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:11.253901   73375 cri.go:89] found id: ""
	I0930 21:12:11.253937   73375 logs.go:276] 0 containers: []
	W0930 21:12:11.253948   73375 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:11.253955   73375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:11.254010   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:11.293408   73375 cri.go:89] found id: "6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:11.293434   73375 cri.go:89] found id: "298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:11.293440   73375 cri.go:89] found id: ""
	I0930 21:12:11.293448   73375 logs.go:276] 2 containers: [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e]
	I0930 21:12:11.293562   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.297829   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.302572   73375 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:11.302596   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:11.378000   73375 logs.go:123] Gathering logs for coredns [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7] ...
	I0930 21:12:11.378037   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:11.415382   73375 logs.go:123] Gathering logs for kube-proxy [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f] ...
	I0930 21:12:11.415414   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:11.453703   73375 logs.go:123] Gathering logs for kube-controller-manager [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf] ...
	I0930 21:12:11.453729   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:11.517749   73375 logs.go:123] Gathering logs for storage-provisioner [298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e] ...
	I0930 21:12:11.517780   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:11.556543   73375 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:11.556576   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:12.023270   73375 logs.go:123] Gathering logs for container status ...
	I0930 21:12:12.023310   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:12.071138   73375 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:12.071170   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:12.086915   73375 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:12.086944   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:12.200046   73375 logs.go:123] Gathering logs for kube-apiserver [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122] ...
	I0930 21:12:12.200077   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:12.241447   73375 logs.go:123] Gathering logs for etcd [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c] ...
	I0930 21:12:12.241475   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:12.296574   73375 logs.go:123] Gathering logs for kube-scheduler [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c] ...
	I0930 21:12:12.296607   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:12.341982   73375 logs.go:123] Gathering logs for storage-provisioner [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55] ...
	I0930 21:12:12.342009   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:14.877590   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:12:14.882913   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 200:
	ok
	I0930 21:12:14.884088   73375 api_server.go:141] control plane version: v1.31.1
	I0930 21:12:14.884106   73375 api_server.go:131] duration metric: took 3.916223308s to wait for apiserver health ...
	I0930 21:12:14.884113   73375 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:12:14.884134   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:14.884185   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:14.926932   73375 cri.go:89] found id: "249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:14.926952   73375 cri.go:89] found id: ""
	I0930 21:12:14.926960   73375 logs.go:276] 1 containers: [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122]
	I0930 21:12:14.927003   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:14.931044   73375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:14.931106   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:14.967622   73375 cri.go:89] found id: "e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:14.967645   73375 cri.go:89] found id: ""
	I0930 21:12:14.967652   73375 logs.go:276] 1 containers: [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c]
	I0930 21:12:14.967698   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:14.972152   73375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:14.972221   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:11.307501   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:13.307687   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:14.400848   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:14.413794   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:14.413882   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:14.449799   73900 cri.go:89] found id: ""
	I0930 21:12:14.449830   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.449841   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:14.449849   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:14.449902   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:14.486301   73900 cri.go:89] found id: ""
	I0930 21:12:14.486330   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.486357   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:14.486365   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:14.486427   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:14.520451   73900 cri.go:89] found id: ""
	I0930 21:12:14.520479   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.520487   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:14.520497   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:14.520558   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:14.554056   73900 cri.go:89] found id: ""
	I0930 21:12:14.554095   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.554107   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:14.554114   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:14.554178   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:14.594054   73900 cri.go:89] found id: ""
	I0930 21:12:14.594080   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.594088   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:14.594094   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:14.594142   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:14.630225   73900 cri.go:89] found id: ""
	I0930 21:12:14.630255   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.630278   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:14.630284   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:14.630335   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:14.663006   73900 cri.go:89] found id: ""
	I0930 21:12:14.663043   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.663054   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:14.663061   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:14.663119   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:14.699815   73900 cri.go:89] found id: ""
	I0930 21:12:14.699845   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.699858   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:14.699870   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:14.699886   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:14.751465   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:14.751509   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:14.766401   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:14.766432   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:14.832979   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:14.833002   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:14.833016   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:14.918011   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:14.918051   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:17.458886   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:17.471833   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:17.471918   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:17.505109   73900 cri.go:89] found id: ""
	I0930 21:12:17.505135   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.505145   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:17.505151   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:17.505213   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:17.538091   73900 cri.go:89] found id: ""
	I0930 21:12:17.538118   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.538129   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:17.538136   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:17.538308   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:17.571668   73900 cri.go:89] found id: ""
	I0930 21:12:17.571694   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.571705   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:17.571712   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:17.571770   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:17.607391   73900 cri.go:89] found id: ""
	I0930 21:12:17.607431   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.607442   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:17.607452   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:17.607519   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:17.643271   73900 cri.go:89] found id: ""
	I0930 21:12:17.643297   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.643305   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:17.643313   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:17.643382   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:17.676653   73900 cri.go:89] found id: ""
	I0930 21:12:17.676687   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.676698   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:17.676708   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:17.676772   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:17.709570   73900 cri.go:89] found id: ""
	I0930 21:12:17.709602   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.709610   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:17.709615   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:17.709671   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:17.747857   73900 cri.go:89] found id: ""
	I0930 21:12:17.747883   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.747891   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:17.747902   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:17.747915   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:15.010874   73375 cri.go:89] found id: "d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:15.010898   73375 cri.go:89] found id: ""
	I0930 21:12:15.010905   73375 logs.go:276] 1 containers: [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7]
	I0930 21:12:15.010947   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.015490   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:15.015582   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:15.051182   73375 cri.go:89] found id: "438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:15.051210   73375 cri.go:89] found id: ""
	I0930 21:12:15.051220   73375 logs.go:276] 1 containers: [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c]
	I0930 21:12:15.051291   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.055057   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:15.055107   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:15.093126   73375 cri.go:89] found id: "a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:15.093150   73375 cri.go:89] found id: ""
	I0930 21:12:15.093159   73375 logs.go:276] 1 containers: [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f]
	I0930 21:12:15.093214   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.097138   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:15.097200   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:15.131676   73375 cri.go:89] found id: "1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:15.131704   73375 cri.go:89] found id: ""
	I0930 21:12:15.131716   73375 logs.go:276] 1 containers: [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf]
	I0930 21:12:15.131773   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.135550   73375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:15.135620   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:15.170579   73375 cri.go:89] found id: ""
	I0930 21:12:15.170604   73375 logs.go:276] 0 containers: []
	W0930 21:12:15.170612   73375 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:15.170618   73375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:15.170672   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:15.205190   73375 cri.go:89] found id: "6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:15.205216   73375 cri.go:89] found id: "298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:15.205222   73375 cri.go:89] found id: ""
	I0930 21:12:15.205231   73375 logs.go:276] 2 containers: [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e]
	I0930 21:12:15.205287   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.209426   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.212981   73375 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:15.213002   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:15.281543   73375 logs.go:123] Gathering logs for kube-proxy [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f] ...
	I0930 21:12:15.281582   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:15.325855   73375 logs.go:123] Gathering logs for container status ...
	I0930 21:12:15.325895   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:15.367382   73375 logs.go:123] Gathering logs for etcd [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c] ...
	I0930 21:12:15.367429   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:15.441395   73375 logs.go:123] Gathering logs for coredns [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7] ...
	I0930 21:12:15.441432   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:15.482487   73375 logs.go:123] Gathering logs for kube-scheduler [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c] ...
	I0930 21:12:15.482518   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:15.520298   73375 logs.go:123] Gathering logs for kube-controller-manager [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf] ...
	I0930 21:12:15.520335   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:15.572596   73375 logs.go:123] Gathering logs for storage-provisioner [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55] ...
	I0930 21:12:15.572626   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:15.618087   73375 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:15.618120   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:15.634125   73375 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:15.634151   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:15.744355   73375 logs.go:123] Gathering logs for kube-apiserver [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122] ...
	I0930 21:12:15.744390   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:15.799312   73375 logs.go:123] Gathering logs for storage-provisioner [298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e] ...
	I0930 21:12:15.799345   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:15.838934   73375 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:15.838969   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
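
The block above repeatedly resolves container IDs with `crictl ps -a --quiet --name=<component>` and then tails each container with `crictl logs --tail 400 <id>`. A minimal Go sketch of that two-step pattern is shown below; it shells out locally with os/exec instead of minikube's ssh_runner, and it assumes `crictl` is installed and runnable via sudo on the host (the helper name `gatherComponentLogs` is illustrative, not minikube's API).

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherComponentLogs mirrors the ps-then-logs pattern from the log above:
// resolve container IDs for a component name, then tail each container's logs.
func gatherComponentLogs(component string, tail int) (map[string]string, error) {
	// Step 1: list container IDs (all states) matching the component name.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps for %q: %w", component, err)
	}
	logs := make(map[string]string)
	for _, id := range strings.Fields(string(out)) {
		// Step 2: tail the last N lines of each container's log.
		buf, err := exec.Command("sudo", "crictl", "logs", fmt.Sprintf("--tail=%d", tail), id).CombinedOutput()
		if err != nil {
			return nil, fmt.Errorf("crictl logs %s: %w", id, err)
		}
		logs[id] = string(buf)
	}
	return logs, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		if out, err := gatherComponentLogs(c, 400); err != nil {
			fmt.Println("error:", err)
		} else {
			fmt.Printf("%s: %d container(s)\n", c, len(out))
		}
	}
}
```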
	I0930 21:12:18.759947   73375 system_pods.go:59] 8 kube-system pods found
	I0930 21:12:18.759976   73375 system_pods.go:61] "coredns-7c65d6cfc9-jg8ph" [46ba2867-485a-4b67-af4b-4de2c607d172] Running
	I0930 21:12:18.759981   73375 system_pods.go:61] "etcd-no-preload-997816" [1def50bb-1f1b-4d25-b797-38d5b782a674] Running
	I0930 21:12:18.759985   73375 system_pods.go:61] "kube-apiserver-no-preload-997816" [67313588-adcb-4d3f-ba8a-4e7a1ea5127b] Running
	I0930 21:12:18.759989   73375 system_pods.go:61] "kube-controller-manager-no-preload-997816" [b471888b-d4e6-4768-a246-f234ffcbf1c6] Running
	I0930 21:12:18.759992   73375 system_pods.go:61] "kube-proxy-klcv8" [133bcd7f-667d-4969-b063-d33e2c8eed0f] Running
	I0930 21:12:18.759995   73375 system_pods.go:61] "kube-scheduler-no-preload-997816" [130a7a05-0889-4562-afc6-bee3ba4970a1] Running
	I0930 21:12:18.760001   73375 system_pods.go:61] "metrics-server-6867b74b74-c2wpn" [2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:12:18.760006   73375 system_pods.go:61] "storage-provisioner" [01617edf-b831-48d3-9002-279b64f6389c] Running
	I0930 21:12:18.760016   73375 system_pods.go:74] duration metric: took 3.875896906s to wait for pod list to return data ...
	I0930 21:12:18.760024   73375 default_sa.go:34] waiting for default service account to be created ...
	I0930 21:12:18.762755   73375 default_sa.go:45] found service account: "default"
	I0930 21:12:18.762777   73375 default_sa.go:55] duration metric: took 2.746721ms for default service account to be created ...
	I0930 21:12:18.762787   73375 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 21:12:18.769060   73375 system_pods.go:86] 8 kube-system pods found
	I0930 21:12:18.769086   73375 system_pods.go:89] "coredns-7c65d6cfc9-jg8ph" [46ba2867-485a-4b67-af4b-4de2c607d172] Running
	I0930 21:12:18.769091   73375 system_pods.go:89] "etcd-no-preload-997816" [1def50bb-1f1b-4d25-b797-38d5b782a674] Running
	I0930 21:12:18.769095   73375 system_pods.go:89] "kube-apiserver-no-preload-997816" [67313588-adcb-4d3f-ba8a-4e7a1ea5127b] Running
	I0930 21:12:18.769099   73375 system_pods.go:89] "kube-controller-manager-no-preload-997816" [b471888b-d4e6-4768-a246-f234ffcbf1c6] Running
	I0930 21:12:18.769104   73375 system_pods.go:89] "kube-proxy-klcv8" [133bcd7f-667d-4969-b063-d33e2c8eed0f] Running
	I0930 21:12:18.769107   73375 system_pods.go:89] "kube-scheduler-no-preload-997816" [130a7a05-0889-4562-afc6-bee3ba4970a1] Running
	I0930 21:12:18.769113   73375 system_pods.go:89] "metrics-server-6867b74b74-c2wpn" [2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:12:18.769129   73375 system_pods.go:89] "storage-provisioner" [01617edf-b831-48d3-9002-279b64f6389c] Running
	I0930 21:12:18.769136   73375 system_pods.go:126] duration metric: took 6.344583ms to wait for k8s-apps to be running ...
	I0930 21:12:18.769144   73375 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 21:12:18.769183   73375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:12:18.785488   73375 system_svc.go:56] duration metric: took 16.335135ms WaitForService to wait for kubelet
	I0930 21:12:18.785544   73375 kubeadm.go:582] duration metric: took 4m23.165751441s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:12:18.785572   73375 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:12:18.789308   73375 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:12:18.789340   73375 node_conditions.go:123] node cpu capacity is 2
	I0930 21:12:18.789356   73375 node_conditions.go:105] duration metric: took 3.778609ms to run NodePressure ...
	I0930 21:12:18.789370   73375 start.go:241] waiting for startup goroutines ...
	I0930 21:12:18.789379   73375 start.go:246] waiting for cluster config update ...
	I0930 21:12:18.789394   73375 start.go:255] writing updated cluster config ...
	I0930 21:12:18.789688   73375 ssh_runner.go:195] Run: rm -f paused
	I0930 21:12:18.837384   73375 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 21:12:18.839699   73375 out.go:177] * Done! kubectl is now configured to use "no-preload-997816" cluster and "default" namespace by default
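
Before declaring the cluster ready, the run above waits for the eight kube-system pods, the default service account, an active kubelet, and NodePressure data. A rough client-go sketch of just the pod portion of that check follows; it assumes a kubeconfig at ~/.kube/config rather than minikube's own config handling.

```go
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List every pod in kube-system and report the ones that are not Running,
	// similar in spirit to the system_pods wait in the log above.
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			fmt.Printf("%s is %s\n", p.Name, p.Status.Phase)
		}
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
}
```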
	I0930 21:12:16.070108   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:18.569568   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:15.308534   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:15.308581   73707 pod_ready.go:82] duration metric: took 4m0.007893146s for pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace to be "Ready" ...
	E0930 21:12:15.308595   73707 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0930 21:12:15.308605   73707 pod_ready.go:39] duration metric: took 4m2.806797001s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
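
The pod_ready lines above poll a single pod's Ready condition until a 4-minute deadline expires, at which point the wait gives up with "context deadline exceeded". A small sketch of that style of wait, using a plain context deadline and client-go Get calls, could look like the helper below (the package and function names are placeholders; the caller would pass a context created with context.WithTimeout of roughly four minutes).

```go
package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitPodReady polls the named pod until its Ready condition is True or the
// context deadline expires, roughly mirroring the pod_ready wait in the log.
func WaitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			// Same terminal state as the log above, e.g. context deadline exceeded.
			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}
```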
	I0930 21:12:15.308621   73707 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:12:15.308657   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:15.308722   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:15.353287   73707 cri.go:89] found id: "f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:15.353348   73707 cri.go:89] found id: ""
	I0930 21:12:15.353359   73707 logs.go:276] 1 containers: [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140]
	I0930 21:12:15.353416   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.357602   73707 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:15.357696   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:15.399289   73707 cri.go:89] found id: "7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:15.399325   73707 cri.go:89] found id: ""
	I0930 21:12:15.399332   73707 logs.go:276] 1 containers: [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711]
	I0930 21:12:15.399377   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.404757   73707 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:15.404832   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:15.454396   73707 cri.go:89] found id: "ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:15.454423   73707 cri.go:89] found id: ""
	I0930 21:12:15.454433   73707 logs.go:276] 1 containers: [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49]
	I0930 21:12:15.454493   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.458660   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:15.458743   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:15.493941   73707 cri.go:89] found id: "0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:15.493971   73707 cri.go:89] found id: ""
	I0930 21:12:15.493982   73707 logs.go:276] 1 containers: [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4]
	I0930 21:12:15.494055   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.498541   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:15.498628   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:15.535354   73707 cri.go:89] found id: "5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:15.535385   73707 cri.go:89] found id: ""
	I0930 21:12:15.535395   73707 logs.go:276] 1 containers: [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8]
	I0930 21:12:15.535454   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.540097   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:15.540168   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:15.583969   73707 cri.go:89] found id: "d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:15.583996   73707 cri.go:89] found id: ""
	I0930 21:12:15.584003   73707 logs.go:276] 1 containers: [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8]
	I0930 21:12:15.584051   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.589193   73707 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:15.589260   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:15.629413   73707 cri.go:89] found id: ""
	I0930 21:12:15.629440   73707 logs.go:276] 0 containers: []
	W0930 21:12:15.629449   73707 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:15.629454   73707 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:15.629506   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:15.670129   73707 cri.go:89] found id: "3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:15.670160   73707 cri.go:89] found id: "1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:15.670166   73707 cri.go:89] found id: ""
	I0930 21:12:15.670175   73707 logs.go:276] 2 containers: [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342]
	I0930 21:12:15.670237   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.674227   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.678252   73707 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:15.678276   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:15.758280   73707 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:15.758319   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:15.778191   73707 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:15.778222   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:15.930379   73707 logs.go:123] Gathering logs for coredns [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49] ...
	I0930 21:12:15.930422   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:15.966732   73707 logs.go:123] Gathering logs for storage-provisioner [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd] ...
	I0930 21:12:15.966759   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:16.004304   73707 logs.go:123] Gathering logs for storage-provisioner [1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342] ...
	I0930 21:12:16.004337   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:16.043705   73707 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:16.043733   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:16.600173   73707 logs.go:123] Gathering logs for container status ...
	I0930 21:12:16.600210   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:16.651837   73707 logs.go:123] Gathering logs for kube-apiserver [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140] ...
	I0930 21:12:16.651868   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:16.695122   73707 logs.go:123] Gathering logs for etcd [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711] ...
	I0930 21:12:16.695155   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:16.737622   73707 logs.go:123] Gathering logs for kube-scheduler [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4] ...
	I0930 21:12:16.737671   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:16.772913   73707 logs.go:123] Gathering logs for kube-proxy [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8] ...
	I0930 21:12:16.772944   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:16.808196   73707 logs.go:123] Gathering logs for kube-controller-manager [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8] ...
	I0930 21:12:16.808224   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:19.368150   73707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:19.385771   73707 api_server.go:72] duration metric: took 4m14.101602019s to wait for apiserver process to appear ...
	I0930 21:12:19.385798   73707 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:12:19.385831   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:19.385889   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:19.421325   73707 cri.go:89] found id: "f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:19.421354   73707 cri.go:89] found id: ""
	I0930 21:12:19.421364   73707 logs.go:276] 1 containers: [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140]
	I0930 21:12:19.421426   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.428045   73707 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:19.428107   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:19.466034   73707 cri.go:89] found id: "7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:19.466054   73707 cri.go:89] found id: ""
	I0930 21:12:19.466061   73707 logs.go:276] 1 containers: [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711]
	I0930 21:12:19.466102   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.470155   73707 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:19.470222   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:19.504774   73707 cri.go:89] found id: "ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:19.504799   73707 cri.go:89] found id: ""
	I0930 21:12:19.504806   73707 logs.go:276] 1 containers: [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49]
	I0930 21:12:19.504869   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.509044   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:19.509134   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:19.544204   73707 cri.go:89] found id: "0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:19.544228   73707 cri.go:89] found id: ""
	I0930 21:12:19.544235   73707 logs.go:276] 1 containers: [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4]
	I0930 21:12:19.544293   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.549103   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:19.549194   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:19.591381   73707 cri.go:89] found id: "5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:19.591416   73707 cri.go:89] found id: ""
	I0930 21:12:19.591425   73707 logs.go:276] 1 containers: [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8]
	I0930 21:12:19.591472   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.595522   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:19.595621   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:19.634816   73707 cri.go:89] found id: "d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:19.634841   73707 cri.go:89] found id: ""
	I0930 21:12:19.634850   73707 logs.go:276] 1 containers: [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8]
	I0930 21:12:19.634894   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.639391   73707 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:19.639450   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:19.675056   73707 cri.go:89] found id: ""
	I0930 21:12:19.675084   73707 logs.go:276] 0 containers: []
	W0930 21:12:19.675095   73707 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:19.675102   73707 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:19.675159   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:19.708641   73707 cri.go:89] found id: "3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:19.708666   73707 cri.go:89] found id: "1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:19.708672   73707 cri.go:89] found id: ""
	I0930 21:12:19.708682   73707 logs.go:276] 2 containers: [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342]
	I0930 21:12:19.708738   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.712636   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.716653   73707 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:19.716680   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:19.785159   73707 logs.go:123] Gathering logs for kube-proxy [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8] ...
	I0930 21:12:19.785203   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:19.823462   73707 logs.go:123] Gathering logs for storage-provisioner [1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342] ...
	I0930 21:12:19.823490   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:19.856776   73707 logs.go:123] Gathering logs for coredns [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49] ...
	I0930 21:12:19.856808   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:19.893919   73707 logs.go:123] Gathering logs for kube-scheduler [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4] ...
	I0930 21:12:19.893948   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:19.930932   73707 logs.go:123] Gathering logs for kube-controller-manager [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8] ...
	I0930 21:12:19.930978   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:19.988120   73707 logs.go:123] Gathering logs for storage-provisioner [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd] ...
	I0930 21:12:19.988164   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:20.027576   73707 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:20.027618   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:20.041523   73707 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:20.041557   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:20.157598   73707 logs.go:123] Gathering logs for kube-apiserver [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140] ...
	I0930 21:12:20.157630   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:20.213353   73707 logs.go:123] Gathering logs for etcd [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711] ...
	I0930 21:12:20.213384   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:20.254502   73707 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:20.254533   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:17.824584   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:17.824623   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:17.862613   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:17.862643   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:17.915954   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:17.915992   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:17.929824   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:17.929853   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:17.999697   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:20.500449   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:20.514042   73900 kubeadm.go:597] duration metric: took 4m1.91059878s to restartPrimaryControlPlane
	W0930 21:12:20.514119   73900 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0930 21:12:20.514158   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0930 21:12:21.675376   73900 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.161176988s)
	I0930 21:12:21.675465   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:12:21.689467   73900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:12:21.698504   73900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:12:21.708418   73900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:12:21.708437   73900 kubeadm.go:157] found existing configuration files:
	
	I0930 21:12:21.708483   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:12:21.716960   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:12:21.717019   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:12:21.727610   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:12:21.736212   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:12:21.736275   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:12:21.745512   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:12:21.754299   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:12:21.754366   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:12:21.763724   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:12:21.772521   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:12:21.772595   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
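
The config check above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and runs `rm -f` on any file where the grep fails, so that kubeadm regenerates it on the next init. A compact Go sketch of that cleanup loop, run locally rather than over SSH, might be (skipping a missing file here is equivalent to the log's `rm -f` no-op):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			// Missing file: nothing to clean up, matching the "No such file" case above.
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			// Stale config pointing somewhere else: remove it so kubeadm rewrites it.
			fmt.Printf("%q not found in %s, removing\n", endpoint, f)
			if err := os.Remove(f); err != nil {
				fmt.Fprintln(os.Stderr, "remove:", err)
			}
		}
	}
}
```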
	I0930 21:12:21.782980   73900 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 21:12:21.850463   73900 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0930 21:12:21.850558   73900 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 21:12:21.991521   73900 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 21:12:21.991706   73900 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 21:12:21.991849   73900 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'

	I0930 21:12:22.174876   73900 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 21:12:22.177037   73900 out.go:235]   - Generating certificates and keys ...
	I0930 21:12:22.177155   73900 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 21:12:22.177253   73900 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 21:12:22.177379   73900 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 21:12:22.178789   73900 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 21:12:22.178860   73900 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 21:12:22.178907   73900 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 21:12:22.178961   73900 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 21:12:22.179017   73900 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 21:12:22.179139   73900 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 21:12:22.179247   73900 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 21:12:22.179310   73900 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 21:12:22.179398   73900 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 21:12:22.253256   73900 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 21:12:22.661237   73900 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 21:12:22.947987   73900 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 21:12:23.170995   73900 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 21:12:23.184583   73900 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 21:12:23.185770   73900 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 21:12:23.185813   73900 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 21:12:23.334769   73900 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 21:12:21.069777   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:23.070328   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:20.696951   73707 logs.go:123] Gathering logs for container status ...
	I0930 21:12:20.696989   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:23.236734   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:12:23.241215   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 200:
	ok
	I0930 21:12:23.242629   73707 api_server.go:141] control plane version: v1.31.1
	I0930 21:12:23.242651   73707 api_server.go:131] duration metric: took 3.856847284s to wait for apiserver health ...
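
The healthz probe above issues GET https://192.168.50.2:8444/healthz and treats an HTTP 200 with body `ok` as healthy. A minimal Go sketch of such a poll is shown below; for brevity it disables TLS verification against the self-signed apiserver certificate, whereas the real check trusts the cluster CA, and the address and 2-minute budget are assumptions for illustration.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Illustrative probe only: a production client would pin the cluster CA
	// instead of setting InsecureSkipVerify.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.2:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver did not become healthy before the deadline")
}
```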
	I0930 21:12:23.242660   73707 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:12:23.242680   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:23.242724   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:23.279601   73707 cri.go:89] found id: "f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:23.279626   73707 cri.go:89] found id: ""
	I0930 21:12:23.279633   73707 logs.go:276] 1 containers: [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140]
	I0930 21:12:23.279692   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.283900   73707 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:23.283977   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:23.320360   73707 cri.go:89] found id: "7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:23.320397   73707 cri.go:89] found id: ""
	I0930 21:12:23.320410   73707 logs.go:276] 1 containers: [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711]
	I0930 21:12:23.320472   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.324745   73707 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:23.324825   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:23.368001   73707 cri.go:89] found id: "ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:23.368024   73707 cri.go:89] found id: ""
	I0930 21:12:23.368034   73707 logs.go:276] 1 containers: [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49]
	I0930 21:12:23.368095   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.372001   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:23.372077   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:23.408203   73707 cri.go:89] found id: "0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:23.408234   73707 cri.go:89] found id: ""
	I0930 21:12:23.408242   73707 logs.go:276] 1 containers: [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4]
	I0930 21:12:23.408299   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.412328   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:23.412397   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:23.462142   73707 cri.go:89] found id: "5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:23.462173   73707 cri.go:89] found id: ""
	I0930 21:12:23.462183   73707 logs.go:276] 1 containers: [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8]
	I0930 21:12:23.462247   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.466257   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:23.466336   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:23.509075   73707 cri.go:89] found id: "d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:23.509098   73707 cri.go:89] found id: ""
	I0930 21:12:23.509109   73707 logs.go:276] 1 containers: [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8]
	I0930 21:12:23.509169   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.513362   73707 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:23.513441   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:23.553711   73707 cri.go:89] found id: ""
	I0930 21:12:23.553738   73707 logs.go:276] 0 containers: []
	W0930 21:12:23.553746   73707 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:23.553752   73707 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:23.553797   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:23.599596   73707 cri.go:89] found id: "3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:23.599629   73707 cri.go:89] found id: "1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:23.599635   73707 cri.go:89] found id: ""
	I0930 21:12:23.599644   73707 logs.go:276] 2 containers: [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342]
	I0930 21:12:23.599699   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.603589   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.607827   73707 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:23.607855   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:23.621046   73707 logs.go:123] Gathering logs for etcd [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711] ...
	I0930 21:12:23.621069   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:23.664703   73707 logs.go:123] Gathering logs for storage-provisioner [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd] ...
	I0930 21:12:23.664735   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:23.700614   73707 logs.go:123] Gathering logs for kube-scheduler [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4] ...
	I0930 21:12:23.700644   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:23.738113   73707 logs.go:123] Gathering logs for kube-proxy [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8] ...
	I0930 21:12:23.738143   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:23.775706   73707 logs.go:123] Gathering logs for kube-controller-manager [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8] ...
	I0930 21:12:23.775733   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:23.840419   73707 logs.go:123] Gathering logs for storage-provisioner [1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342] ...
	I0930 21:12:23.840454   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:23.876827   73707 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:23.876860   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:23.943636   73707 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:23.943675   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:24.052729   73707 logs.go:123] Gathering logs for kube-apiserver [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140] ...
	I0930 21:12:24.052763   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:24.106526   73707 logs.go:123] Gathering logs for coredns [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49] ...
	I0930 21:12:24.106556   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:24.146914   73707 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:24.146941   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:24.527753   73707 logs.go:123] Gathering logs for container status ...
	I0930 21:12:24.527804   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:27.077689   73707 system_pods.go:59] 8 kube-system pods found
	I0930 21:12:27.077721   73707 system_pods.go:61] "coredns-7c65d6cfc9-hdjjq" [5672cd58-4d3f-409e-b279-f4027fe09aea] Running
	I0930 21:12:27.077726   73707 system_pods.go:61] "etcd-default-k8s-diff-port-291511" [228b61a2-a110-4029-96e5-950e44f5290f] Running
	I0930 21:12:27.077731   73707 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-291511" [a6991ee1-6c61-49b5-adb5-fb6175386bfe] Running
	I0930 21:12:27.077739   73707 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-291511" [4ba3f2a2-ac38-4483-bbd0-f21d934d97d1] Running
	I0930 21:12:27.077744   73707 system_pods.go:61] "kube-proxy-kwp22" [87e5295f-3aaa-4222-a61a-942354f79f9b] Running
	I0930 21:12:27.077749   73707 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-291511" [b03fc09c-ddee-4593-9be5-8117892932f5] Running
	I0930 21:12:27.077759   73707 system_pods.go:61] "metrics-server-6867b74b74-txb2j" [6f0ec8d2-5528-4f70-807c-42cbabae23bb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:12:27.077766   73707 system_pods.go:61] "storage-provisioner" [32053345-1ff9-45b1-aa70-e746926b305d] Running
	I0930 21:12:27.077774   73707 system_pods.go:74] duration metric: took 3.835107861s to wait for pod list to return data ...
	I0930 21:12:27.077783   73707 default_sa.go:34] waiting for default service account to be created ...
	I0930 21:12:27.082269   73707 default_sa.go:45] found service account: "default"
	I0930 21:12:27.082292   73707 default_sa.go:55] duration metric: took 4.502111ms for default service account to be created ...
	I0930 21:12:27.082299   73707 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 21:12:27.086738   73707 system_pods.go:86] 8 kube-system pods found
	I0930 21:12:27.086764   73707 system_pods.go:89] "coredns-7c65d6cfc9-hdjjq" [5672cd58-4d3f-409e-b279-f4027fe09aea] Running
	I0930 21:12:27.086770   73707 system_pods.go:89] "etcd-default-k8s-diff-port-291511" [228b61a2-a110-4029-96e5-950e44f5290f] Running
	I0930 21:12:27.086775   73707 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-291511" [a6991ee1-6c61-49b5-adb5-fb6175386bfe] Running
	I0930 21:12:27.086781   73707 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-291511" [4ba3f2a2-ac38-4483-bbd0-f21d934d97d1] Running
	I0930 21:12:27.086784   73707 system_pods.go:89] "kube-proxy-kwp22" [87e5295f-3aaa-4222-a61a-942354f79f9b] Running
	I0930 21:12:27.086788   73707 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-291511" [b03fc09c-ddee-4593-9be5-8117892932f5] Running
	I0930 21:12:27.086796   73707 system_pods.go:89] "metrics-server-6867b74b74-txb2j" [6f0ec8d2-5528-4f70-807c-42cbabae23bb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:12:27.086803   73707 system_pods.go:89] "storage-provisioner" [32053345-1ff9-45b1-aa70-e746926b305d] Running
	I0930 21:12:27.086811   73707 system_pods.go:126] duration metric: took 4.506701ms to wait for k8s-apps to be running ...
	I0930 21:12:27.086820   73707 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 21:12:27.086868   73707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:12:27.102286   73707 system_svc.go:56] duration metric: took 15.455734ms WaitForService to wait for kubelet
	I0930 21:12:27.102325   73707 kubeadm.go:582] duration metric: took 4m21.818162682s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:12:27.102346   73707 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:12:27.105332   73707 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:12:27.105354   73707 node_conditions.go:123] node cpu capacity is 2
	I0930 21:12:27.105364   73707 node_conditions.go:105] duration metric: took 3.013328ms to run NodePressure ...
	I0930 21:12:27.105375   73707 start.go:241] waiting for startup goroutines ...
	I0930 21:12:27.105382   73707 start.go:246] waiting for cluster config update ...
	I0930 21:12:27.105393   73707 start.go:255] writing updated cluster config ...
	I0930 21:12:27.105669   73707 ssh_runner.go:195] Run: rm -f paused
	I0930 21:12:27.156804   73707 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 21:12:27.158887   73707 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-291511" cluster and "default" namespace by default
	I0930 21:12:23.336604   73900 out.go:235]   - Booting up control plane ...
	I0930 21:12:23.336747   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 21:12:23.345737   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 21:12:23.346784   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 21:12:23.347559   73900 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 21:12:23.351009   73900 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0930 21:12:25.568654   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:27.569042   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:29.570978   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:32.069065   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:34.069347   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:36.568228   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:38.569351   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:40.569552   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:43.069456   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:45.569254   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:47.569647   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:49.569997   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:52.069284   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:54.069870   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:54.563572   73256 pod_ready.go:82] duration metric: took 4m0.000782781s for pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace to be "Ready" ...
	E0930 21:12:54.563605   73256 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0930 21:12:54.563620   73256 pod_ready.go:39] duration metric: took 4m9.49309261s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:12:54.563643   73256 kubeadm.go:597] duration metric: took 4m18.399318281s to restartPrimaryControlPlane
	W0930 21:12:54.563698   73256 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0930 21:12:54.563721   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0930 21:13:03.351822   73900 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0930 21:13:03.352632   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:03.352833   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:13:08.353230   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:08.353429   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
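
The kubelet-check failures above come from kubeadm repeatedly calling GET http://localhost:10248/healthz; "connection refused" simply means the kubelet is not listening yet. A small Go equivalent of that retry loop, with an assumed 4-minute budget matching the "can take up to 4m0s" message, could be:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
	"syscall"
	"time"
)

func main() {
	// kubeadm's kubelet-check keeps hitting this endpoint until the kubelet answers.
	const url = "http://localhost:10248/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
			fmt.Println("kubelet responded with", resp.Status)
		} else if errors.Is(err, syscall.ECONNREFUSED) {
			// Same failure mode as the log above: nothing is listening on 10248 yet.
			fmt.Println("kubelet not up yet:", err)
		} else {
			fmt.Println("healthz probe failed:", err)
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("kubelet never became healthy")
}
```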
	I0930 21:13:20.634441   73256 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.070691776s)
	I0930 21:13:20.634529   73256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:13:20.650312   73256 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:13:20.661782   73256 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:13:20.671436   73256 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:13:20.671463   73256 kubeadm.go:157] found existing configuration files:
	
	I0930 21:13:20.671504   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:13:20.681860   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:13:20.681934   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:13:20.692529   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:13:20.701507   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:13:20.701585   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:13:20.711211   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:13:20.721856   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:13:20.721928   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:13:20.733194   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:13:20.743887   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:13:20.743955   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:13:20.753546   73256 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 21:13:20.799739   73256 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 21:13:20.799812   73256 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 21:13:20.906464   73256 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 21:13:20.906569   73256 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 21:13:20.906647   73256 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 21:13:20.919451   73256 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 21:13:20.921440   73256 out.go:235]   - Generating certificates and keys ...
	I0930 21:13:20.921550   73256 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 21:13:20.921645   73256 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 21:13:20.921758   73256 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 21:13:20.921845   73256 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 21:13:20.921945   73256 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 21:13:20.922021   73256 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 21:13:20.922117   73256 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 21:13:20.922190   73256 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 21:13:20.922262   73256 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 21:13:20.922336   73256 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 21:13:20.922370   73256 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 21:13:20.922459   73256 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 21:13:21.079731   73256 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 21:13:21.214199   73256 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 21:13:21.344405   73256 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 21:13:21.605006   73256 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 21:13:21.718432   73256 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 21:13:21.718967   73256 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 21:13:21.723434   73256 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 21:13:18.354150   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:18.354468   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:13:21.725304   73256 out.go:235]   - Booting up control plane ...
	I0930 21:13:21.725435   73256 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 21:13:21.725526   73256 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 21:13:21.725637   73256 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 21:13:21.743582   73256 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 21:13:21.749533   73256 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 21:13:21.749605   73256 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 21:13:21.873716   73256 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 21:13:21.873867   73256 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 21:13:22.375977   73256 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.402537ms
	I0930 21:13:22.376098   73256 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 21:13:27.379510   73256 kubeadm.go:310] [api-check] The API server is healthy after 5.001265494s
	I0930 21:13:27.392047   73256 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 21:13:27.409550   73256 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 21:13:27.447693   73256 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 21:13:27.447896   73256 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-256103 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 21:13:27.462338   73256 kubeadm.go:310] [bootstrap-token] Using token: k5ffj3.6sqmy7prwrlhrg7s
	I0930 21:13:27.463967   73256 out.go:235]   - Configuring RBAC rules ...
	I0930 21:13:27.464076   73256 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 21:13:27.472107   73256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 21:13:27.481172   73256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 21:13:27.485288   73256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 21:13:27.492469   73256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 21:13:27.496822   73256 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 21:13:27.789372   73256 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 21:13:28.210679   73256 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 21:13:28.784869   73256 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 21:13:28.785859   73256 kubeadm.go:310] 
	I0930 21:13:28.785954   73256 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 21:13:28.785967   73256 kubeadm.go:310] 
	I0930 21:13:28.786045   73256 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 21:13:28.786077   73256 kubeadm.go:310] 
	I0930 21:13:28.786121   73256 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 21:13:28.786219   73256 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 21:13:28.786286   73256 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 21:13:28.786304   73256 kubeadm.go:310] 
	I0930 21:13:28.786395   73256 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 21:13:28.786405   73256 kubeadm.go:310] 
	I0930 21:13:28.786464   73256 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 21:13:28.786474   73256 kubeadm.go:310] 
	I0930 21:13:28.786546   73256 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 21:13:28.786658   73256 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 21:13:28.786754   73256 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 21:13:28.786763   73256 kubeadm.go:310] 
	I0930 21:13:28.786870   73256 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 21:13:28.786991   73256 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 21:13:28.787000   73256 kubeadm.go:310] 
	I0930 21:13:28.787122   73256 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k5ffj3.6sqmy7prwrlhrg7s \
	I0930 21:13:28.787240   73256 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a \
	I0930 21:13:28.787274   73256 kubeadm.go:310] 	--control-plane 
	I0930 21:13:28.787290   73256 kubeadm.go:310] 
	I0930 21:13:28.787415   73256 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 21:13:28.787425   73256 kubeadm.go:310] 
	I0930 21:13:28.787547   73256 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k5ffj3.6sqmy7prwrlhrg7s \
	I0930 21:13:28.787713   73256 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a 
	I0930 21:13:28.788805   73256 kubeadm.go:310] W0930 21:13:20.776526    2530 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 21:13:28.789058   73256 kubeadm.go:310] W0930 21:13:20.777323    2530 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 21:13:28.789158   73256 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
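The two W-prefixed lines are kubeadm's own warning that the config it was given still uses the deprecated kubeadm.k8s.io/v1beta3 API. The remedy it suggests is the standard migrate subcommand; a sketch against the config path used in this run (the output filename here is an arbitrary choice, not something from the log):

    # rewrite the deprecated v1beta3 ClusterConfiguration/InitConfiguration into the newer API version
    sudo kubeadm config migrate \
        --old-config /var/tmp/minikube/kubeadm.yaml \
        --new-config /var/tmp/minikube/kubeadm-migrated.yaml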
	I0930 21:13:28.789178   73256 cni.go:84] Creating CNI manager for ""
	I0930 21:13:28.789187   73256 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:13:28.791049   73256 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 21:13:28.792381   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:13:28.802872   73256 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
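"scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)" means the bridge CNI config is written directly onto the node; the actual 496-byte payload is not reproduced in the log. Purely for orientation, a typical bridge conflist of this shape looks roughly like the following (field values are illustrative, not the file this run wrote):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }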
	I0930 21:13:28.819952   73256 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 21:13:28.820054   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:28.820070   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-256103 minikube.k8s.io/updated_at=2024_09_30T21_13_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022 minikube.k8s.io/name=embed-certs-256103 minikube.k8s.io/primary=true
	I0930 21:13:28.859770   73256 ops.go:34] apiserver oom_adj: -16
	I0930 21:13:29.026274   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:29.526992   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:30.026700   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:30.526962   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:31.027165   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:31.526632   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:32.027019   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:32.526522   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:33.026739   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:33.116028   73256 kubeadm.go:1113] duration metric: took 4.296036786s to wait for elevateKubeSystemPrivileges
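The repeated "kubectl get sa default" calls at roughly half-second intervals are a readiness poll: the step the log labels elevateKubeSystemPrivileges keeps querying until the controller manager has created the default service account (about 4.3s here). A rough shell equivalent, using the binary and kubeconfig paths from the log:

    # poll until the "default" service account exists
    KUBECTL=/var/lib/minikube/binaries/v1.31.1/kubectl
    until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5
    done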
	I0930 21:13:33.116067   73256 kubeadm.go:394] duration metric: took 4m57.005787187s to StartCluster
	I0930 21:13:33.116088   73256 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:13:33.116175   73256 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:13:33.117855   73256 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:13:33.118142   73256 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 21:13:33.118263   73256 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 21:13:33.118420   73256 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-256103"
	I0930 21:13:33.118373   73256 config.go:182] Loaded profile config "embed-certs-256103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:13:33.118446   73256 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-256103"
	I0930 21:13:33.118442   73256 addons.go:69] Setting default-storageclass=true in profile "embed-certs-256103"
	W0930 21:13:33.118453   73256 addons.go:243] addon storage-provisioner should already be in state true
	I0930 21:13:33.118464   73256 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-256103"
	I0930 21:13:33.118482   73256 host.go:66] Checking if "embed-certs-256103" exists ...
	I0930 21:13:33.118515   73256 addons.go:69] Setting metrics-server=true in profile "embed-certs-256103"
	I0930 21:13:33.118554   73256 addons.go:234] Setting addon metrics-server=true in "embed-certs-256103"
	W0930 21:13:33.118564   73256 addons.go:243] addon metrics-server should already be in state true
	I0930 21:13:33.118594   73256 host.go:66] Checking if "embed-certs-256103" exists ...
	I0930 21:13:33.118807   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.118840   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.118880   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.118926   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.118941   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.118965   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.120042   73256 out.go:177] * Verifying Kubernetes components...
	I0930 21:13:33.121706   73256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:13:33.136554   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36203
	I0930 21:13:33.137096   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.137304   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44465
	I0930 21:13:33.137664   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.137696   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.137789   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.138013   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.138176   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:13:33.138317   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.138336   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.139163   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37389
	I0930 21:13:33.139176   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.139733   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.139903   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.139955   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.140284   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.140311   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.140780   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.141336   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.141375   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.141814   73256 addons.go:234] Setting addon default-storageclass=true in "embed-certs-256103"
	W0930 21:13:33.141832   73256 addons.go:243] addon default-storageclass should already be in state true
	I0930 21:13:33.141857   73256 host.go:66] Checking if "embed-certs-256103" exists ...
	I0930 21:13:33.142143   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.142177   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.161937   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I0930 21:13:33.162096   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33657
	I0930 21:13:33.162249   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42531
	I0930 21:13:33.162491   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.162536   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.162837   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.163017   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.163028   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.163030   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.163045   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.163254   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.163265   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.163362   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.163417   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.163864   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.163899   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.164101   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.164154   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:13:33.164356   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:13:33.166460   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:13:33.166673   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:13:33.168464   73256 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:13:33.168631   73256 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0930 21:13:33.169822   73256 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:13:33.169840   73256 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 21:13:33.169857   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:13:33.169937   73256 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 21:13:33.169947   73256 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 21:13:33.169963   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:13:33.174613   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.174653   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.175236   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:13:33.175265   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.175372   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:13:33.175405   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.175667   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:13:33.176048   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:13:33.176051   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:13:33.176299   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:13:33.176299   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:13:33.176476   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:13:33.176684   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:13:33.176685   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:13:33.180520   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43015
	I0930 21:13:33.180968   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.181564   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.181588   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.181938   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.182136   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:13:33.183803   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:13:33.184001   73256 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 21:13:33.184017   73256 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 21:13:33.184035   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:13:33.186565   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.186964   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:13:33.186996   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.187311   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:13:33.187481   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:13:33.187797   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:13:33.187937   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:13:33.337289   73256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:13:33.360186   73256 node_ready.go:35] waiting up to 6m0s for node "embed-certs-256103" to be "Ready" ...
	I0930 21:13:33.372799   73256 node_ready.go:49] node "embed-certs-256103" has status "Ready":"True"
	I0930 21:13:33.372828   73256 node_ready.go:38] duration metric: took 12.601736ms for node "embed-certs-256103" to be "Ready" ...
	I0930 21:13:33.372837   73256 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:13:33.379694   73256 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:33.462144   73256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:13:33.500072   73256 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 21:13:33.500102   73256 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0930 21:13:33.524789   73256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 21:13:33.548931   73256 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 21:13:33.548955   73256 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 21:13:33.604655   73256 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:13:33.604682   73256 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 21:13:33.648687   73256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
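All three addons follow the same pattern: each manifest is copied under /etc/kubernetes/addons/ over SSH and then applied with the kubectl binary cached on the node, against the node-local kubeconfig. Condensed from the apply commands in the log:

    KUBECTL=/var/lib/minikube/binaries/v1.31.1/kubectl
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig "$KUBECTL" apply -f /etc/kubernetes/addons/storage-provisioner.yaml
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig "$KUBECTL" apply -f /etc/kubernetes/addons/storageclass.yaml
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig "$KUBECTL" apply \
        -f /etc/kubernetes/addons/metrics-apiservice.yaml \
        -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
        -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
        -f /etc/kubernetes/addons/metrics-server-service.yaml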
	I0930 21:13:34.533493   73256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.008666954s)
	I0930 21:13:34.533555   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.533566   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.533856   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.533870   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.533884   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.533892   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.533900   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.534108   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.534126   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.534149   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.535651   73256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.073475648s)
	I0930 21:13:34.535695   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.535706   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.535926   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.536001   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.536014   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.536030   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.535981   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.537450   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.537470   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.537480   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.564363   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.564394   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.564715   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.564739   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.968266   73256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.319532564s)
	I0930 21:13:34.968330   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.968350   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.968642   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.968665   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.968674   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.968673   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.968681   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.968944   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.968969   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.968973   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.968979   73256 addons.go:475] Verifying addon metrics-server=true in "embed-certs-256103"
	I0930 21:13:34.970656   73256 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0930 21:13:34.971966   73256 addons.go:510] duration metric: took 1.853709741s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0930 21:13:35.387687   73256 pod_ready.go:103] pod "etcd-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:13:37.388374   73256 pod_ready.go:103] pod "etcd-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:13:39.886425   73256 pod_ready.go:103] pod "etcd-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:13:41.885713   73256 pod_ready.go:93] pod "etcd-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.885737   73256 pod_ready.go:82] duration metric: took 8.506004979s for pod "etcd-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.885746   73256 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.891032   73256 pod_ready.go:93] pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.891052   73256 pod_ready.go:82] duration metric: took 5.300379ms for pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.891061   73256 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.895332   73256 pod_ready.go:93] pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.895349   73256 pod_ready.go:82] duration metric: took 4.282199ms for pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.895357   73256 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-glbsg" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.899518   73256 pod_ready.go:93] pod "kube-proxy-glbsg" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.899556   73256 pod_ready.go:82] duration metric: took 4.191815ms for pod "kube-proxy-glbsg" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.899567   73256 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.904184   73256 pod_ready.go:93] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.904203   73256 pod_ready.go:82] duration metric: took 4.628533ms for pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.904209   73256 pod_ready.go:39] duration metric: took 8.531361398s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:13:41.904221   73256 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:13:41.904262   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:13:41.919570   73256 api_server.go:72] duration metric: took 8.801387692s to wait for apiserver process to appear ...
	I0930 21:13:41.919591   73256 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:13:41.919607   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:13:41.923810   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 200:
	ok
	I0930 21:13:41.924633   73256 api_server.go:141] control plane version: v1.31.1
	I0930 21:13:41.924651   73256 api_server.go:131] duration metric: took 5.054857ms to wait for apiserver health ...
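The healthz wait is just an HTTPS GET against the apiserver on the node IP; a 200 response with the body "ok" (as logged above) ends it. From outside the process this is roughly the following, with -k used only because this sketch does not load the cluster CA:

    # expect HTTP 200 and the body "ok"
    curl -ks https://192.168.39.90:8443/healthz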
	I0930 21:13:41.924659   73256 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:13:42.086431   73256 system_pods.go:59] 9 kube-system pods found
	I0930 21:13:42.086468   73256 system_pods.go:61] "coredns-7c65d6cfc9-gt5tt" [165faaf0-866c-4097-9bdb-ed58fe8d7395] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:13:42.086480   73256 system_pods.go:61] "coredns-7c65d6cfc9-sgsbn" [c97fdb50-c6a0-4ef8-8c01-ea45ed18b72a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:13:42.086488   73256 system_pods.go:61] "etcd-embed-certs-256103" [6aac0706-7dbd-4655-b261-68877299d81a] Running
	I0930 21:13:42.086494   73256 system_pods.go:61] "kube-apiserver-embed-certs-256103" [6c8e3157-ec97-4a85-8947-ca7541c19b1c] Running
	I0930 21:13:42.086500   73256 system_pods.go:61] "kube-controller-manager-embed-certs-256103" [1e3f76d1-d343-4127-aad9-8a5a8e589a43] Running
	I0930 21:13:42.086505   73256 system_pods.go:61] "kube-proxy-glbsg" [f68e378f-ce0f-4603-bd8e-93334f04f7a7] Running
	I0930 21:13:42.086510   73256 system_pods.go:61] "kube-scheduler-embed-certs-256103" [29f55c6f-9603-4cd2-a798-0ff2362b7607] Running
	I0930 21:13:42.086518   73256 system_pods.go:61] "metrics-server-6867b74b74-5mhkh" [470424ec-bb66-4d62-904d-0d4ad93fa5bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:13:42.086525   73256 system_pods.go:61] "storage-provisioner" [a07a5a12-7420-4b57-b79d-982f4bb48232] Running
	I0930 21:13:42.086538   73256 system_pods.go:74] duration metric: took 161.870121ms to wait for pod list to return data ...
	I0930 21:13:42.086559   73256 default_sa.go:34] waiting for default service account to be created ...
	I0930 21:13:42.284282   73256 default_sa.go:45] found service account: "default"
	I0930 21:13:42.284307   73256 default_sa.go:55] duration metric: took 197.73827ms for default service account to be created ...
	I0930 21:13:42.284316   73256 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 21:13:42.486445   73256 system_pods.go:86] 9 kube-system pods found
	I0930 21:13:42.486478   73256 system_pods.go:89] "coredns-7c65d6cfc9-gt5tt" [165faaf0-866c-4097-9bdb-ed58fe8d7395] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:13:42.486489   73256 system_pods.go:89] "coredns-7c65d6cfc9-sgsbn" [c97fdb50-c6a0-4ef8-8c01-ea45ed18b72a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:13:42.486497   73256 system_pods.go:89] "etcd-embed-certs-256103" [6aac0706-7dbd-4655-b261-68877299d81a] Running
	I0930 21:13:42.486503   73256 system_pods.go:89] "kube-apiserver-embed-certs-256103" [6c8e3157-ec97-4a85-8947-ca7541c19b1c] Running
	I0930 21:13:42.486509   73256 system_pods.go:89] "kube-controller-manager-embed-certs-256103" [1e3f76d1-d343-4127-aad9-8a5a8e589a43] Running
	I0930 21:13:42.486513   73256 system_pods.go:89] "kube-proxy-glbsg" [f68e378f-ce0f-4603-bd8e-93334f04f7a7] Running
	I0930 21:13:42.486518   73256 system_pods.go:89] "kube-scheduler-embed-certs-256103" [29f55c6f-9603-4cd2-a798-0ff2362b7607] Running
	I0930 21:13:42.486526   73256 system_pods.go:89] "metrics-server-6867b74b74-5mhkh" [470424ec-bb66-4d62-904d-0d4ad93fa5bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:13:42.486533   73256 system_pods.go:89] "storage-provisioner" [a07a5a12-7420-4b57-b79d-982f4bb48232] Running
	I0930 21:13:42.486542   73256 system_pods.go:126] duration metric: took 202.220435ms to wait for k8s-apps to be running ...
	I0930 21:13:42.486552   73256 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 21:13:42.486601   73256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:13:42.501286   73256 system_svc.go:56] duration metric: took 14.699273ms WaitForService to wait for kubelet
	I0930 21:13:42.501315   73256 kubeadm.go:582] duration metric: took 9.38313627s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:13:42.501332   73256 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:13:42.685282   73256 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:13:42.685314   73256 node_conditions.go:123] node cpu capacity is 2
	I0930 21:13:42.685326   73256 node_conditions.go:105] duration metric: took 183.989963ms to run NodePressure ...
	I0930 21:13:42.685346   73256 start.go:241] waiting for startup goroutines ...
	I0930 21:13:42.685356   73256 start.go:246] waiting for cluster config update ...
	I0930 21:13:42.685371   73256 start.go:255] writing updated cluster config ...
	I0930 21:13:42.685664   73256 ssh_runner.go:195] Run: rm -f paused
	I0930 21:13:42.734778   73256 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 21:13:42.736658   73256 out.go:177] * Done! kubectl is now configured to use "embed-certs-256103" cluster and "default" namespace by default
	I0930 21:13:38.355123   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:38.355330   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:14:18.357098   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:14:18.357396   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:14:18.357419   73900 kubeadm.go:310] 
	I0930 21:14:18.357473   73900 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0930 21:14:18.357541   73900 kubeadm.go:310] 		timed out waiting for the condition
	I0930 21:14:18.357554   73900 kubeadm.go:310] 
	I0930 21:14:18.357609   73900 kubeadm.go:310] 	This error is likely caused by:
	I0930 21:14:18.357659   73900 kubeadm.go:310] 		- The kubelet is not running
	I0930 21:14:18.357801   73900 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0930 21:14:18.357817   73900 kubeadm.go:310] 
	I0930 21:14:18.357964   73900 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0930 21:14:18.357996   73900 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0930 21:14:18.358028   73900 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0930 21:14:18.358039   73900 kubeadm.go:310] 
	I0930 21:14:18.358174   73900 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0930 21:14:18.358318   73900 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0930 21:14:18.358331   73900 kubeadm.go:310] 
	I0930 21:14:18.358510   73900 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0930 21:14:18.358646   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0930 21:14:18.358764   73900 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0930 21:14:18.358866   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0930 21:14:18.358882   73900 kubeadm.go:310] 
	I0930 21:14:18.359454   73900 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 21:14:18.359595   73900 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0930 21:14:18.359681   73900 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
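For this second run (pid 73900, the v1.20.0 cluster), init times out on the kubelet health check. The next steps kubeadm prints above are the usual ones on a cri-o node; gathered here verbatim from the log for convenience (CONTAINERID is kubeadm's own placeholder):

    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    # per the warning above, the kubelet unit is not enabled
    sudo systemctl enable kubelet.service
    # inspect control-plane containers via the cri-o socket
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID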
	W0930 21:14:18.359797   73900 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0930 21:14:18.359841   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
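After the failed init, the recovery path visible in the log is a clean retry: kubeadm reset against the cri-o socket (the command on the line above), the stale-config checks again, then the same kubeadm init invocation. The reset step, as run:

    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
        kubeadm reset --cri-socket /var/run/crio/crio.sock --force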
	I0930 21:14:18.820244   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:14:18.834938   73900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:14:18.844779   73900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:14:18.844803   73900 kubeadm.go:157] found existing configuration files:
	
	I0930 21:14:18.844856   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:14:18.853738   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:14:18.853811   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:14:18.863366   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:14:18.872108   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:14:18.872164   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:14:18.881818   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:14:18.890916   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:14:18.890969   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:14:18.900075   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:14:18.908449   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:14:18.908520   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:14:18.917163   73900 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 21:14:18.983181   73900 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0930 21:14:18.983233   73900 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 21:14:19.121356   73900 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 21:14:19.121545   73900 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 21:14:19.121674   73900 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0930 21:14:19.306639   73900 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 21:14:19.309593   73900 out.go:235]   - Generating certificates and keys ...
	I0930 21:14:19.309683   73900 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 21:14:19.309748   73900 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 21:14:19.309870   73900 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 21:14:19.309957   73900 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 21:14:19.310040   73900 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 21:14:19.310119   73900 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 21:14:19.310209   73900 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 21:14:19.310292   73900 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 21:14:19.310404   73900 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 21:14:19.310511   73900 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 21:14:19.310567   73900 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 21:14:19.310654   73900 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 21:14:19.453872   73900 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 21:14:19.621232   73900 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 21:14:19.797694   73900 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 21:14:19.886897   73900 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 21:14:19.909016   73900 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 21:14:19.910536   73900 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 21:14:19.910617   73900 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 21:14:20.052878   73900 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 21:14:20.054739   73900 out.go:235]   - Booting up control plane ...
	I0930 21:14:20.054881   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 21:14:20.068419   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 21:14:20.068512   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 21:14:20.068697   73900 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 21:14:20.072015   73900 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0930 21:15:00.073988   73900 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0930 21:15:00.074795   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:00.075068   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:15:05.075810   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:05.076061   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:15:15.076695   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:15.076928   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:15:35.077652   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:35.077862   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:16:15.076816   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:16:15.077063   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:16:15.077082   73900 kubeadm.go:310] 
	I0930 21:16:15.077136   73900 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0930 21:16:15.077188   73900 kubeadm.go:310] 		timed out waiting for the condition
	I0930 21:16:15.077198   73900 kubeadm.go:310] 
	I0930 21:16:15.077246   73900 kubeadm.go:310] 	This error is likely caused by:
	I0930 21:16:15.077298   73900 kubeadm.go:310] 		- The kubelet is not running
	I0930 21:16:15.077425   73900 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0930 21:16:15.077442   73900 kubeadm.go:310] 
	I0930 21:16:15.077605   73900 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0930 21:16:15.077651   73900 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0930 21:16:15.077710   73900 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0930 21:16:15.077718   73900 kubeadm.go:310] 
	I0930 21:16:15.077851   73900 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0930 21:16:15.077997   73900 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0930 21:16:15.078013   73900 kubeadm.go:310] 
	I0930 21:16:15.078143   73900 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0930 21:16:15.078229   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0930 21:16:15.078309   73900 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0930 21:16:15.078419   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0930 21:16:15.078431   73900 kubeadm.go:310] 
	I0930 21:16:15.079235   73900 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 21:16:15.079365   73900 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0930 21:16:15.079442   73900 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0930 21:16:15.079572   73900 kubeadm.go:394] duration metric: took 7m56.529269567s to StartCluster
	I0930 21:16:15.079639   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:16:15.079713   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:16:15.122057   73900 cri.go:89] found id: ""
	I0930 21:16:15.122086   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.122098   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:16:15.122105   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:16:15.122166   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:16:15.156244   73900 cri.go:89] found id: ""
	I0930 21:16:15.156278   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.156289   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:16:15.156297   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:16:15.156357   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:16:15.188952   73900 cri.go:89] found id: ""
	I0930 21:16:15.188977   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.188989   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:16:15.188996   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:16:15.189058   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:16:15.219400   73900 cri.go:89] found id: ""
	I0930 21:16:15.219427   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.219435   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:16:15.219441   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:16:15.219501   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:16:15.252049   73900 cri.go:89] found id: ""
	I0930 21:16:15.252078   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.252086   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:16:15.252093   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:16:15.252150   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:16:15.286560   73900 cri.go:89] found id: ""
	I0930 21:16:15.286594   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.286605   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:16:15.286614   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:16:15.286679   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:16:15.319140   73900 cri.go:89] found id: ""
	I0930 21:16:15.319178   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.319187   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:16:15.319192   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:16:15.319245   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:16:15.351299   73900 cri.go:89] found id: ""
	I0930 21:16:15.351322   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.351330   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:16:15.351339   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:16:15.351350   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:16:15.402837   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:16:15.402882   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:16:15.417111   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:16:15.417140   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:16:15.492593   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:16:15.492614   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:16:15.492627   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:16:15.621646   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:16:15.621681   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0930 21:16:15.660480   73900 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0930 21:16:15.660528   73900 out.go:270] * 
	W0930 21:16:15.660580   73900 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0930 21:16:15.660595   73900 out.go:270] * 
	W0930 21:16:15.661387   73900 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 21:16:15.665510   73900 out.go:201] 
	W0930 21:16:15.667332   73900 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0930 21:16:15.667373   73900 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0930 21:16:15.667390   73900 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0930 21:16:15.668812   73900 out.go:201] 
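	(Editor's note, not part of the captured log: the kubelet on this node never answered its health probe on :10248, so kubeadm timed out in wait-control-plane. A minimal triage sketch, using only the commands the log itself recommends; `<profile>` is a placeholder for the minikube profile under test and `CONTAINERID` must be taken from the `crictl ps -a` output:)

	  # open a shell on the node for the failing profile
	  minikube ssh -p <profile>

	  # is the kubelet service running, and why did it exit?
	  sudo systemctl status kubelet
	  sudo journalctl -xeu kubelet

	  # list control-plane containers known to CRI-O and inspect any that crashed
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	  # the suggestion above points at a cgroup-driver mismatch; the log's proposed retry is
	  minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd

	(If `journalctl -xeu kubelet` shows a cgroup-driver error, the last command is the fix the log links to in kubernetes/minikube#4172; otherwise the crictl logs of the exited control-plane container are the next lead.)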
	
	
	==> CRI-O <==
	Sep 30 21:21:20 no-preload-997816 crio[707]: time="2024-09-30 21:21:20.935424268Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731280935402931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f871bbdf-b1d0-4928-8732-b4fcbfd496bc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:21:20 no-preload-997816 crio[707]: time="2024-09-30 21:21:20.935863816Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b2d7e97-8809-4a1e-94cb-37c441c6c638 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:21:20 no-preload-997816 crio[707]: time="2024-09-30 21:21:20.935930131Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b2d7e97-8809-4a1e-94cb-37c441c6c638 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:21:20 no-preload-997816 crio[707]: time="2024-09-30 21:21:20.936176078Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55,PodSandboxId:6aa9b9bcc891891defe82eace573c379d96a428b175db1a928e9b815bb1b0773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727730504093374150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01617edf-b831-48d3-9002-279b64f6389c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e4511e8755902041ec728c39a53350645fb5e31ed150b5935b3ee003b41f711,PodSandboxId:8147142912a2d88a8228bd307f69e3a6540c21d00f4f9618062853f36290d473,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727730484441413665,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f0eedc3-2026-4ba3-ac8e-784be7e51dbf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7,PodSandboxId:33a1a02b5819f89b582185170a53eab5bde7dfdf3a0cb0ea354e7b1a74d9111f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730480935801356,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jg8ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46ba2867-485a-4b67-af4b-4de2c607d172,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e,PodSandboxId:6aa9b9bcc891891defe82eace573c379d96a428b175db1a928e9b815bb1b0773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727730473266262316,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
1617edf-b831-48d3-9002-279b64f6389c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f,PodSandboxId:54da58cb4856ec108353c10a5a6f612ee192711d6459e265c06fab8a90da9dba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727730473268426087,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-klcv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 133bcd7f-667d-4969-b063-d33e2c8eed
0f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122,PodSandboxId:8fdadebc4632316c6851d6142b4a2951f4e762607a03802501113b27fb76d466,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727730468494614060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 909537799d377a7b5a56a4a5d684c97d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf,PodSandboxId:a96ee404058b8e9e5bb32c16fe21830aad9d481ffddd18dd8e660f7b77794911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727730468514319531,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4bbd39434baedeb326d3b6c5f0f
b7a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c,PodSandboxId:2d56c1daebf60b9201cbc515f8e1565fbdfc630ee552a17c531c57a3b85ad1d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727730468486940473,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf702a6b765256da0a8cd88a48f902d,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c,PodSandboxId:cee67c278b3f721f7d21238705e692223dd134b5ab39c248fc1ee94b239f3c89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727730468447781115,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9710f8be49235e7e38d661128fa5cb3a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b2d7e97-8809-4a1e-94cb-37c441c6c638 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:21:20 no-preload-997816 crio[707]: time="2024-09-30 21:21:20.973863083Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c76824f4-82ca-4e66-810f-22b6beb74c42 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:21:20 no-preload-997816 crio[707]: time="2024-09-30 21:21:20.973946823Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c76824f4-82ca-4e66-810f-22b6beb74c42 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:21:20 no-preload-997816 crio[707]: time="2024-09-30 21:21:20.974827870Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d23f2029-529a-4c8d-9091-2d8656f54270 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:21:20 no-preload-997816 crio[707]: time="2024-09-30 21:21:20.975282118Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731280975261672,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d23f2029-529a-4c8d-9091-2d8656f54270 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:21:20 no-preload-997816 crio[707]: time="2024-09-30 21:21:20.975714278Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b987d9d-dc75-48e3-97f9-f302277f04bf name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:21:20 no-preload-997816 crio[707]: time="2024-09-30 21:21:20.975780275Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b987d9d-dc75-48e3-97f9-f302277f04bf name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:21:20 no-preload-997816 crio[707]: time="2024-09-30 21:21:20.975979029Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55,PodSandboxId:6aa9b9bcc891891defe82eace573c379d96a428b175db1a928e9b815bb1b0773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727730504093374150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01617edf-b831-48d3-9002-279b64f6389c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e4511e8755902041ec728c39a53350645fb5e31ed150b5935b3ee003b41f711,PodSandboxId:8147142912a2d88a8228bd307f69e3a6540c21d00f4f9618062853f36290d473,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727730484441413665,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f0eedc3-2026-4ba3-ac8e-784be7e51dbf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7,PodSandboxId:33a1a02b5819f89b582185170a53eab5bde7dfdf3a0cb0ea354e7b1a74d9111f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730480935801356,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jg8ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46ba2867-485a-4b67-af4b-4de2c607d172,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e,PodSandboxId:6aa9b9bcc891891defe82eace573c379d96a428b175db1a928e9b815bb1b0773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727730473266262316,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
1617edf-b831-48d3-9002-279b64f6389c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f,PodSandboxId:54da58cb4856ec108353c10a5a6f612ee192711d6459e265c06fab8a90da9dba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727730473268426087,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-klcv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 133bcd7f-667d-4969-b063-d33e2c8eed
0f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122,PodSandboxId:8fdadebc4632316c6851d6142b4a2951f4e762607a03802501113b27fb76d466,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727730468494614060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 909537799d377a7b5a56a4a5d684c97d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf,PodSandboxId:a96ee404058b8e9e5bb32c16fe21830aad9d481ffddd18dd8e660f7b77794911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727730468514319531,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4bbd39434baedeb326d3b6c5f0f
b7a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c,PodSandboxId:2d56c1daebf60b9201cbc515f8e1565fbdfc630ee552a17c531c57a3b85ad1d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727730468486940473,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf702a6b765256da0a8cd88a48f902d,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c,PodSandboxId:cee67c278b3f721f7d21238705e692223dd134b5ab39c248fc1ee94b239f3c89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727730468447781115,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9710f8be49235e7e38d661128fa5cb3a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1b987d9d-dc75-48e3-97f9-f302277f04bf name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:21:21 no-preload-997816 crio[707]: time="2024-09-30 21:21:21.012522774Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=97c98b9a-fa55-44c8-83de-4f82f0e2d7b3 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:21:21 no-preload-997816 crio[707]: time="2024-09-30 21:21:21.012635085Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=97c98b9a-fa55-44c8-83de-4f82f0e2d7b3 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:21:21 no-preload-997816 crio[707]: time="2024-09-30 21:21:21.020278185Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=201150c3-5480-4e73-ae0b-d508ac90d866 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:21:21 no-preload-997816 crio[707]: time="2024-09-30 21:21:21.020666695Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731281020642984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=201150c3-5480-4e73-ae0b-d508ac90d866 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:21:21 no-preload-997816 crio[707]: time="2024-09-30 21:21:21.021487262Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1fb1a80c-fc80-4cc9-85f3-4130822a0be6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:21:21 no-preload-997816 crio[707]: time="2024-09-30 21:21:21.021592069Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1fb1a80c-fc80-4cc9-85f3-4130822a0be6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:21:21 no-preload-997816 crio[707]: time="2024-09-30 21:21:21.021842950Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55,PodSandboxId:6aa9b9bcc891891defe82eace573c379d96a428b175db1a928e9b815bb1b0773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727730504093374150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01617edf-b831-48d3-9002-279b64f6389c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e4511e8755902041ec728c39a53350645fb5e31ed150b5935b3ee003b41f711,PodSandboxId:8147142912a2d88a8228bd307f69e3a6540c21d00f4f9618062853f36290d473,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727730484441413665,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f0eedc3-2026-4ba3-ac8e-784be7e51dbf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7,PodSandboxId:33a1a02b5819f89b582185170a53eab5bde7dfdf3a0cb0ea354e7b1a74d9111f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730480935801356,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jg8ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46ba2867-485a-4b67-af4b-4de2c607d172,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e,PodSandboxId:6aa9b9bcc891891defe82eace573c379d96a428b175db1a928e9b815bb1b0773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727730473266262316,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
1617edf-b831-48d3-9002-279b64f6389c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f,PodSandboxId:54da58cb4856ec108353c10a5a6f612ee192711d6459e265c06fab8a90da9dba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727730473268426087,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-klcv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 133bcd7f-667d-4969-b063-d33e2c8eed
0f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122,PodSandboxId:8fdadebc4632316c6851d6142b4a2951f4e762607a03802501113b27fb76d466,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727730468494614060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 909537799d377a7b5a56a4a5d684c97d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf,PodSandboxId:a96ee404058b8e9e5bb32c16fe21830aad9d481ffddd18dd8e660f7b77794911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727730468514319531,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4bbd39434baedeb326d3b6c5f0f
b7a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c,PodSandboxId:2d56c1daebf60b9201cbc515f8e1565fbdfc630ee552a17c531c57a3b85ad1d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727730468486940473,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf702a6b765256da0a8cd88a48f902d,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c,PodSandboxId:cee67c278b3f721f7d21238705e692223dd134b5ab39c248fc1ee94b239f3c89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727730468447781115,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9710f8be49235e7e38d661128fa5cb3a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1fb1a80c-fc80-4cc9-85f3-4130822a0be6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:21:21 no-preload-997816 crio[707]: time="2024-09-30 21:21:21.054424571Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dbae9f84-b8c4-4f9f-a0d1-39b5ad013c72 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:21:21 no-preload-997816 crio[707]: time="2024-09-30 21:21:21.054510737Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dbae9f84-b8c4-4f9f-a0d1-39b5ad013c72 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:21:21 no-preload-997816 crio[707]: time="2024-09-30 21:21:21.055880670Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9626a73-11b7-4768-8b32-7095625cc64d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:21:21 no-preload-997816 crio[707]: time="2024-09-30 21:21:21.056362952Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731281056338090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9626a73-11b7-4768-8b32-7095625cc64d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:21:21 no-preload-997816 crio[707]: time="2024-09-30 21:21:21.056963672Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be189e6d-55c7-4df7-b08f-992bbeac8ad8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:21:21 no-preload-997816 crio[707]: time="2024-09-30 21:21:21.057096434Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be189e6d-55c7-4df7-b08f-992bbeac8ad8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:21:21 no-preload-997816 crio[707]: time="2024-09-30 21:21:21.057357996Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55,PodSandboxId:6aa9b9bcc891891defe82eace573c379d96a428b175db1a928e9b815bb1b0773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727730504093374150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01617edf-b831-48d3-9002-279b64f6389c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e4511e8755902041ec728c39a53350645fb5e31ed150b5935b3ee003b41f711,PodSandboxId:8147142912a2d88a8228bd307f69e3a6540c21d00f4f9618062853f36290d473,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727730484441413665,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f0eedc3-2026-4ba3-ac8e-784be7e51dbf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7,PodSandboxId:33a1a02b5819f89b582185170a53eab5bde7dfdf3a0cb0ea354e7b1a74d9111f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730480935801356,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jg8ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46ba2867-485a-4b67-af4b-4de2c607d172,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e,PodSandboxId:6aa9b9bcc891891defe82eace573c379d96a428b175db1a928e9b815bb1b0773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727730473266262316,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
1617edf-b831-48d3-9002-279b64f6389c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f,PodSandboxId:54da58cb4856ec108353c10a5a6f612ee192711d6459e265c06fab8a90da9dba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727730473268426087,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-klcv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 133bcd7f-667d-4969-b063-d33e2c8eed
0f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122,PodSandboxId:8fdadebc4632316c6851d6142b4a2951f4e762607a03802501113b27fb76d466,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727730468494614060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 909537799d377a7b5a56a4a5d684c97d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf,PodSandboxId:a96ee404058b8e9e5bb32c16fe21830aad9d481ffddd18dd8e660f7b77794911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727730468514319531,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4bbd39434baedeb326d3b6c5f0f
b7a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c,PodSandboxId:2d56c1daebf60b9201cbc515f8e1565fbdfc630ee552a17c531c57a3b85ad1d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727730468486940473,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf702a6b765256da0a8cd88a48f902d,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c,PodSandboxId:cee67c278b3f721f7d21238705e692223dd134b5ab39c248fc1ee94b239f3c89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727730468447781115,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9710f8be49235e7e38d661128fa5cb3a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be189e6d-55c7-4df7-b08f-992bbeac8ad8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6dcf5ceb365ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   6aa9b9bcc8918       storage-provisioner
	3e4511e875590       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   8147142912a2d       busybox
	d730f13030b2a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   33a1a02b5819f       coredns-7c65d6cfc9-jg8ph
	a5ce5450390e9       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   54da58cb4856e       kube-proxy-klcv8
	298410b231e99       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   6aa9b9bcc8918       storage-provisioner
	1970803994e16       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   a96ee404058b8       kube-controller-manager-no-preload-997816
	249f183de7189       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   8fdadebc46323       kube-apiserver-no-preload-997816
	e7334f6f13787       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   2d56c1daebf60       etcd-no-preload-997816
	438729352d121       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   cee67c278b3f7       kube-scheduler-no-preload-997816
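Editor's note: the table above is the runtime's own view of the containers listed in the ListContainers responses earlier in this log. As a rough sketch (not part of the captured output), the same view can usually be reproduced on the VM with crictl, assuming crictl is configured for the CRI-O socket recorded in the node annotations below (unix:///var/run/crio/crio.sock) and that the profile name doubles as the minikube machine name:

    # shell into the test VM for this profile
    minikube ssh -p no-preload-997816
    # all containers, including the exited storage-provisioner attempt
    sudo crictl ps -a
    # logs of a specific container by ID prefix, e.g. the exited storage-provisioner
    sudo crictl logs 298410b231e99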
	
	
	==> coredns [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:53025 - 44760 "HINFO IN 5467919944529872735.4377248471549316289. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012756936s
	
	
	==> describe nodes <==
	Name:               no-preload-997816
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-997816
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=no-preload-997816
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T20_59_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:59:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-997816
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 21:21:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 21:18:34 +0000   Mon, 30 Sep 2024 20:59:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 21:18:34 +0000   Mon, 30 Sep 2024 20:59:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 21:18:34 +0000   Mon, 30 Sep 2024 20:59:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 21:18:34 +0000   Mon, 30 Sep 2024 21:08:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.93
	  Hostname:    no-preload-997816
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f572d23b1fe74d57b0f24d55888a67b9
	  System UUID:                f572d23b-1fe7-4d57-b0f2-4d55888a67b9
	  Boot ID:                    ebbcf1ad-afb4-49b6-ac01-3dbca546db82
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7c65d6cfc9-jg8ph                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-997816                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-997816             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-997816    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-klcv8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-997816             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-6867b74b74-c2wpn              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-997816 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-997816 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-997816 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                22m                kubelet          Node no-preload-997816 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node no-preload-997816 event: Registered Node no-preload-997816 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-997816 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-997816 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-997816 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-997816 event: Registered Node no-preload-997816 in Controller
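Editor's note: this block is node-level state at capture time; the Ready condition's LastTransitionTime (21:08:02) lines up with the kubelet restart visible in the events ("Starting kubelet." 13m before capture). As a sketch, assuming the kubeconfig context is named after the profile (as in the other kubectl invocations in this report), the same output can be regenerated with:

    kubectl --context no-preload-997816 describe node no-preload-997816
    kubectl --context no-preload-997816 get node no-preload-997816 -o wide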
	
	
	==> dmesg <==
	[Sep30 21:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051012] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036995] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.763788] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.941108] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.543005] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.202530] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.057053] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054445] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.180442] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.118354] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.298091] systemd-fstab-generator[698]: Ignoring "noauto" option for root device
	[ +15.219524] systemd-fstab-generator[1236]: Ignoring "noauto" option for root device
	[  +0.062318] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.881800] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[  +5.252218] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.851869] systemd-fstab-generator[1985]: Ignoring "noauto" option for root device
	[  +3.208937] kauditd_printk_skb: 61 callbacks suppressed
	[Sep30 21:08] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c] <==
	{"level":"info","ts":"2024-09-30T21:07:58.291168Z","caller":"traceutil/trace.go:171","msg":"trace[1920429232] range","detail":"{range_begin:/registry/rolebindings/kube-system/system:persistent-volume-provisioner; range_end:; response_count:1; response_revision:524; }","duration":"603.902319ms","start":"2024-09-30T21:07:57.687254Z","end":"2024-09-30T21:07:58.291156Z","steps":["trace[1920429232] 'agreement among raft nodes before linearized reading'  (duration: 603.17901ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T21:07:58.291203Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T21:07:57.687221Z","time spent":"603.96883ms","remote":"127.0.0.1:47810","response type":"/etcdserverpb.KV/Range","request count":0,"request size":73,"response count":1,"response size":1232,"request content":"key:\"/registry/rolebindings/kube-system/system:persistent-volume-provisioner\" "}
	{"level":"warn","ts":"2024-09-30T21:07:58.519503Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.836443ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12513975338625685759 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-controller-manager-no-preload-997816.17fa21b50924431c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-controller-manager-no-preload-997816.17fa21b50924431c\" value_size:762 lease:3290603301770909858 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-30T21:07:58.519645Z","caller":"traceutil/trace.go:171","msg":"trace[717274828] linearizableReadLoop","detail":"{readStateIndex:565; appliedIndex:563; }","duration":"190.717285ms","start":"2024-09-30T21:07:58.328918Z","end":"2024-09-30T21:07:58.519635Z","steps":["trace[717274828] 'read index received'  (duration: 68.797351ms)","trace[717274828] 'applied index is now lower than readState.Index'  (duration: 121.919532ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-30T21:07:58.519709Z","caller":"traceutil/trace.go:171","msg":"trace[1884059801] transaction","detail":"{read_only:false; response_revision:525; number_of_response:1; }","duration":"219.318571ms","start":"2024-09-30T21:07:58.300382Z","end":"2024-09-30T21:07:58.519701Z","steps":["trace[1884059801] 'process raft request'  (duration: 97.239364ms)","trace[1884059801] 'compare'  (duration: 121.732837ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-30T21:07:58.519856Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.920632ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2024-09-30T21:07:58.522095Z","caller":"traceutil/trace.go:171","msg":"trace[534349170] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:526; }","duration":"193.164263ms","start":"2024-09-30T21:07:58.328913Z","end":"2024-09-30T21:07:58.522078Z","steps":["trace[534349170] 'agreement among raft nodes before linearized reading'  (duration: 190.802175ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T21:07:58.519976Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.498216ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregated-metrics-reader\" ","response":"range_response_count:1 size:1488"}
	{"level":"warn","ts":"2024-09-30T21:07:58.520167Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.253455ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-997816\" ","response":"range_response_count:1 size:4645"}
	{"level":"info","ts":"2024-09-30T21:07:58.520339Z","caller":"traceutil/trace.go:171","msg":"trace[1379085076] transaction","detail":"{read_only:false; response_revision:526; number_of_response:1; }","duration":"213.920725ms","start":"2024-09-30T21:07:58.306408Z","end":"2024-09-30T21:07:58.520329Z","steps":["trace[1379085076] 'process raft request'  (duration: 213.180469ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T21:07:58.522829Z","caller":"traceutil/trace.go:171","msg":"trace[2057448815] range","detail":"{range_begin:/registry/clusterroles/system:aggregated-metrics-reader; range_end:; response_count:1; response_revision:526; }","duration":"129.348113ms","start":"2024-09-30T21:07:58.393466Z","end":"2024-09-30T21:07:58.522814Z","steps":["trace[2057448815] 'agreement among raft nodes before linearized reading'  (duration: 126.472179ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T21:07:58.523124Z","caller":"traceutil/trace.go:171","msg":"trace[101090542] range","detail":"{range_begin:/registry/minions/no-preload-997816; range_end:; response_count:1; response_revision:526; }","duration":"115.208059ms","start":"2024-09-30T21:07:58.407905Z","end":"2024-09-30T21:07:58.523113Z","steps":["trace[101090542] 'agreement among raft nodes before linearized reading'  (duration: 112.183515ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T21:08:39.440233Z","caller":"traceutil/trace.go:171","msg":"trace[2048793296] transaction","detail":"{read_only:false; response_revision:619; number_of_response:1; }","duration":"572.269489ms","start":"2024-09-30T21:08:38.867918Z","end":"2024-09-30T21:08:39.440187Z","steps":["trace[2048793296] 'process raft request'  (duration: 572.14677ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T21:08:39.440489Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T21:08:38.867897Z","time spent":"572.505936ms","remote":"127.0.0.1:47654","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4324,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-6867b74b74-c2wpn\" mod_revision:614 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-6867b74b74-c2wpn\" value_size:4258 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-6867b74b74-c2wpn\" > >"}
	{"level":"info","ts":"2024-09-30T21:08:39.447220Z","caller":"traceutil/trace.go:171","msg":"trace[1426270632] linearizableReadLoop","detail":"{readStateIndex:667; appliedIndex:666; }","duration":"492.167475ms","start":"2024-09-30T21:08:38.955037Z","end":"2024-09-30T21:08:39.447205Z","steps":["trace[1426270632] 'read index received'  (duration: 486.034534ms)","trace[1426270632] 'applied index is now lower than readState.Index'  (duration: 6.132341ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-30T21:08:39.447247Z","caller":"traceutil/trace.go:171","msg":"trace[1076090188] transaction","detail":"{read_only:false; response_revision:620; number_of_response:1; }","duration":"578.70643ms","start":"2024-09-30T21:08:38.868525Z","end":"2024-09-30T21:08:39.447231Z","steps":["trace[1076090188] 'process raft request'  (duration: 578.585467ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T21:08:39.447375Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T21:08:38.868513Z","time spent":"578.802992ms","remote":"127.0.0.1:47550","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":802,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-6867b74b74-c2wpn.17fa21b787a2abcc\" mod_revision:597 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-6867b74b74-c2wpn.17fa21b787a2abcc\" value_size:707 lease:3290603301770909858 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-6867b74b74-c2wpn.17fa21b787a2abcc\" > >"}
	{"level":"warn","ts":"2024-09-30T21:08:39.447456Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"492.456016ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-c2wpn\" ","response":"range_response_count:1 size:4339"}
	{"level":"info","ts":"2024-09-30T21:08:39.447501Z","caller":"traceutil/trace.go:171","msg":"trace[1320089805] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-c2wpn; range_end:; response_count:1; response_revision:620; }","duration":"492.509273ms","start":"2024-09-30T21:08:38.954985Z","end":"2024-09-30T21:08:39.447495Z","steps":["trace[1320089805] 'agreement among raft nodes before linearized reading'  (duration: 492.288459ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T21:08:39.447556Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T21:08:38.954951Z","time spent":"492.598285ms","remote":"127.0.0.1:47654","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4361,"request content":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-c2wpn\" "}
	{"level":"warn","ts":"2024-09-30T21:08:39.447566Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"465.383363ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T21:08:39.447983Z","caller":"traceutil/trace.go:171","msg":"trace[1759551358] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:620; }","duration":"465.799156ms","start":"2024-09-30T21:08:38.982176Z","end":"2024-09-30T21:08:39.447975Z","steps":["trace[1759551358] 'agreement among raft nodes before linearized reading'  (duration: 465.373988ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T21:17:50.750354Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":839}
	{"level":"info","ts":"2024-09-30T21:17:50.762227Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":839,"took":"11.016676ms","hash":3124043443,"current-db-size-bytes":2764800,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2764800,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-09-30T21:17:50.762359Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3124043443,"revision":839,"compact-revision":-1}
	
	
	==> kernel <==
	 21:21:21 up 14 min,  0 users,  load average: 0.04, 0.15, 0.12
	Linux no-preload-997816 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122] <==
	W0930 21:17:53.433318       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:17:53.433773       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0930 21:17:53.435278       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0930 21:17:53.435362       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0930 21:18:53.436568       1 handler_proxy.go:99] no RequestInfo found in the context
	W0930 21:18:53.436783       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:18:53.436784       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0930 21:18:53.436854       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0930 21:18:53.438071       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0930 21:18:53.438082       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0930 21:20:53.438826       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:20:53.439045       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0930 21:20:53.438976       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:20:53.439127       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0930 21:20:53.440254       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0930 21:20:53.440297       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
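Editor's note: the recurring 503s for v1beta1.metrics.k8s.io mean the aggregated metrics API is registered but its backing service is unreachable, which matches the metrics-server pod stuck in ImagePullBackOff in the kubelet section further down (the addon image is deliberately pointed at fake.domain in this test). A hedged way to confirm from outside the node, using the pod name that appears in the logs themselves:

    kubectl --context no-preload-997816 get apiservice v1beta1.metrics.k8s.io
    kubectl --context no-preload-997816 -n kube-system get pod metrics-server-6867b74b74-c2wpn -o wide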
	
	
	==> kube-controller-manager [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf] <==
	E0930 21:15:56.049938       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:15:56.490647       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:16:26.056291       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:16:26.498694       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:16:56.062403       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:16:56.508396       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:17:26.068559       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:17:26.516271       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:17:56.081656       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:17:56.523973       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:18:26.088763       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:18:26.533265       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0930 21:18:34.843540       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-997816"
	E0930 21:18:56.095784       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:18:56.541830       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0930 21:19:01.877022       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="240.228µs"
	I0930 21:19:15.874781       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="197.698µs"
	E0930 21:19:26.102187       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:19:26.549434       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:19:56.112489       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:19:56.558600       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:20:26.120371       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:20:26.565411       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:20:56.127695       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:20:56.574320       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 21:07:53.718933       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 21:07:53.744146       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.93"]
	E0930 21:07:53.744240       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 21:07:53.782382       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 21:07:53.782467       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 21:07:53.782492       1 server_linux.go:169] "Using iptables Proxier"
	I0930 21:07:53.785794       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 21:07:53.786641       1 server.go:483] "Version info" version="v1.31.1"
	I0930 21:07:53.786714       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 21:07:53.790873       1 config.go:199] "Starting service config controller"
	I0930 21:07:53.791413       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 21:07:53.791476       1 config.go:328] "Starting node config controller"
	I0930 21:07:53.791484       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 21:07:53.792307       1 config.go:105] "Starting endpoint slice config controller"
	I0930 21:07:53.796799       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 21:07:53.892150       1 shared_informer.go:320] Caches are synced for service config
	I0930 21:07:53.892250       1 shared_informer.go:320] Caches are synced for node config
	I0930 21:07:53.897706       1 shared_informer.go:320] Caches are synced for endpoint slice config
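Editor's note: the truncated nftables cleanup errors at the top of this section ("Operation not supported" for the ip and ip6 kube-proxy tables) are non-fatal; kube-proxy falls back to the iptables proxier ("Using iptables Proxier") and skips IPv6 because the kernel lacks ip6tables NAT support, the same limitation behind the kubelet's ip6tables canary errors below. To re-check the active proxy mode from the captured pod (sketch; pod name from the log header, context assumed to match the profile):

    kubectl --context no-preload-997816 -n kube-system logs kube-proxy-klcv8 | grep -E 'Proxier|nftables'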
	
	
	==> kube-scheduler [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c] <==
	I0930 21:07:50.648341       1 serving.go:386] Generated self-signed cert in-memory
	W0930 21:07:52.340044       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0930 21:07:52.340207       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0930 21:07:52.340299       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0930 21:07:52.340334       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0930 21:07:52.448521       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0930 21:07:52.449042       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 21:07:52.457582       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0930 21:07:52.457892       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0930 21:07:52.458121       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0930 21:07:52.458634       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0930 21:07:52.564671       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 21:20:10 no-preload-997816 kubelet[1362]: E0930 21:20:10.858045    1362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-c2wpn" podUID="2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82"
	Sep 30 21:20:18 no-preload-997816 kubelet[1362]: E0930 21:20:18.076836    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731218076494075,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:20:18 no-preload-997816 kubelet[1362]: E0930 21:20:18.077290    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731218076494075,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:20:23 no-preload-997816 kubelet[1362]: E0930 21:20:23.859629    1362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-c2wpn" podUID="2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82"
	Sep 30 21:20:28 no-preload-997816 kubelet[1362]: E0930 21:20:28.079423    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731228079062343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:20:28 no-preload-997816 kubelet[1362]: E0930 21:20:28.079449    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731228079062343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:20:37 no-preload-997816 kubelet[1362]: E0930 21:20:37.858882    1362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-c2wpn" podUID="2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82"
	Sep 30 21:20:38 no-preload-997816 kubelet[1362]: E0930 21:20:38.081964    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731238081383967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:20:38 no-preload-997816 kubelet[1362]: E0930 21:20:38.082058    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731238081383967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:20:47 no-preload-997816 kubelet[1362]: E0930 21:20:47.892392    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 21:20:47 no-preload-997816 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 21:20:47 no-preload-997816 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 21:20:47 no-preload-997816 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 21:20:47 no-preload-997816 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 21:20:48 no-preload-997816 kubelet[1362]: E0930 21:20:48.083581    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731248083267626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:20:48 no-preload-997816 kubelet[1362]: E0930 21:20:48.083623    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731248083267626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:20:48 no-preload-997816 kubelet[1362]: E0930 21:20:48.858251    1362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-c2wpn" podUID="2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82"
	Sep 30 21:20:58 no-preload-997816 kubelet[1362]: E0930 21:20:58.085465    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731258085046307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:20:58 no-preload-997816 kubelet[1362]: E0930 21:20:58.085788    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731258085046307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:21:03 no-preload-997816 kubelet[1362]: E0930 21:21:03.858719    1362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-c2wpn" podUID="2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82"
	Sep 30 21:21:08 no-preload-997816 kubelet[1362]: E0930 21:21:08.088219    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731268087740081,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:21:08 no-preload-997816 kubelet[1362]: E0930 21:21:08.088260    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731268087740081,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:21:15 no-preload-997816 kubelet[1362]: E0930 21:21:15.858822    1362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-c2wpn" podUID="2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82"
	Sep 30 21:21:18 no-preload-997816 kubelet[1362]: E0930 21:21:18.090102    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731278089420218,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:21:18 no-preload-997816 kubelet[1362]: E0930 21:21:18.090460    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731278089420218,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e] <==
	I0930 21:07:53.490486       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0930 21:08:23.494094       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55] <==
	I0930 21:08:24.187080       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0930 21:08:24.197839       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0930 21:08:24.197914       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0930 21:08:41.595770       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0930 21:08:41.595909       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-997816_e88cb4cb-9add-4a3c-a8e3-f398658279d5!
	I0930 21:08:41.596361       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1a28e79c-ce2e-4eb8-a175-ad56e6ab22b2", APIVersion:"v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-997816_e88cb4cb-9add-4a3c-a8e3-f398658279d5 became leader
	I0930 21:08:41.696101       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-997816_e88cb4cb-9add-4a3c-a8e3-f398658279d5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-997816 -n no-preload-997816
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-997816 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-c2wpn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-997816 describe pod metrics-server-6867b74b74-c2wpn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-997816 describe pod metrics-server-6867b74b74-c2wpn: exit status 1 (62.90113ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-c2wpn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-997816 describe pod metrics-server-6867b74b74-c2wpn: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0930 21:12:51.838843   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/enable-default-cni-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:13:08.419820   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:13:15.484711   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/bridge-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:13:28.936159   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-291511 -n default-k8s-diff-port-291511
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-30 21:21:27.682162637 +0000 UTC m=+6208.386918766
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
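For reference: the failed wait above looks for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace for up to 9m0s. A rough manual equivalent with kubectl (a sketch, not the test's actual client-go code; it assumes the default-k8s-diff-port-291511 context from this run is still available) would be:

	# list whatever currently matches the label the test waits on
	kubectl --context default-k8s-diff-port-291511 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# block until a matching pod is Ready, with the same 9m budget the test uses
	kubectl --context default-k8s-diff-port-291511 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m

If no pod carries that label at all (as in this run), recent kubectl versions exit immediately with a "no matching resources found" error instead of blocking for the full timeout.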
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-291511 -n default-k8s-diff-port-291511
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-291511 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-291511 logs -n 25: (2.130908567s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-207733 sudo                                 | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo                                 | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo                                 | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo find                            | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo crio                            | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-207733                                      | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-741890 | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | disable-driver-mounts-741890                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 21:00 UTC |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-256103            | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-997816             | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-997816                                   | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-291511  | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-621406        | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-256103                 | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC | 30 Sep 24 21:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-997816                  | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-997816                                   | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC | 30 Sep 24 21:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-291511       | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:12 UTC |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-621406                              | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:03 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-621406             | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-621406                              | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 21:03:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 21:03:42.750102   73900 out.go:345] Setting OutFile to fd 1 ...
	I0930 21:03:42.750367   73900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:03:42.750377   73900 out.go:358] Setting ErrFile to fd 2...
	I0930 21:03:42.750383   73900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:03:42.750578   73900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 21:03:42.751109   73900 out.go:352] Setting JSON to false
	I0930 21:03:42.752040   73900 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6366,"bootTime":1727723857,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 21:03:42.752140   73900 start.go:139] virtualization: kvm guest
	I0930 21:03:42.754146   73900 out.go:177] * [old-k8s-version-621406] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 21:03:42.755446   73900 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 21:03:42.755456   73900 notify.go:220] Checking for updates...
	I0930 21:03:42.758261   73900 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 21:03:42.759566   73900 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:03:42.760907   73900 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 21:03:42.762342   73900 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 21:03:42.763561   73900 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 21:03:42.765356   73900 config.go:182] Loaded profile config "old-k8s-version-621406": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0930 21:03:42.765773   73900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:03:42.765822   73900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:03:42.780605   73900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45071
	I0930 21:03:42.781022   73900 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:03:42.781550   73900 main.go:141] libmachine: Using API Version  1
	I0930 21:03:42.781583   73900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:03:42.781912   73900 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:03:42.782160   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:03:42.784603   73900 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0930 21:03:42.785760   73900 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 21:03:42.786115   73900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:03:42.786156   73900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:03:42.800937   73900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37359
	I0930 21:03:42.801409   73900 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:03:42.801882   73900 main.go:141] libmachine: Using API Version  1
	I0930 21:03:42.801905   73900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:03:42.802216   73900 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:03:42.802397   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:03:42.838423   73900 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 21:03:42.839832   73900 start.go:297] selected driver: kvm2
	I0930 21:03:42.839847   73900 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:03:42.839953   73900 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 21:03:42.840605   73900 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 21:03:42.840667   73900 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 21:03:42.856119   73900 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 21:03:42.856550   73900 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:03:42.856580   73900 cni.go:84] Creating CNI manager for ""
	I0930 21:03:42.856630   73900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:03:42.856665   73900 start.go:340] cluster config:
	{Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:03:42.856778   73900 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 21:03:42.858732   73900 out.go:177] * Starting "old-k8s-version-621406" primary control-plane node in "old-k8s-version-621406" cluster
	I0930 21:03:42.859876   73900 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 21:03:42.859912   73900 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0930 21:03:42.859929   73900 cache.go:56] Caching tarball of preloaded images
	I0930 21:03:42.860020   73900 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 21:03:42.860031   73900 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0930 21:03:42.860153   73900 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/config.json ...
	I0930 21:03:42.860340   73900 start.go:360] acquireMachinesLock for old-k8s-version-621406: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 21:03:44.619810   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:03:47.691872   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:03:53.771838   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:03:56.843848   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:02.923822   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:05.995871   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:12.075814   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:15.147854   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:21.227790   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:24.299842   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:30.379801   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:33.451787   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:39.531808   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:42.603838   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:48.683904   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:51.755939   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:57.835834   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:00.907789   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:06.987875   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:10.059892   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:16.139832   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:19.211908   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:25.291812   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:28.363915   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:34.443827   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:37.515928   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:43.595824   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:46.667934   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:52.747851   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:55.819883   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:01.899789   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:04.971946   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:11.051812   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:14.123833   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:20.203805   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:23.275875   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:29.355806   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:32.427931   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:38.507837   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:41.579909   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:47.659786   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:50.731827   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:56.811833   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:59.883878   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:07:05.963833   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:07:09.035828   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:07:12.040058   73375 start.go:364] duration metric: took 4m26.951572628s to acquireMachinesLock for "no-preload-997816"
	I0930 21:07:12.040115   73375 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:07:12.040126   73375 fix.go:54] fixHost starting: 
	I0930 21:07:12.040448   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:12.040485   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:12.057054   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37473
	I0930 21:07:12.057624   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:12.058143   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:12.058173   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:12.058523   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:12.058739   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:12.058873   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:12.060479   73375 fix.go:112] recreateIfNeeded on no-preload-997816: state=Stopped err=<nil>
	I0930 21:07:12.060499   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	W0930 21:07:12.060640   73375 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:07:12.062653   73375 out.go:177] * Restarting existing kvm2 VM for "no-preload-997816" ...
	I0930 21:07:12.037683   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:07:12.037732   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:07:12.038031   73256 buildroot.go:166] provisioning hostname "embed-certs-256103"
	I0930 21:07:12.038055   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:07:12.038234   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:07:12.039910   73256 machine.go:96] duration metric: took 4m37.42208497s to provisionDockerMachine
	I0930 21:07:12.039954   73256 fix.go:56] duration metric: took 4m37.444804798s for fixHost
	I0930 21:07:12.039962   73256 start.go:83] releasing machines lock for "embed-certs-256103", held for 4m37.444833727s
	W0930 21:07:12.039989   73256 start.go:714] error starting host: provision: host is not running
	W0930 21:07:12.040104   73256 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0930 21:07:12.040116   73256 start.go:729] Will try again in 5 seconds ...
	I0930 21:07:12.063941   73375 main.go:141] libmachine: (no-preload-997816) Calling .Start
	I0930 21:07:12.064167   73375 main.go:141] libmachine: (no-preload-997816) Ensuring networks are active...
	I0930 21:07:12.065080   73375 main.go:141] libmachine: (no-preload-997816) Ensuring network default is active
	I0930 21:07:12.065489   73375 main.go:141] libmachine: (no-preload-997816) Ensuring network mk-no-preload-997816 is active
	I0930 21:07:12.065993   73375 main.go:141] libmachine: (no-preload-997816) Getting domain xml...
	I0930 21:07:12.066923   73375 main.go:141] libmachine: (no-preload-997816) Creating domain...
	I0930 21:07:13.297091   73375 main.go:141] libmachine: (no-preload-997816) Waiting to get IP...
	I0930 21:07:13.297965   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:13.298386   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:13.298473   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:13.298370   74631 retry.go:31] will retry after 312.032565ms: waiting for machine to come up
	I0930 21:07:13.612088   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:13.612583   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:13.612607   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:13.612519   74631 retry.go:31] will retry after 292.985742ms: waiting for machine to come up
	I0930 21:07:13.907355   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:13.907794   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:13.907817   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:13.907754   74631 retry.go:31] will retry after 451.618632ms: waiting for machine to come up
	I0930 21:07:14.361536   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:14.361990   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:14.362054   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:14.361947   74631 retry.go:31] will retry after 599.246635ms: waiting for machine to come up
	I0930 21:07:14.962861   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:14.963341   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:14.963369   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:14.963294   74631 retry.go:31] will retry after 748.726096ms: waiting for machine to come up
	I0930 21:07:17.040758   73256 start.go:360] acquireMachinesLock for embed-certs-256103: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 21:07:15.713258   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:15.713576   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:15.713601   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:15.713525   74631 retry.go:31] will retry after 907.199669ms: waiting for machine to come up
	I0930 21:07:16.622784   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:16.623275   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:16.623307   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:16.623211   74631 retry.go:31] will retry after 744.978665ms: waiting for machine to come up
	I0930 21:07:17.369735   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:17.370206   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:17.370231   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:17.370154   74631 retry.go:31] will retry after 1.238609703s: waiting for machine to come up
	I0930 21:07:18.610618   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:18.610967   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:18.610989   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:18.610928   74631 retry.go:31] will retry after 1.354775356s: waiting for machine to come up
	I0930 21:07:19.967473   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:19.967892   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:19.967916   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:19.967851   74631 retry.go:31] will retry after 2.26449082s: waiting for machine to come up
	I0930 21:07:22.234066   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:22.234514   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:22.234536   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:22.234474   74631 retry.go:31] will retry after 2.728158374s: waiting for machine to come up
	I0930 21:07:24.966375   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:24.966759   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:24.966782   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:24.966724   74631 retry.go:31] will retry after 3.119117729s: waiting for machine to come up
	I0930 21:07:29.336238   73707 start.go:364] duration metric: took 3m58.92874513s to acquireMachinesLock for "default-k8s-diff-port-291511"
	I0930 21:07:29.336327   73707 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:07:29.336347   73707 fix.go:54] fixHost starting: 
	I0930 21:07:29.336726   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:29.336779   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:29.354404   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45095
	I0930 21:07:29.354848   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:29.355331   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:07:29.355352   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:29.355882   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:29.356081   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:29.356249   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:07:29.358109   73707 fix.go:112] recreateIfNeeded on default-k8s-diff-port-291511: state=Stopped err=<nil>
	I0930 21:07:29.358155   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	W0930 21:07:29.358336   73707 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:07:29.361072   73707 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-291511" ...
	I0930 21:07:28.087153   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.087604   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has current primary IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.087636   73375 main.go:141] libmachine: (no-preload-997816) Found IP for machine: 192.168.61.93
	I0930 21:07:28.087644   73375 main.go:141] libmachine: (no-preload-997816) Reserving static IP address...
	I0930 21:07:28.088047   73375 main.go:141] libmachine: (no-preload-997816) Reserved static IP address: 192.168.61.93
	I0930 21:07:28.088068   73375 main.go:141] libmachine: (no-preload-997816) Waiting for SSH to be available...
	I0930 21:07:28.088090   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "no-preload-997816", mac: "52:54:00:cb:3d:73", ip: "192.168.61.93"} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.088158   73375 main.go:141] libmachine: (no-preload-997816) DBG | skip adding static IP to network mk-no-preload-997816 - found existing host DHCP lease matching {name: "no-preload-997816", mac: "52:54:00:cb:3d:73", ip: "192.168.61.93"}
	I0930 21:07:28.088181   73375 main.go:141] libmachine: (no-preload-997816) DBG | Getting to WaitForSSH function...
	I0930 21:07:28.090195   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.090522   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.090547   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.090722   73375 main.go:141] libmachine: (no-preload-997816) DBG | Using SSH client type: external
	I0930 21:07:28.090739   73375 main.go:141] libmachine: (no-preload-997816) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa (-rw-------)
	I0930 21:07:28.090767   73375 main.go:141] libmachine: (no-preload-997816) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.93 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:07:28.090787   73375 main.go:141] libmachine: (no-preload-997816) DBG | About to run SSH command:
	I0930 21:07:28.090801   73375 main.go:141] libmachine: (no-preload-997816) DBG | exit 0
	I0930 21:07:28.211669   73375 main.go:141] libmachine: (no-preload-997816) DBG | SSH cmd err, output: <nil>: 
	I0930 21:07:28.212073   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetConfigRaw
	I0930 21:07:28.212714   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetIP
	I0930 21:07:28.215442   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.215934   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.215951   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.216186   73375 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/config.json ...
	I0930 21:07:28.216370   73375 machine.go:93] provisionDockerMachine start ...
	I0930 21:07:28.216386   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:28.216575   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.218963   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.219423   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.219455   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.219604   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.219770   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.219948   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.220057   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.220252   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:28.220441   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:28.220452   73375 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:07:28.315814   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:07:28.315853   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetMachineName
	I0930 21:07:28.316131   73375 buildroot.go:166] provisioning hostname "no-preload-997816"
	I0930 21:07:28.316161   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetMachineName
	I0930 21:07:28.316372   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.319253   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.319506   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.319548   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.319711   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.319903   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.320057   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.320182   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.320383   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:28.320592   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:28.320606   73375 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-997816 && echo "no-preload-997816" | sudo tee /etc/hostname
	I0930 21:07:28.433652   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-997816
	
	I0930 21:07:28.433686   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.436989   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.437350   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.437389   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.437611   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.437784   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.437957   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.438075   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.438267   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:28.438487   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:28.438512   73375 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-997816' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-997816/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-997816' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:07:28.544056   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
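The hostname provisioning step above sets the VM hostname over SSH and then patches /etc/hosts only when the 127.0.1.1 entry is missing or stale. A sketch that assembles the same idempotent shell snippet; in the log this string is executed over SSH rather than printed, and the hostname value is copied from the lines above:

    package main

    import "fmt"

    func main() {
        hostname := "no-preload-997816" // taken from the log above

        // The same idempotent /etc/hosts update the log shows being run over SSH:
        // only rewrite or append the 127.0.1.1 line when the hostname is missing.
        script := fmt.Sprintf(`
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
            sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
        else
            echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
        fi
    fi`, hostname)

        fmt.Println(script) // in minikube this string is executed over SSH instead
    }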
	I0930 21:07:28.544088   73375 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:07:28.544112   73375 buildroot.go:174] setting up certificates
	I0930 21:07:28.544122   73375 provision.go:84] configureAuth start
	I0930 21:07:28.544135   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetMachineName
	I0930 21:07:28.544418   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetIP
	I0930 21:07:28.546960   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.547363   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.547384   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.547570   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.549918   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.550325   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.550353   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.550535   73375 provision.go:143] copyHostCerts
	I0930 21:07:28.550612   73375 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:07:28.550627   73375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:07:28.550711   73375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:07:28.550804   73375 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:07:28.550812   73375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:07:28.550837   73375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:07:28.550893   73375 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:07:28.550900   73375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:07:28.550920   73375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:07:28.550967   73375 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.no-preload-997816 san=[127.0.0.1 192.168.61.93 localhost minikube no-preload-997816]
	I0930 21:07:28.744306   73375 provision.go:177] copyRemoteCerts
	I0930 21:07:28.744364   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:07:28.744386   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.747024   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.747368   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.747401   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.747615   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.747813   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.747973   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.748133   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:28.825616   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0930 21:07:28.849513   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 21:07:28.872666   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:07:28.895673   73375 provision.go:87] duration metric: took 351.536833ms to configureAuth
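configureAuth above regenerates the docker-machine server certificate with SANs covering 127.0.0.1, the VM IP, localhost, minikube, and the machine name (the san=[...] line earlier). A compact crypto/x509 sketch of issuing a certificate with those SANs; it is self-signed for brevity, whereas minikube signs with the CA key under .minikube/certs, and the values are copied from the log:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // SAN list mirroring the provision.go line in the log above.
        sans := []string{"localhost", "minikube", "no-preload-997816"}
        ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.93")}

        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-997816"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     sans,
            IPAddresses:  ips,
        }

        // Self-signed here to keep the sketch short; minikube signs with its CA key instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            panic(err)
        }
    }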
	I0930 21:07:28.895708   73375 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:07:28.895896   73375 config.go:182] Loaded profile config "no-preload-997816": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:07:28.895975   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.898667   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.899067   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.899098   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.899324   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.899567   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.899703   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.899829   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.899946   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:28.900120   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:28.900134   73375 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:07:29.113855   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:07:29.113877   73375 machine.go:96] duration metric: took 897.495238ms to provisionDockerMachine
	I0930 21:07:29.113887   73375 start.go:293] postStartSetup for "no-preload-997816" (driver="kvm2")
	I0930 21:07:29.113897   73375 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:07:29.113921   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.114220   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:07:29.114254   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:29.117274   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.117619   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.117663   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.117816   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:29.118010   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.118159   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:29.118289   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:29.197962   73375 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:07:29.202135   73375 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:07:29.202166   73375 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:07:29.202237   73375 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:07:29.202321   73375 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:07:29.202406   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:07:29.211693   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:07:29.234503   73375 start.go:296] duration metric: took 120.601484ms for postStartSetup
	I0930 21:07:29.234582   73375 fix.go:56] duration metric: took 17.194433455s for fixHost
	I0930 21:07:29.234610   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:29.237134   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.237544   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.237574   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.237728   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:29.237912   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.238085   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.238199   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:29.238348   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:29.238506   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:29.238515   73375 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:07:29.336092   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730449.310327649
	
	I0930 21:07:29.336114   73375 fix.go:216] guest clock: 1727730449.310327649
	I0930 21:07:29.336123   73375 fix.go:229] Guest: 2024-09-30 21:07:29.310327649 +0000 UTC Remote: 2024-09-30 21:07:29.234588814 +0000 UTC m=+284.288095935 (delta=75.738835ms)
	I0930 21:07:29.336147   73375 fix.go:200] guest clock delta is within tolerance: 75.738835ms
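The fix.go lines above read the guest clock with `date +%s.%N` over SSH and accept the result when the delta against the host clock stays inside a tolerance. A small sketch of that comparison, assuming the guest timestamp has already been captured; the tolerance value here is illustrative, not minikube's:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    func main() {
        // Output of `date +%s.%N` captured from the guest (value from the log above).
        guestRaw := "1727730449.310327649"

        secs, err := strconv.ParseFloat(guestRaw, 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))

        delta := time.Since(guest)
        tolerance := 2 * time.Second // illustrative threshold

        if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
        }
    }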
	I0930 21:07:29.336153   73375 start.go:83] releasing machines lock for "no-preload-997816", held for 17.296055752s
	I0930 21:07:29.336194   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.336478   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetIP
	I0930 21:07:29.339488   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.339864   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.339909   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.340070   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.340525   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.340697   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.340800   73375 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:07:29.340836   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:29.340930   73375 ssh_runner.go:195] Run: cat /version.json
	I0930 21:07:29.340955   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:29.343579   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.343941   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.343976   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.344010   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.344228   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:29.344405   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.344441   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.344471   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.344543   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:29.344616   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:29.344689   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:29.344784   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.344966   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:29.345105   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:29.420949   73375 ssh_runner.go:195] Run: systemctl --version
	I0930 21:07:29.465854   73375 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:07:29.616360   73375 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:07:29.624522   73375 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:07:29.624604   73375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:07:29.642176   73375 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 21:07:29.642202   73375 start.go:495] detecting cgroup driver to use...
	I0930 21:07:29.642279   73375 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:07:29.657878   73375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:07:29.674555   73375 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:07:29.674614   73375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:07:29.690953   73375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:07:29.705425   73375 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:07:29.814602   73375 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:07:29.957009   73375 docker.go:233] disabling docker service ...
	I0930 21:07:29.957091   73375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:07:29.971419   73375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:07:29.362775   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Start
	I0930 21:07:29.363023   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Ensuring networks are active...
	I0930 21:07:29.364071   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Ensuring network default is active
	I0930 21:07:29.364456   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Ensuring network mk-default-k8s-diff-port-291511 is active
	I0930 21:07:29.364940   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Getting domain xml...
	I0930 21:07:29.365759   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Creating domain...
	I0930 21:07:29.987509   73375 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:07:30.112952   73375 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:07:30.239945   73375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:07:30.253298   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:07:30.271687   73375 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 21:07:30.271768   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.282267   73375 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:07:30.282339   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.292776   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.303893   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.315002   73375 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:07:30.326410   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.336951   73375 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.356016   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
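The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroupfs as cgroup manager, conmon_cgroup, and the unprivileged-port sysctl. A sketch that replays edits of the same shape through a shell with os/exec; the path and sed expressions mirror the log, but this is an illustration of the flow rather than minikube's code, and it needs root to actually change the file:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"

        // The same kind of in-place edits the log shows, applied in order.
        edits := []string{
            fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' %s`, conf),
            fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
            fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
            fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
        }

        for _, e := range edits {
            // Each edit is run through a shell, as ssh_runner does on the VM.
            out, err := exec.Command("sh", "-c", e).CombinedOutput()
            if err != nil {
                fmt.Printf("edit failed: %v\n%s\n", err, out)
                return
            }
        }
        fmt.Println("crio.conf.d updated; restart crio to apply")
    }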
	I0930 21:07:30.367847   73375 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:07:30.378650   73375 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:07:30.378703   73375 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:07:30.391768   73375 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 21:07:30.401887   73375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:30.534771   73375 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 21:07:30.622017   73375 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:07:30.622087   73375 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:07:30.627221   73375 start.go:563] Will wait 60s for crictl version
	I0930 21:07:30.627294   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:30.633071   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:07:30.675743   73375 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
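Before continuing, minikube waits for the CRI socket and reads the runtime version via `sudo /usr/bin/crictl version`, whose key/value output is shown above. A small sketch of parsing that output into fields; the parsing approach is illustrative, not the parser minikube uses:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func main() {
        // Output shape of `crictl version`, as captured in the log above.
        raw := `Version:  0.1.0
    RuntimeName:  cri-o
    RuntimeVersion:  1.29.1
    RuntimeApiVersion:  v1`

        fields := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(raw))
        for sc.Scan() {
            key, val, ok := strings.Cut(sc.Text(), ":")
            if !ok {
                continue
            }
            fields[strings.TrimSpace(key)] = strings.TrimSpace(val)
        }

        // The runtime name and version drive which CRI-O defaults get configured.
        fmt.Printf("runtime %s %s (CRI API %s)\n",
            fields["RuntimeName"], fields["RuntimeVersion"], fields["RuntimeApiVersion"])
    }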
	I0930 21:07:30.675830   73375 ssh_runner.go:195] Run: crio --version
	I0930 21:07:30.703470   73375 ssh_runner.go:195] Run: crio --version
	I0930 21:07:30.732424   73375 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 21:07:30.733714   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetIP
	I0930 21:07:30.737016   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:30.737380   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:30.737421   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:30.737690   73375 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0930 21:07:30.741714   73375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:07:30.754767   73375 kubeadm.go:883] updating cluster {Name:no-preload-997816 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-997816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:07:30.754892   73375 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 21:07:30.754941   73375 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:07:30.794489   73375 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 21:07:30.794516   73375 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
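With no preload tarball available, LoadCachedImages first asks the runtime what it already has (`sudo crictl images --output json`) and marks anything missing for transfer, which is what the "needs transfer" lines below report. A sketch of that check, assuming the usual {"images":[{"repoTags":[...]}]} JSON shape from crictl; the required-image list is copied from the log:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Minimal view of `crictl images --output json`; only repoTags are needed here.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        required := []string{
            "registry.k8s.io/kube-apiserver:v1.31.1",
            "registry.k8s.io/kube-controller-manager:v1.31.1",
            "registry.k8s.io/kube-scheduler:v1.31.1",
            "registry.k8s.io/kube-proxy:v1.31.1",
            "registry.k8s.io/pause:3.10",
            "registry.k8s.io/etcd:3.5.15-0",
            "registry.k8s.io/coredns/coredns:v1.11.3",
            "gcr.io/k8s-minikube/storage-provisioner:v5",
        }

        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }

        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            fmt.Println("unexpected crictl output:", err)
            return
        }

        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }

        for _, want := range required {
            if !have[want] {
                fmt.Println("needs transfer:", want) // would be loaded from the cache dir
            }
        }
    }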
	I0930 21:07:30.794605   73375 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:30.794624   73375 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:30.794653   73375 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:30.794694   73375 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:30.794733   73375 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:30.794691   73375 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:30.794822   73375 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:30.794836   73375 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0930 21:07:30.796508   73375 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:30.796521   73375 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:30.796538   73375 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:30.796543   73375 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:30.796610   73375 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:30.796616   73375 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:30.796611   73375 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0930 21:07:30.796665   73375 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.018683   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0930 21:07:31.028097   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.117252   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.131998   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.136871   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.140418   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.170883   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.171059   73375 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0930 21:07:31.171098   73375 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.171142   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.172908   73375 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0930 21:07:31.172951   73375 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.172994   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.242489   73375 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0930 21:07:31.242541   73375 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.242609   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.246685   73375 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0930 21:07:31.246731   73375 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.246758   73375 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0930 21:07:31.246778   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.246794   73375 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.246837   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.270923   73375 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0930 21:07:31.270971   73375 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.271024   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.271030   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.271100   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.271109   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.271207   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.271269   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.387993   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.388011   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.388044   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.388091   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.388150   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.388230   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.523098   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.523156   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.523300   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.523344   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.523467   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.623696   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0930 21:07:31.623759   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.623778   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0930 21:07:31.623794   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0930 21:07:31.623869   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0930 21:07:31.632927   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0930 21:07:31.633014   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0930 21:07:31.633117   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.633206   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0930 21:07:31.633269   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0930 21:07:31.648925   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0930 21:07:31.648945   73375 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0930 21:07:31.648983   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0930 21:07:31.676886   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0930 21:07:31.676925   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0930 21:07:31.709210   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0930 21:07:31.709287   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0930 21:07:31.709331   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0930 21:07:31.709394   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0930 21:07:31.709330   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0930 21:07:32.112418   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:33.634620   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.985614953s)
	I0930 21:07:33.634656   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0930 21:07:33.634702   73375 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (1.925342294s)
	I0930 21:07:33.634716   73375 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0930 21:07:33.634731   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0930 21:07:33.634771   73375 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.925359685s)
	I0930 21:07:33.634779   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0930 21:07:33.634782   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0930 21:07:33.634853   73375 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.522405881s)
	I0930 21:07:33.634891   73375 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0930 21:07:33.634913   73375 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:33.634961   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:30.643828   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting to get IP...
	I0930 21:07:30.644936   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:30.645382   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:30.645484   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:30.645381   74769 retry.go:31] will retry after 216.832119ms: waiting for machine to come up
	I0930 21:07:30.863953   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:30.864583   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:30.864614   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:30.864518   74769 retry.go:31] will retry after 280.448443ms: waiting for machine to come up
	I0930 21:07:31.147184   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.147792   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.147826   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:31.147728   74769 retry.go:31] will retry after 345.517763ms: waiting for machine to come up
	I0930 21:07:31.495391   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.495819   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.495841   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:31.495786   74769 retry.go:31] will retry after 457.679924ms: waiting for machine to come up
	I0930 21:07:31.955479   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.955943   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.955974   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:31.955897   74769 retry.go:31] will retry after 562.95605ms: waiting for machine to come up
	I0930 21:07:32.520890   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:32.521339   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:32.521368   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:32.521285   74769 retry.go:31] will retry after 743.560182ms: waiting for machine to come up
	I0930 21:07:33.266407   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:33.266914   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:33.266941   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:33.266853   74769 retry.go:31] will retry after 947.444427ms: waiting for machine to come up
	I0930 21:07:34.216195   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:34.216705   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:34.216731   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:34.216659   74769 retry.go:31] will retry after 1.186059526s: waiting for machine to come up
	I0930 21:07:35.714633   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.079826486s)
	I0930 21:07:35.714667   73375 ssh_runner.go:235] Completed: which crictl: (2.079690884s)
	I0930 21:07:35.714721   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:35.714670   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0930 21:07:35.714786   73375 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0930 21:07:35.714821   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0930 21:07:35.753242   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:39.088354   73375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.335055656s)
	I0930 21:07:39.088395   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.373547177s)
	I0930 21:07:39.088422   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0930 21:07:39.088458   73375 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0930 21:07:39.088536   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0930 21:07:39.088459   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:35.404773   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:35.405334   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:35.405359   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:35.405225   74769 retry.go:31] will retry after 1.575803783s: waiting for machine to come up
	I0930 21:07:36.983196   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:36.983730   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:36.983759   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:36.983677   74769 retry.go:31] will retry after 2.020561586s: waiting for machine to come up
	I0930 21:07:39.006915   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:39.007304   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:39.007334   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:39.007269   74769 retry.go:31] will retry after 2.801421878s: waiting for machine to come up
	I0930 21:07:41.074012   73375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.985398095s)
	I0930 21:07:41.074061   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0930 21:07:41.074154   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.985588774s)
	I0930 21:07:41.074183   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0930 21:07:41.074202   73375 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0930 21:07:41.074244   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0930 21:07:41.074166   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0930 21:07:42.972016   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.897745882s)
	I0930 21:07:42.972055   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0930 21:07:42.972083   73375 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.8977868s)
	I0930 21:07:42.972110   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0930 21:07:42.972086   73375 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0930 21:07:42.972155   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0930 21:07:44.835190   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.863005436s)
	I0930 21:07:44.835237   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0930 21:07:44.835263   73375 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0930 21:07:44.835334   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0930 21:07:41.810719   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:41.811099   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:41.811117   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:41.811050   74769 retry.go:31] will retry after 2.703489988s: waiting for machine to come up
	I0930 21:07:44.515949   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:44.516329   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:44.516356   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:44.516276   74769 retry.go:31] will retry after 4.001267434s: waiting for machine to come up
	I0930 21:07:49.889033   73900 start.go:364] duration metric: took 4m7.028659379s to acquireMachinesLock for "old-k8s-version-621406"
	I0930 21:07:49.889104   73900 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:07:49.889111   73900 fix.go:54] fixHost starting: 
	I0930 21:07:49.889542   73900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:49.889600   73900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:49.906767   73900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43385
	I0930 21:07:49.907283   73900 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:49.907856   73900 main.go:141] libmachine: Using API Version  1
	I0930 21:07:49.907889   73900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:49.908203   73900 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:49.908397   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:07:49.908542   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetState
	I0930 21:07:49.910270   73900 fix.go:112] recreateIfNeeded on old-k8s-version-621406: state=Stopped err=<nil>
	I0930 21:07:49.910306   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	W0930 21:07:49.910441   73900 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:07:49.912646   73900 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-621406" ...
	I0930 21:07:45.483728   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0930 21:07:45.483778   73375 cache_images.go:123] Successfully loaded all cached images
	I0930 21:07:45.483785   73375 cache_images.go:92] duration metric: took 14.689240439s to LoadCachedImages
	I0930 21:07:45.483799   73375 kubeadm.go:934] updating node { 192.168.61.93 8443 v1.31.1 crio true true} ...
	I0930 21:07:45.483898   73375 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-997816 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.93
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-997816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 21:07:45.483977   73375 ssh_runner.go:195] Run: crio config
	I0930 21:07:45.529537   73375 cni.go:84] Creating CNI manager for ""
	I0930 21:07:45.529558   73375 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:07:45.529567   73375 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:07:45.529591   73375 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.93 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-997816 NodeName:no-preload-997816 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.93"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.93 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 21:07:45.529713   73375 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.93
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-997816"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.93
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.93"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 21:07:45.529775   73375 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 21:07:45.540251   73375 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:07:45.540323   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:07:45.549622   73375 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0930 21:07:45.565425   73375 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:07:45.580646   73375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0930 21:07:45.596216   73375 ssh_runner.go:195] Run: grep 192.168.61.93	control-plane.minikube.internal$ /etc/hosts
	I0930 21:07:45.604940   73375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.93	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:07:45.620809   73375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:45.751327   73375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:07:45.768664   73375 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816 for IP: 192.168.61.93
	I0930 21:07:45.768687   73375 certs.go:194] generating shared ca certs ...
	I0930 21:07:45.768702   73375 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:07:45.768896   73375 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:07:45.768953   73375 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:07:45.768967   73375 certs.go:256] generating profile certs ...
	I0930 21:07:45.769081   73375 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/client.key
	I0930 21:07:45.769188   73375 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/apiserver.key.c7192a03
	I0930 21:07:45.769251   73375 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/proxy-client.key
	I0930 21:07:45.769422   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:07:45.769468   73375 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:07:45.769483   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:07:45.769527   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:07:45.769569   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:07:45.769603   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:07:45.769672   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:07:45.770679   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:07:45.809391   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:07:45.837624   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:07:45.878472   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:07:45.909163   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0930 21:07:45.950655   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 21:07:45.974391   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:07:45.997258   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 21:07:46.019976   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:07:46.042828   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:07:46.066625   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:07:46.089639   73375 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:07:46.106202   73375 ssh_runner.go:195] Run: openssl version
	I0930 21:07:46.111810   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:07:46.122379   73375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:07:46.126659   73375 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:07:46.126699   73375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:07:46.132363   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 21:07:46.143074   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:07:46.154060   73375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:07:46.158542   73375 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:07:46.158602   73375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:07:46.164210   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:07:46.175160   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:07:46.186326   73375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:46.190782   73375 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:46.190856   73375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:46.196356   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 21:07:46.206957   73375 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:07:46.211650   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:07:46.217398   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:07:46.223566   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:07:46.230204   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:07:46.236404   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:07:46.242282   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0930 21:07:46.248591   73375 kubeadm.go:392] StartCluster: {Name:no-preload-997816 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-997816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:07:46.248686   73375 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:07:46.248731   73375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:07:46.292355   73375 cri.go:89] found id: ""
	I0930 21:07:46.292435   73375 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:07:46.303578   73375 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:07:46.303598   73375 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:07:46.303668   73375 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:07:46.314544   73375 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:07:46.315643   73375 kubeconfig.go:125] found "no-preload-997816" server: "https://192.168.61.93:8443"
	I0930 21:07:46.318243   73375 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:07:46.329751   73375 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.93
	I0930 21:07:46.329781   73375 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:07:46.329791   73375 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:07:46.329837   73375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:07:46.364302   73375 cri.go:89] found id: ""
	I0930 21:07:46.364392   73375 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:07:46.384616   73375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:07:46.395855   73375 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:07:46.395875   73375 kubeadm.go:157] found existing configuration files:
	
	I0930 21:07:46.395915   73375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:07:46.405860   73375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:07:46.405918   73375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:07:46.416618   73375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:07:46.426654   73375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:07:46.426712   73375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:07:46.435880   73375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:07:46.446273   73375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:07:46.446346   73375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:07:46.457099   73375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:07:46.467322   73375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:07:46.467386   73375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:07:46.477809   73375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:07:46.489024   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:46.605127   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:47.509287   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:47.708716   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:47.780830   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:47.883843   73375 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:07:47.883940   73375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:48.384688   73375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:48.884008   73375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:48.925804   73375 api_server.go:72] duration metric: took 1.041960261s to wait for apiserver process to appear ...
	I0930 21:07:48.925833   73375 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:07:48.925857   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:48.521282   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.521838   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Found IP for machine: 192.168.50.2
	I0930 21:07:48.521864   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Reserving static IP address...
	I0930 21:07:48.521876   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has current primary IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.522306   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Reserved static IP address: 192.168.50.2
	I0930 21:07:48.522349   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-291511", mac: "52:54:00:27:46:45", ip: "192.168.50.2"} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.522361   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for SSH to be available...
	I0930 21:07:48.522401   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | skip adding static IP to network mk-default-k8s-diff-port-291511 - found existing host DHCP lease matching {name: "default-k8s-diff-port-291511", mac: "52:54:00:27:46:45", ip: "192.168.50.2"}
	I0930 21:07:48.522427   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Getting to WaitForSSH function...
	I0930 21:07:48.525211   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.525641   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.525667   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.525827   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Using SSH client type: external
	I0930 21:07:48.525854   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa (-rw-------)
	I0930 21:07:48.525883   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:07:48.525900   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | About to run SSH command:
	I0930 21:07:48.525913   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | exit 0
	I0930 21:07:48.655656   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | SSH cmd err, output: <nil>: 
	I0930 21:07:48.656045   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetConfigRaw
	I0930 21:07:48.656789   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetIP
	I0930 21:07:48.659902   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.660358   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.660395   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.660586   73707 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/config.json ...
	I0930 21:07:48.660842   73707 machine.go:93] provisionDockerMachine start ...
	I0930 21:07:48.660866   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:48.661063   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:48.663782   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.664138   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.664165   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.664318   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:48.664567   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.664733   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.664868   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:48.665036   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:48.665283   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:48.665315   73707 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:07:48.776382   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:07:48.776414   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetMachineName
	I0930 21:07:48.776676   73707 buildroot.go:166] provisioning hostname "default-k8s-diff-port-291511"
	I0930 21:07:48.776711   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetMachineName
	I0930 21:07:48.776913   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:48.779952   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.780470   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.780516   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.780594   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:48.780773   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.780925   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.781080   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:48.781253   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:48.781457   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:48.781473   73707 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-291511 && echo "default-k8s-diff-port-291511" | sudo tee /etc/hostname
	I0930 21:07:48.913633   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-291511
	
	I0930 21:07:48.913724   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:48.916869   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.917280   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.917319   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.917501   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:48.917715   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.917882   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.918117   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:48.918296   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:48.918533   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:48.918562   73707 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-291511' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-291511/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-291511' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:07:49.048106   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:07:49.048141   73707 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:07:49.048182   73707 buildroot.go:174] setting up certificates
	I0930 21:07:49.048198   73707 provision.go:84] configureAuth start
	I0930 21:07:49.048212   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetMachineName
	I0930 21:07:49.048498   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetIP
	I0930 21:07:49.051299   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.051665   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.051702   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.051837   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.054211   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.054512   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.054540   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.054691   73707 provision.go:143] copyHostCerts
	I0930 21:07:49.054774   73707 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:07:49.054789   73707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:07:49.054866   73707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:07:49.054982   73707 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:07:49.054994   73707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:07:49.055021   73707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:07:49.055097   73707 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:07:49.055106   73707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:07:49.055130   73707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:07:49.055189   73707 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-291511 san=[127.0.0.1 192.168.50.2 default-k8s-diff-port-291511 localhost minikube]
	I0930 21:07:49.239713   73707 provision.go:177] copyRemoteCerts
	I0930 21:07:49.239771   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:07:49.239796   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.242146   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.242468   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.242500   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.242663   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.242834   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.242982   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.243200   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:07:49.329405   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:07:49.358036   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0930 21:07:49.385742   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 21:07:49.409436   73707 provision.go:87] duration metric: took 361.22398ms to configureAuth
	I0930 21:07:49.409493   73707 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:07:49.409696   73707 config.go:182] Loaded profile config "default-k8s-diff-port-291511": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:07:49.409798   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.412572   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.412935   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.412975   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.413266   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.413476   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.413680   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.413821   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.414009   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:49.414199   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:49.414223   73707 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:07:49.635490   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:07:49.635553   73707 machine.go:96] duration metric: took 974.696002ms to provisionDockerMachine
	I0930 21:07:49.635567   73707 start.go:293] postStartSetup for "default-k8s-diff-port-291511" (driver="kvm2")
	I0930 21:07:49.635580   73707 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:07:49.635603   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.635954   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:07:49.635989   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.638867   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.639304   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.639340   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.639413   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.639631   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.639837   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.639995   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:07:49.728224   73707 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:07:49.732558   73707 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:07:49.732590   73707 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:07:49.732679   73707 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:07:49.732769   73707 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:07:49.732869   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:07:49.742783   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:07:49.766585   73707 start.go:296] duration metric: took 131.002562ms for postStartSetup
	I0930 21:07:49.766629   73707 fix.go:56] duration metric: took 20.430290493s for fixHost
	I0930 21:07:49.766652   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.769724   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.770143   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.770172   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.770461   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.770708   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.770872   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.771099   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.771240   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:49.771616   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:49.771636   73707 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:07:49.888863   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730469.865719956
	
	I0930 21:07:49.888889   73707 fix.go:216] guest clock: 1727730469.865719956
	I0930 21:07:49.888900   73707 fix.go:229] Guest: 2024-09-30 21:07:49.865719956 +0000 UTC Remote: 2024-09-30 21:07:49.76663417 +0000 UTC m=+259.507652750 (delta=99.085786ms)
	I0930 21:07:49.888943   73707 fix.go:200] guest clock delta is within tolerance: 99.085786ms
	I0930 21:07:49.888950   73707 start.go:83] releasing machines lock for "default-k8s-diff-port-291511", held for 20.552679126s
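
	The fix.go lines above read the guest clock over SSH with `date +%s.%N`, compare it against the host clock, and accept the drift because the ~99ms delta is within tolerance. A minimal sketch of that delta check, using a hypothetical checkClockDelta helper rather than minikube's actual fix.go code:

	package main

	import (
		"fmt"
		"time"
	)

	// checkClockDelta reports whether the guest clock is within tolerance of the
	// host clock, mirroring the "guest clock delta is within tolerance" line above.
	// The function name and tolerance value are illustrative assumptions.
	func checkClockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(99 * time.Millisecond) // roughly the delta observed in the log
		delta, ok := checkClockDelta(guest, host, 2*time.Second)
		fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
	}
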
	I0930 21:07:49.888982   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.889242   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetIP
	I0930 21:07:49.892424   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.892817   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.892854   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.893030   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.893601   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.893780   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.893852   73707 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:07:49.893932   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.893934   73707 ssh_runner.go:195] Run: cat /version.json
	I0930 21:07:49.893985   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.896733   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.896843   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.897130   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.897179   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.897216   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.897233   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.897471   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.897478   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.897679   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.897686   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.897825   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.897834   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.897954   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:07:49.898097   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:07:50.022951   73707 ssh_runner.go:195] Run: systemctl --version
	I0930 21:07:50.029177   73707 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:07:50.186430   73707 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:07:50.193205   73707 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:07:50.193277   73707 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:07:50.211330   73707 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 21:07:50.211365   73707 start.go:495] detecting cgroup driver to use...
	I0930 21:07:50.211430   73707 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:07:50.227255   73707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:07:50.241404   73707 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:07:50.241468   73707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:07:50.257879   73707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:07:50.274595   73707 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:07:50.394354   73707 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:07:50.567503   73707 docker.go:233] disabling docker service ...
	I0930 21:07:50.567582   73707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:07:50.584390   73707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:07:50.600920   73707 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:07:50.742682   73707 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:07:50.882835   73707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:07:50.898340   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:07:50.919395   73707 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 21:07:50.919464   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.930773   73707 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:07:50.930846   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.941870   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.952633   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.964281   73707 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:07:50.977410   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.988423   73707 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:51.016091   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
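
	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so cri-o uses registry.k8s.io/pause:3.10 as the pause image and cgroupfs as the cgroup manager. A rough Go sketch of the same key=value rewrite; it edits a local stand-in file, whereas minikube shells the sed out over SSH, and the helper name is an assumption:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setCrioOption rewrites a `key = "value"` line in a crio drop-in, the way the
	// sed invocations above rewrite pause_image and cgroup_manager.
	func setCrioOption(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		updated := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = "%s"`, key, value)))
		return os.WriteFile(path, updated, 0o644)
	}

	func main() {
		conf := "/tmp/02-crio.conf" // stand-in for /etc/crio/crio.conf.d/02-crio.conf
		_ = os.WriteFile(conf, []byte("pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"), 0o644)
		_ = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10")
		_ = setCrioOption(conf, "cgroup_manager", "cgroupfs")
		out, _ := os.ReadFile(conf)
		fmt.Print(string(out))
	}
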
	I0930 21:07:51.027473   73707 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:07:51.037470   73707 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:07:51.037537   73707 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:07:51.056841   73707 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 21:07:51.068163   73707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:51.205357   73707 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 21:07:51.305327   73707 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:07:51.305410   73707 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:07:51.311384   73707 start.go:563] Will wait 60s for crictl version
	I0930 21:07:51.311448   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:07:51.315965   73707 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:07:51.369329   73707 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 21:07:51.369417   73707 ssh_runner.go:195] Run: crio --version
	I0930 21:07:51.399897   73707 ssh_runner.go:195] Run: crio --version
	I0930 21:07:51.431075   73707 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
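
	After restarting crio, the log waits up to 60s for /var/run/crio/crio.sock to appear and for `crictl version` to answer before declaring the runtime ready. A simplified sketch of that socket wait, assuming a hypothetical waitForSocket helper (not minikube's start.go):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for a UNIX socket path until it exists or the timeout
	// expires, similar in spirit to "Will wait 60s for socket path" above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("socket is ready")
	}
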
	I0930 21:07:49.914747   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .Start
	I0930 21:07:49.914948   73900 main.go:141] libmachine: (old-k8s-version-621406) Ensuring networks are active...
	I0930 21:07:49.915796   73900 main.go:141] libmachine: (old-k8s-version-621406) Ensuring network default is active
	I0930 21:07:49.916225   73900 main.go:141] libmachine: (old-k8s-version-621406) Ensuring network mk-old-k8s-version-621406 is active
	I0930 21:07:49.916890   73900 main.go:141] libmachine: (old-k8s-version-621406) Getting domain xml...
	I0930 21:07:49.917688   73900 main.go:141] libmachine: (old-k8s-version-621406) Creating domain...
	I0930 21:07:51.277867   73900 main.go:141] libmachine: (old-k8s-version-621406) Waiting to get IP...
	I0930 21:07:51.279001   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:51.279451   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:51.279552   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:51.279437   74917 retry.go:31] will retry after 307.582619ms: waiting for machine to come up
	I0930 21:07:51.589030   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:51.589414   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:51.589445   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:51.589368   74917 retry.go:31] will retry after 370.683214ms: waiting for machine to come up
	I0930 21:07:51.961914   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:51.962474   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:51.962511   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:51.962415   74917 retry.go:31] will retry after 428.703419ms: waiting for machine to come up
	I0930 21:07:52.393154   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:52.393682   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:52.393750   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:52.393673   74917 retry.go:31] will retry after 514.254023ms: waiting for machine to come up
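
	While old-k8s-version-621406 boots, retry.go keeps polling libvirt for the domain's DHCP lease, sleeping a growing interval between attempts ("will retry after 307ms … 514ms"). A small sketch of that retry-with-backoff pattern; the function name, backoff constants, and jitter are made up, not minikube's retry.go:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff calls fn until it succeeds or attempts run out, sleeping a
	// growing, slightly jittered interval between tries.
	func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
		var err error
		delay := base
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 4))
			time.Sleep(delay + jitter)
			delay = delay * 3 / 2
		}
		return fmt.Errorf("after %d attempts: %w", attempts, err)
	}

	func main() {
		calls := 0
		err := retryWithBackoff(func() error {
			calls++
			if calls < 4 {
				return errors.New("machine has no IP yet")
			}
			return nil
		}, 10, 300*time.Millisecond)
		fmt.Println("calls:", calls, "err:", err)
	}
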
	I0930 21:07:52.334804   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:07:52.334846   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:07:52.334863   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:52.377601   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:07:52.377632   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:07:52.426784   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:52.473771   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:07:52.473811   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:07:52.926391   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:52.945122   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:07:52.945154   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:07:53.426295   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:53.434429   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:07:53.434464   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:07:53.926642   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:53.931501   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 200:
	ok
	I0930 21:07:53.940069   73375 api_server.go:141] control plane version: v1.31.1
	I0930 21:07:53.940104   73375 api_server.go:131] duration metric: took 5.014262318s to wait for apiserver health ...
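
	The api_server.go lines above poll https://192.168.61.93:8443/healthz until the poststarthook checks clear and the endpoint finally returns 200. A simplified polling loop showing the same idea; it assumes anonymous HTTPS with certificate verification disabled, whereas minikube authenticates with the cluster's client certificates:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForAPIServerHealthz polls /healthz until it returns 200 OK or the
	// timeout elapses, mirroring the 403 -> 500 -> 200 progression in the log.
	func waitForAPIServerHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %v", url, timeout)
	}

	func main() {
		if err := waitForAPIServerHealthz("https://192.168.61.93:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
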
	I0930 21:07:53.940115   73375 cni.go:84] Creating CNI manager for ""
	I0930 21:07:53.940123   73375 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:07:53.941879   73375 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 21:07:53.943335   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:07:53.959585   73375 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 21:07:53.996310   73375 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:07:54.010070   73375 system_pods.go:59] 8 kube-system pods found
	I0930 21:07:54.010129   73375 system_pods.go:61] "coredns-7c65d6cfc9-jg8ph" [46ba2867-485a-4b67-af4b-4de2c607d172] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:07:54.010142   73375 system_pods.go:61] "etcd-no-preload-997816" [1def50bb-1f1b-4d25-b797-38d5b782a674] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0930 21:07:54.010157   73375 system_pods.go:61] "kube-apiserver-no-preload-997816" [67313588-adcb-4d3f-ba8a-4e7a1ea5127b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0930 21:07:54.010174   73375 system_pods.go:61] "kube-controller-manager-no-preload-997816" [b471888b-d4e6-4768-a246-f234ffcbf1c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0930 21:07:54.010186   73375 system_pods.go:61] "kube-proxy-klcv8" [133bcd7f-667d-4969-b063-d33e2c8eed0f] Running
	I0930 21:07:54.010200   73375 system_pods.go:61] "kube-scheduler-no-preload-997816" [130a7a05-0889-4562-afc6-bee3ba4970a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0930 21:07:54.010212   73375 system_pods.go:61] "metrics-server-6867b74b74-c2wpn" [2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:07:54.010223   73375 system_pods.go:61] "storage-provisioner" [01617edf-b831-48d3-9002-279b64f6389c] Running
	I0930 21:07:54.010232   73375 system_pods.go:74] duration metric: took 13.897885ms to wait for pod list to return data ...
	I0930 21:07:54.010244   73375 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:07:54.019651   73375 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:07:54.019683   73375 node_conditions.go:123] node cpu capacity is 2
	I0930 21:07:54.019697   73375 node_conditions.go:105] duration metric: took 9.446744ms to run NodePressure ...
	I0930 21:07:54.019719   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:54.314348   73375 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0930 21:07:54.319583   73375 kubeadm.go:739] kubelet initialised
	I0930 21:07:54.319613   73375 kubeadm.go:740] duration metric: took 5.232567ms waiting for restarted kubelet to initialise ...
	I0930 21:07:54.319625   73375 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:07:54.326866   73375 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.333592   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.333628   73375 pod_ready.go:82] duration metric: took 6.72431ms for pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.333640   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.333651   73375 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.340155   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "etcd-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.340194   73375 pod_ready.go:82] duration metric: took 6.533127ms for pod "etcd-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.340208   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "etcd-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.340216   73375 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.346494   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "kube-apiserver-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.346530   73375 pod_ready.go:82] duration metric: took 6.304143ms for pod "kube-apiserver-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.346542   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "kube-apiserver-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.346551   73375 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.403699   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.403731   73375 pod_ready.go:82] duration metric: took 57.168471ms for pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.403743   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.403752   73375 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-klcv8" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.800372   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "kube-proxy-klcv8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.800410   73375 pod_ready.go:82] duration metric: took 396.646883ms for pod "kube-proxy-klcv8" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.800423   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "kube-proxy-klcv8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.800432   73375 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-997816" in "kube-system" namespace to be "Ready" ...
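
	pod_ready.go is waiting for each control-plane pod's Ready condition and skips a pod while its node still reports Ready=False. A client-go sketch of the same readiness predicate; the kubeconfig path and pod name are placeholders, and this is not minikube's pod_ready.go implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-no-preload-997816", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for pod to become Ready")
				return
			case <-time.After(2 * time.Second):
			}
		}
	}
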
	I0930 21:07:51.432761   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetIP
	I0930 21:07:51.436278   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:51.436659   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:51.436700   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:51.436931   73707 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0930 21:07:51.441356   73707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:07:51.454358   73707 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-291511 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-291511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:07:51.454484   73707 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 21:07:51.454547   73707 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:07:51.502072   73707 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 21:07:51.502143   73707 ssh_runner.go:195] Run: which lz4
	I0930 21:07:51.506458   73707 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 21:07:51.510723   73707 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 21:07:51.510756   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 21:07:52.792488   73707 crio.go:462] duration metric: took 1.286075452s to copy over tarball
	I0930 21:07:52.792580   73707 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 21:07:55.207282   73707 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.414661305s)
	I0930 21:07:55.207314   73707 crio.go:469] duration metric: took 2.414793514s to extract the tarball
	I0930 21:07:55.207321   73707 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 21:07:55.244001   73707 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:07:55.287097   73707 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 21:07:55.287124   73707 cache_images.go:84] Images are preloaded, skipping loading
	I0930 21:07:55.287133   73707 kubeadm.go:934] updating node { 192.168.50.2 8444 v1.31.1 crio true true} ...
	I0930 21:07:55.287277   73707 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-291511 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-291511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 21:07:55.287384   73707 ssh_runner.go:195] Run: crio config
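
	kubeadm.go:946 prints the kubelet systemd drop-in it is about to install, with ExecStart flags derived from the cluster config. A text/template sketch that renders the same drop-in; the struct and field names here are illustrative, not minikube's own types:

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletUnitTmpl reproduces the unit shown in the log above.
	const kubeletUnitTmpl = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	type kubeletParams struct {
		KubernetesVersion string
		NodeName          string
		NodeIP            string
	}

	func main() {
		t := template.Must(template.New("kubelet").Parse(kubeletUnitTmpl))
		if err := t.Execute(os.Stdout, kubeletParams{
			KubernetesVersion: "v1.31.1",
			NodeName:          "default-k8s-diff-port-291511",
			NodeIP:            "192.168.50.2",
		}); err != nil {
			panic(err)
		}
	}
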
	I0930 21:07:55.200512   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "kube-scheduler-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:55.200559   73375 pod_ready.go:82] duration metric: took 400.11341ms for pod "kube-scheduler-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:55.200569   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "kube-scheduler-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:55.200577   73375 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:55.601008   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:55.601042   73375 pod_ready.go:82] duration metric: took 400.453601ms for pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:55.601055   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:55.601065   73375 pod_ready.go:39] duration metric: took 1.281429189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:07:55.601086   73375 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 21:07:55.617767   73375 ops.go:34] apiserver oom_adj: -16
	I0930 21:07:55.617791   73375 kubeadm.go:597] duration metric: took 9.314187459s to restartPrimaryControlPlane
	I0930 21:07:55.617803   73375 kubeadm.go:394] duration metric: took 9.369220314s to StartCluster
	I0930 21:07:55.617824   73375 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:07:55.617913   73375 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:07:55.619455   73375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:07:55.619760   73375 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 21:07:55.619842   73375 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 21:07:55.619959   73375 addons.go:69] Setting storage-provisioner=true in profile "no-preload-997816"
	I0930 21:07:55.619984   73375 addons.go:234] Setting addon storage-provisioner=true in "no-preload-997816"
	I0930 21:07:55.619974   73375 addons.go:69] Setting default-storageclass=true in profile "no-preload-997816"
	I0930 21:07:55.620003   73375 addons.go:69] Setting metrics-server=true in profile "no-preload-997816"
	I0930 21:07:55.620009   73375 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-997816"
	I0930 21:07:55.620020   73375 addons.go:234] Setting addon metrics-server=true in "no-preload-997816"
	W0930 21:07:55.620031   73375 addons.go:243] addon metrics-server should already be in state true
	I0930 21:07:55.620050   73375 config.go:182] Loaded profile config "no-preload-997816": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:07:55.620061   73375 host.go:66] Checking if "no-preload-997816" exists ...
	W0930 21:07:55.619994   73375 addons.go:243] addon storage-provisioner should already be in state true
	I0930 21:07:55.620124   73375 host.go:66] Checking if "no-preload-997816" exists ...
	I0930 21:07:55.620420   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.620459   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.620494   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.620535   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.620593   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.620634   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.621682   73375 out.go:177] * Verifying Kubernetes components...
	I0930 21:07:55.623102   73375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:55.643690   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35581
	I0930 21:07:55.643895   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35545
	I0930 21:07:55.644411   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.644553   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.644968   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.644981   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.645072   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.645078   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.645314   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.645502   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.645732   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:55.645777   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.645812   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.649244   73375 addons.go:234] Setting addon default-storageclass=true in "no-preload-997816"
	W0930 21:07:55.649262   73375 addons.go:243] addon default-storageclass should already be in state true
	I0930 21:07:55.649283   73375 host.go:66] Checking if "no-preload-997816" exists ...
	I0930 21:07:55.649524   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.649548   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.671077   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42635
	I0930 21:07:55.671558   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.672193   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.672212   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.672505   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45163
	I0930 21:07:55.672736   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.672808   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44481
	I0930 21:07:55.673354   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.673396   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.673920   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.673926   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.674528   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.674545   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.674974   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.675624   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.675658   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.676078   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.676095   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.676547   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.676724   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:55.679115   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:55.681410   73375 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:55.688953   73375 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:07:55.688981   73375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 21:07:55.689015   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:55.693338   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.693996   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:55.694023   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.694212   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:55.694344   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:55.694444   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:55.694545   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:55.696037   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46075
	I0930 21:07:55.696535   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.697185   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.697207   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.697567   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.697772   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:55.699797   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:55.700998   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I0930 21:07:55.701429   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.702094   73375 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0930 21:07:52.909622   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:52.910169   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:52.910202   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:52.910132   74917 retry.go:31] will retry after 605.019848ms: waiting for machine to come up
	I0930 21:07:53.517276   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:53.517911   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:53.517943   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:53.517858   74917 retry.go:31] will retry after 856.018614ms: waiting for machine to come up
	I0930 21:07:54.376343   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:54.376838   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:54.376862   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:54.376794   74917 retry.go:31] will retry after 740.749778ms: waiting for machine to come up
	I0930 21:07:55.119090   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:55.119631   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:55.119660   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:55.119583   74917 retry.go:31] will retry after 1.444139076s: waiting for machine to come up
	I0930 21:07:56.566261   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:56.566744   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:56.566771   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:56.566695   74917 retry.go:31] will retry after 1.681362023s: waiting for machine to come up
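[Editor's note] The "will retry after ..." messages above come from minikube's retry helper while it waits for the libvirt domain to obtain a DHCP lease. As an illustration only (not minikube's actual retry.go), a minimal jittered-backoff loop in Go could look like the sketch below; machineHasIP is a hypothetical stand-in for the DHCP-lease lookup.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// machineHasIP is a hypothetical stand-in for querying the libvirt DHCP
// leases for the domain's MAC address; it is not part of minikube.
func machineHasIP() (string, error) {
	return "", errors.New("unable to find current IP address of domain")
}

// waitForIP retries machineHasIP with a jittered, growing delay, mirroring
// the "will retry after ..." pattern seen in the log above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := machineHasIP(); err == nil {
			return ip, nil
		}
		// add up to 50% jitter, then grow the base delay
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	if _, err := waitForIP(5 * time.Second); err != nil {
		fmt.Println(err)
	}
}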
	I0930 21:07:55.703687   73375 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 21:07:55.703709   73375 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 21:07:55.703736   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:55.703788   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.703816   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.704295   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.704553   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:55.707029   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:55.707365   73375 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 21:07:55.707385   73375 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 21:07:55.707408   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:55.708091   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.708606   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:55.708629   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.709024   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:55.709237   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:55.709388   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:55.709573   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:55.711123   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.711607   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:55.711631   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.711987   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:55.712178   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:55.712318   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:55.712469   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:55.888447   73375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:07:55.912060   73375 node_ready.go:35] waiting up to 6m0s for node "no-preload-997816" to be "Ready" ...
	I0930 21:07:56.010903   73375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 21:07:56.012576   73375 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 21:07:56.012601   73375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0930 21:07:56.038592   73375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:07:56.055481   73375 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 21:07:56.055513   73375 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 21:07:56.131820   73375 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:07:56.131844   73375 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 21:07:56.213605   73375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:07:57.078385   73375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.067447636s)
	I0930 21:07:57.078439   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:57.078451   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:57.078770   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:57.078823   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:57.078836   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:57.078845   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:57.078793   73375 main.go:141] libmachine: (no-preload-997816) DBG | Closing plugin on server side
	I0930 21:07:57.079118   73375 main.go:141] libmachine: (no-preload-997816) DBG | Closing plugin on server side
	I0930 21:07:57.079149   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:57.079157   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:57.672706   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:57.672737   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:57.673053   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:57.673072   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:58.301165   73375 node_ready.go:53] node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:59.072488   73375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.858837368s)
	I0930 21:07:59.072565   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:59.072582   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:59.072921   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:59.072986   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:59.073029   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:59.073038   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:59.073221   73375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.034599023s)
	I0930 21:07:59.073271   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:59.073344   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:59.073383   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:59.073397   73375 addons.go:475] Verifying addon metrics-server=true in "no-preload-997816"
	I0930 21:07:59.073347   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:59.073754   73375 main.go:141] libmachine: (no-preload-997816) DBG | Closing plugin on server side
	I0930 21:07:59.073804   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:59.073819   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:59.073834   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:59.073846   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:59.075323   73375 main.go:141] libmachine: (no-preload-997816) DBG | Closing plugin on server side
	I0930 21:07:59.075329   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:59.075353   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:59.077687   73375 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0930 21:07:59.079278   73375 addons.go:510] duration metric: took 3.459453938s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0930 21:07:55.346656   73707 cni.go:84] Creating CNI manager for ""
	I0930 21:07:55.346679   73707 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:07:55.346688   73707 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:07:55.346718   73707 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.2 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-291511 NodeName:default-k8s-diff-port-291511 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 21:07:55.346847   73707 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-291511"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
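[Editor's note] The kubeadm config rendered above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml.new. Purely as an illustration, and assuming the gopkg.in/yaml.v3 library, the sketch below decodes such a stream and prints each document's apiVersion and kind; the filename is illustrative.

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// typeMeta captures only the fields we need from each YAML document.
type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	// e.g. the file minikube writes to /var/tmp/minikube/kubeadm.yaml
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc typeMeta
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}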
	
	I0930 21:07:55.346903   73707 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 21:07:55.356645   73707 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:07:55.356708   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:07:55.366457   73707 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0930 21:07:55.384639   73707 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:07:55.403208   73707 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0930 21:07:55.421878   73707 ssh_runner.go:195] Run: grep 192.168.50.2	control-plane.minikube.internal$ /etc/hosts
	I0930 21:07:55.425803   73707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:07:55.439370   73707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:55.553575   73707 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:07:55.570754   73707 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511 for IP: 192.168.50.2
	I0930 21:07:55.570787   73707 certs.go:194] generating shared ca certs ...
	I0930 21:07:55.570808   73707 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:07:55.571011   73707 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:07:55.571067   73707 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:07:55.571083   73707 certs.go:256] generating profile certs ...
	I0930 21:07:55.571178   73707 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/client.key
	I0930 21:07:55.571270   73707 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/apiserver.key.2e3224d9
	I0930 21:07:55.571326   73707 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/proxy-client.key
	I0930 21:07:55.571464   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:07:55.571510   73707 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:07:55.571522   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:07:55.571587   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:07:55.571627   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:07:55.571655   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:07:55.571719   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:07:55.572367   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:07:55.606278   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:07:55.645629   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:07:55.690514   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:07:55.737445   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0930 21:07:55.773656   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 21:07:55.804015   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:07:55.830210   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 21:07:55.857601   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:07:55.887765   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:07:55.922053   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:07:55.951040   73707 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:07:55.969579   73707 ssh_runner.go:195] Run: openssl version
	I0930 21:07:55.975576   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:07:55.987255   73707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:07:55.993657   73707 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:07:55.993723   73707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:07:56.001878   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:07:56.017528   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:07:56.030398   73707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:56.035552   73707 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:56.035625   73707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:56.043878   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 21:07:56.055384   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:07:56.066808   73707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:07:56.073099   73707 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:07:56.073164   73707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:07:56.081343   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 21:07:56.096669   73707 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:07:56.102635   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:07:56.110805   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:07:56.118533   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:07:56.125800   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:07:56.133985   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:07:56.142109   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
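[Editor's note] The `openssl x509 ... -checkend 86400` invocations above verify that each control-plane certificate remains valid for at least 24 hours. The same check can be expressed with Go's standard library; this is only a sketch of the idea, not minikube's code, and the file path in main is illustrative.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d (the equivalent of `openssl x509 -checkend`).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}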
	I0930 21:07:56.150433   73707 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-291511 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-291511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:07:56.150538   73707 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:07:56.150608   73707 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:07:56.197936   73707 cri.go:89] found id: ""
	I0930 21:07:56.198016   73707 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:07:56.208133   73707 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:07:56.208155   73707 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:07:56.208204   73707 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:07:56.218880   73707 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:07:56.220322   73707 kubeconfig.go:125] found "default-k8s-diff-port-291511" server: "https://192.168.50.2:8444"
	I0930 21:07:56.223557   73707 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:07:56.233844   73707 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.2
	I0930 21:07:56.233876   73707 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:07:56.233889   73707 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:07:56.233970   73707 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:07:56.280042   73707 cri.go:89] found id: ""
	I0930 21:07:56.280129   73707 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:07:56.304291   73707 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:07:56.317987   73707 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:07:56.318012   73707 kubeadm.go:157] found existing configuration files:
	
	I0930 21:07:56.318076   73707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0930 21:07:56.331377   73707 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:07:56.331448   73707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:07:56.342380   73707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0930 21:07:56.354949   73707 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:07:56.355030   73707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:07:56.368385   73707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0930 21:07:56.378798   73707 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:07:56.378883   73707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:07:56.390167   73707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0930 21:07:56.400338   73707 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:07:56.400413   73707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:07:56.410735   73707 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:07:56.426910   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:56.557126   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:57.682738   73707 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.125574645s)
	I0930 21:07:57.682777   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:57.908684   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:57.983925   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:58.088822   73707 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:07:58.088930   73707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:58.589565   73707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:59.089483   73707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:59.110240   73707 api_server.go:72] duration metric: took 1.021416929s to wait for apiserver process to appear ...
	I0930 21:07:59.110279   73707 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:07:59.110328   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:07:59.110843   73707 api_server.go:269] stopped: https://192.168.50.2:8444/healthz: Get "https://192.168.50.2:8444/healthz": dial tcp 192.168.50.2:8444: connect: connection refused
	I0930 21:07:59.611045   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:07:58.250468   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:58.251041   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:58.251062   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:58.250979   74917 retry.go:31] will retry after 2.260492343s: waiting for machine to come up
	I0930 21:08:00.513613   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:00.514129   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:08:00.514194   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:08:00.514117   74917 retry.go:31] will retry after 2.449694064s: waiting for machine to come up
	I0930 21:08:02.200888   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:08:02.200918   73707 api_server.go:103] status: https://192.168.50.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:08:02.200930   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:08:02.240477   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:08:02.240513   73707 api_server.go:103] status: https://192.168.50.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:08:02.611111   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:08:02.615548   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:02.615578   73707 api_server.go:103] status: https://192.168.50.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:03.111216   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:08:03.118078   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:03.118102   73707 api_server.go:103] status: https://192.168.50.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:03.610614   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:08:03.615203   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 200:
	ok
	I0930 21:08:03.621652   73707 api_server.go:141] control plane version: v1.31.1
	I0930 21:08:03.621680   73707 api_server.go:131] duration metric: took 4.511393989s to wait for apiserver health ...
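[Editor's note] The healthz loop above keeps polling https://192.168.50.2:8444/healthz until the apiserver answers 200, tolerating the 403 and 500 responses seen while post-start hooks finish. Below is a minimal illustration of that polling pattern (insecure TLS, fixed poll interval); it is a sketch under those assumptions, not minikube's api_server.go.

package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
// TLS verification is skipped because the apiserver cert is not trusted here.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for apiserver healthz")
}

func main() {
	if err := waitForHealthz("https://192.168.50.2:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}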
	I0930 21:08:03.621689   73707 cni.go:84] Creating CNI manager for ""
	I0930 21:08:03.621694   73707 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:03.624026   73707 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 21:08:00.416356   73375 node_ready.go:53] node "no-preload-997816" has status "Ready":"False"
	I0930 21:08:02.416469   73375 node_ready.go:53] node "no-preload-997816" has status "Ready":"False"
	I0930 21:08:02.916643   73375 node_ready.go:49] node "no-preload-997816" has status "Ready":"True"
	I0930 21:08:02.916668   73375 node_ready.go:38] duration metric: took 7.004576501s for node "no-preload-997816" to be "Ready" ...
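[Editor's note] The node_ready.go entries above wait for the node's Ready condition to flip to True. The sketch below shows roughly how that check looks with client-go; it is an assumption-laden illustration, not minikube's implementation, and the kubeconfig path is illustrative.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady returns true when the node reports the Ready condition as True.
func nodeIsReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ready, err := nodeIsReady(cs, "no-preload-997816"); err == nil && ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for node to be Ready")
}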
	I0930 21:08:02.916679   73375 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:02.922833   73375 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:02.928873   73375 pod_ready.go:93] pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:02.928895   73375 pod_ready.go:82] duration metric: took 6.034388ms for pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:02.928904   73375 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.934668   73375 pod_ready.go:103] pod "etcd-no-preload-997816" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:03.625416   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:08:03.640241   73707 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 21:08:03.664231   73707 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:08:03.679372   73707 system_pods.go:59] 8 kube-system pods found
	I0930 21:08:03.679409   73707 system_pods.go:61] "coredns-7c65d6cfc9-hdjjq" [5672cd58-4d3f-409e-b279-f4027fe09aea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:08:03.679425   73707 system_pods.go:61] "etcd-default-k8s-diff-port-291511" [228b61a2-a110-4029-96e5-950e44f5290f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0930 21:08:03.679435   73707 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-291511" [a6991ee1-6c61-49b5-adb5-fb6175386bfe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0930 21:08:03.679447   73707 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-291511" [4ba3f2a2-ac38-4483-bbd0-f21d934d97d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0930 21:08:03.679456   73707 system_pods.go:61] "kube-proxy-kwp22" [87e5295f-3aaa-4222-a61a-942354f79f9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0930 21:08:03.679466   73707 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-291511" [b03fc09c-ddee-4593-9be5-8117892932f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0930 21:08:03.679472   73707 system_pods.go:61] "metrics-server-6867b74b74-txb2j" [6f0ec8d2-5528-4f70-807c-42cbabae23bb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:08:03.679482   73707 system_pods.go:61] "storage-provisioner" [32053345-1ff9-45b1-aa70-e746926b305d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0930 21:08:03.679490   73707 system_pods.go:74] duration metric: took 15.234407ms to wait for pod list to return data ...
	I0930 21:08:03.679509   73707 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:08:03.698332   73707 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:08:03.698363   73707 node_conditions.go:123] node cpu capacity is 2
	I0930 21:08:03.698374   73707 node_conditions.go:105] duration metric: took 18.857709ms to run NodePressure ...
	I0930 21:08:03.698394   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:03.968643   73707 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0930 21:08:03.974075   73707 kubeadm.go:739] kubelet initialised
	I0930 21:08:03.974098   73707 kubeadm.go:740] duration metric: took 5.424573ms waiting for restarted kubelet to initialise ...
	I0930 21:08:03.974105   73707 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:03.982157   73707 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:03.989298   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:03.989329   73707 pod_ready.go:82] duration metric: took 7.140381ms for pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:03.989338   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:03.989345   73707 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:03.995739   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:03.995773   73707 pod_ready.go:82] duration metric: took 6.418854ms for pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:03.995787   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:03.995797   73707 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.002071   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.002093   73707 pod_ready.go:82] duration metric: took 6.287919ms for pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:04.002104   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.002110   73707 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.071732   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.071760   73707 pod_ready.go:82] duration metric: took 69.643681ms for pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:04.071771   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.071777   73707 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kwp22" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.468580   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "kube-proxy-kwp22" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.468605   73707 pod_ready.go:82] duration metric: took 396.820558ms for pod "kube-proxy-kwp22" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:04.468614   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "kube-proxy-kwp22" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.468620   73707 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.868042   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.868067   73707 pod_ready.go:82] duration metric: took 399.438278ms for pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:04.868078   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.868085   73707 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:05.267893   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:05.267925   73707 pod_ready.go:82] duration metric: took 399.831615ms for pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:05.267937   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:05.267945   73707 pod_ready.go:39] duration metric: took 1.293832472s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
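[Editor's note] The pod_ready.go entries above wait on each system-critical pod's Ready condition, deliberately skipping (with a WaitExtra error) any pod hosted on a node that is not yet Ready. A hedged sketch of the per-pod condition check, again assuming client-go and an illustrative kubeconfig path:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ready, err := podIsReady(cs, "kube-system", "metrics-server-6867b74b74-txb2j")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pod Ready:", ready)
}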
	I0930 21:08:05.267960   73707 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 21:08:05.282162   73707 ops.go:34] apiserver oom_adj: -16
	I0930 21:08:05.282188   73707 kubeadm.go:597] duration metric: took 9.074027172s to restartPrimaryControlPlane
	I0930 21:08:05.282199   73707 kubeadm.go:394] duration metric: took 9.131777336s to StartCluster
	I0930 21:08:05.282216   73707 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:05.282338   73707 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:08:05.283862   73707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:05.284135   73707 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 21:08:05.284201   73707 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 21:08:05.284287   73707 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-291511"
	I0930 21:08:05.284305   73707 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-291511"
	W0930 21:08:05.284313   73707 addons.go:243] addon storage-provisioner should already be in state true
	I0930 21:08:05.284340   73707 host.go:66] Checking if "default-k8s-diff-port-291511" exists ...
	I0930 21:08:05.284339   73707 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-291511"
	I0930 21:08:05.284385   73707 config.go:182] Loaded profile config "default-k8s-diff-port-291511": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:08:05.284399   73707 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-291511"
	I0930 21:08:05.284359   73707 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-291511"
	I0930 21:08:05.284432   73707 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-291511"
	W0930 21:08:05.284448   73707 addons.go:243] addon metrics-server should already be in state true
	I0930 21:08:05.284486   73707 host.go:66] Checking if "default-k8s-diff-port-291511" exists ...
	I0930 21:08:05.284739   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.284760   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.284784   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.284794   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.284890   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.284931   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.286020   73707 out.go:177] * Verifying Kubernetes components...
	I0930 21:08:05.287268   73707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:05.302045   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39289
	I0930 21:08:05.302587   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.303190   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.303219   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.303631   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.304213   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.304258   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.304484   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41129
	I0930 21:08:05.304676   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39211
	I0930 21:08:05.304884   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.305175   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.305353   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.305377   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.305642   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.305660   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.305724   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.305933   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:08:05.306016   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.306580   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.306623   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.309757   73707 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-291511"
	W0930 21:08:05.309778   73707 addons.go:243] addon default-storageclass should already be in state true
	I0930 21:08:05.309805   73707 host.go:66] Checking if "default-k8s-diff-port-291511" exists ...
	I0930 21:08:05.310163   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.310208   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.320335   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43189
	I0930 21:08:05.320928   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.321496   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.321520   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.321922   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.322082   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:08:05.324111   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:08:05.325867   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42389
	I0930 21:08:05.325879   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37397
	I0930 21:08:05.326252   73707 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0930 21:08:05.326337   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.326280   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.326847   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.326862   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.326982   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.326999   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.327239   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.327313   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.327467   73707 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 21:08:05.327485   73707 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 21:08:05.327507   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:08:05.327597   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:08:05.327778   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.327806   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.329862   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:08:05.331454   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.331654   73707 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:05.331959   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:08:05.331996   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.332184   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:08:05.332355   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:08:05.332577   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:08:05.332699   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:08:05.332956   73707 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:08:05.332972   73707 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 21:08:05.332990   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:08:05.336234   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.336634   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:08:05.336661   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.336885   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:08:05.337134   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:08:05.337271   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:08:05.337447   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:08:05.345334   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34613
	I0930 21:08:05.345908   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.346393   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.346424   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.346749   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.346887   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:08:05.348836   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:08:05.349033   73707 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 21:08:05.349048   73707 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 21:08:05.349067   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:08:05.351835   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.352222   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:08:05.352277   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.352401   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:08:05.352644   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:08:05.352786   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:08:05.352886   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:08:05.475274   73707 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:08:05.496035   73707 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-291511" to be "Ready" ...
	I0930 21:08:05.564715   73707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:08:05.574981   73707 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 21:08:05.575006   73707 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0930 21:08:05.613799   73707 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 21:08:05.613822   73707 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 21:08:05.618503   73707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 21:08:05.689563   73707 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:08:05.689588   73707 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 21:08:05.769327   73707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:08:06.831657   73707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.266911261s)
	I0930 21:08:06.831717   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.831727   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.831735   73707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.213199657s)
	I0930 21:08:06.831780   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.831797   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.832054   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.832071   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.832079   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.832086   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.832146   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.832164   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.832182   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.832195   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.832203   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.832291   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.832305   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.832316   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.832477   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.832483   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.832512   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.838509   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.838534   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.838786   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.838801   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.838806   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.956747   73707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.187373699s)
	I0930 21:08:06.956803   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.956819   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.957097   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.958516   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.958531   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.958542   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.958548   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.958842   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.958863   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.958873   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.958875   73707 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-291511"
	I0930 21:08:06.961299   73707 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
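	With storage-provisioner, default-storageclass and metrics-server reported as enabled, a quick out-of-band spot check against the same profile would look like the sketch below. These commands are not part of the captured run; the profile/context name default-k8s-diff-port-291511 is taken from the log, and "kubectl top nodes" only returns data once metrics-server is actually serving metrics.
	
	# Sketch only: confirm the three enabled addons and the metrics-server deployment.
	minikube -p default-k8s-diff-port-291511 addons list | grep -E 'metrics-server|storage-provisioner|default-storageclass'
	kubectl --context default-k8s-diff-port-291511 -n kube-system get deploy metrics-server
	kubectl --context default-k8s-diff-port-291511 top nodes   # succeeds only after metrics-server starts reporting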
	I0930 21:08:02.965767   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:02.966135   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:08:02.966157   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:08:02.966086   74917 retry.go:31] will retry after 2.951226221s: waiting for machine to come up
	I0930 21:08:05.919389   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:05.919894   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:08:05.919937   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:08:05.919827   74917 retry.go:31] will retry after 2.747969391s: waiting for machine to come up
	I0930 21:08:09.916514   73256 start.go:364] duration metric: took 52.875691449s to acquireMachinesLock for "embed-certs-256103"
	I0930 21:08:09.916583   73256 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:08:09.916592   73256 fix.go:54] fixHost starting: 
	I0930 21:08:09.916972   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:09.917000   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:09.935009   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42043
	I0930 21:08:09.935493   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:09.936052   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:08:09.936073   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:09.936443   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:09.936617   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:09.936762   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:08:09.938608   73256 fix.go:112] recreateIfNeeded on embed-certs-256103: state=Stopped err=<nil>
	I0930 21:08:09.938639   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	W0930 21:08:09.938811   73256 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:08:09.940789   73256 out.go:177] * Restarting existing kvm2 VM for "embed-certs-256103" ...
	I0930 21:08:05.936626   73375 pod_ready.go:93] pod "etcd-no-preload-997816" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:05.936660   73375 pod_ready.go:82] duration metric: took 3.007747597s for pod "etcd-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:05.936674   73375 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:05.942154   73375 pod_ready.go:93] pod "kube-apiserver-no-preload-997816" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:05.942196   73375 pod_ready.go:82] duration metric: took 5.502965ms for pod "kube-apiserver-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:05.942209   73375 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.949366   73375 pod_ready.go:93] pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:06.949402   73375 pod_ready.go:82] duration metric: took 1.007183809s for pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.949413   73375 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-klcv8" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.955060   73375 pod_ready.go:93] pod "kube-proxy-klcv8" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:06.955088   73375 pod_ready.go:82] duration metric: took 5.667172ms for pod "kube-proxy-klcv8" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.955100   73375 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.961684   73375 pod_ready.go:93] pod "kube-scheduler-no-preload-997816" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:06.961706   73375 pod_ready.go:82] duration metric: took 6.597856ms for pod "kube-scheduler-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.961718   73375 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:08.967525   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
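	The pod_ready polling above keeps re-reading the Ready condition on each kube-system pod. The same information can be pulled manually with kubectl, as in the sketch below; the commands are illustrative only, and the context name is assumed to match the no-preload-997816 profile.
	
	# Sketch: the readiness data the pod_ready loop is polling, fetched directly.
	kubectl --context no-preload-997816 -n kube-system get pods -o wide
	kubectl --context no-preload-997816 -n kube-system get pod metrics-server-6867b74b74-c2wpn \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'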
	I0930 21:08:06.962594   73707 addons.go:510] duration metric: took 1.678396512s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0930 21:08:07.499805   73707 node_ready.go:53] node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:09.500771   73707 node_ready.go:53] node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:08.671179   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.671686   73900 main.go:141] libmachine: (old-k8s-version-621406) Found IP for machine: 192.168.72.159
	I0930 21:08:08.671711   73900 main.go:141] libmachine: (old-k8s-version-621406) Reserving static IP address...
	I0930 21:08:08.671729   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has current primary IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.672178   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "old-k8s-version-621406", mac: "52:54:00:9b:e3:ab", ip: "192.168.72.159"} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.672220   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | skip adding static IP to network mk-old-k8s-version-621406 - found existing host DHCP lease matching {name: "old-k8s-version-621406", mac: "52:54:00:9b:e3:ab", ip: "192.168.72.159"}
	I0930 21:08:08.672231   73900 main.go:141] libmachine: (old-k8s-version-621406) Reserved static IP address: 192.168.72.159
	I0930 21:08:08.672246   73900 main.go:141] libmachine: (old-k8s-version-621406) Waiting for SSH to be available...
	I0930 21:08:08.672254   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | Getting to WaitForSSH function...
	I0930 21:08:08.674566   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.674931   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.674969   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.675128   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | Using SSH client type: external
	I0930 21:08:08.675170   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa (-rw-------)
	I0930 21:08:08.675212   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:08:08.675229   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | About to run SSH command:
	I0930 21:08:08.675244   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | exit 0
	I0930 21:08:08.799368   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | SSH cmd err, output: <nil>: 
	I0930 21:08:08.799751   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetConfigRaw
	I0930 21:08:08.800421   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:08.803151   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.803596   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.803620   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.803922   73900 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/config.json ...
	I0930 21:08:08.804195   73900 machine.go:93] provisionDockerMachine start ...
	I0930 21:08:08.804246   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:08.804502   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:08.806822   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.807240   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.807284   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.807521   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:08.807735   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.807890   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.808077   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:08.808239   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:08.808480   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:08.808493   73900 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:08:08.912058   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:08:08.912135   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 21:08:08.912407   73900 buildroot.go:166] provisioning hostname "old-k8s-version-621406"
	I0930 21:08:08.912432   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 21:08:08.912662   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:08.915366   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.915722   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.915750   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.915892   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:08.916107   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.916330   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.916492   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:08.916673   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:08.916932   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:08.916957   73900 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-621406 && echo "old-k8s-version-621406" | sudo tee /etc/hostname
	I0930 21:08:09.034260   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-621406
	
	I0930 21:08:09.034296   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.037149   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.037509   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.037538   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.037799   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.037986   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.038163   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.038327   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.038473   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:09.038695   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:09.038714   73900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-621406' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-621406/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-621406' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:08:09.152190   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:08:09.152228   73900 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:08:09.152255   73900 buildroot.go:174] setting up certificates
	I0930 21:08:09.152275   73900 provision.go:84] configureAuth start
	I0930 21:08:09.152288   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 21:08:09.152577   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:09.155203   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.155589   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.155620   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.155783   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.157964   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.158362   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.158392   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.158520   73900 provision.go:143] copyHostCerts
	I0930 21:08:09.158592   73900 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:08:09.158605   73900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:08:09.158704   73900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:08:09.158851   73900 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:08:09.158864   73900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:08:09.158895   73900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:08:09.158970   73900 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:08:09.158977   73900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:08:09.158996   73900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:08:09.159054   73900 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-621406 san=[127.0.0.1 192.168.72.159 localhost minikube old-k8s-version-621406]
	I0930 21:08:09.301267   73900 provision.go:177] copyRemoteCerts
	I0930 21:08:09.301322   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:08:09.301349   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.304344   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.304766   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.304796   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.304998   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.305187   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.305321   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.305439   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:09.390851   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0930 21:08:09.415712   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 21:08:09.439567   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:08:09.463427   73900 provision.go:87] duration metric: took 311.139024ms to configureAuth
	I0930 21:08:09.463459   73900 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:08:09.463713   73900 config.go:182] Loaded profile config "old-k8s-version-621406": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0930 21:08:09.463809   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.466757   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.467129   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.467160   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.467326   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.467513   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.467694   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.467843   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.468004   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:09.468175   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:09.468190   73900 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:08:09.684657   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:08:09.684684   73900 machine.go:96] duration metric: took 880.473418ms to provisionDockerMachine
	I0930 21:08:09.684698   73900 start.go:293] postStartSetup for "old-k8s-version-621406" (driver="kvm2")
	I0930 21:08:09.684709   73900 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:08:09.684730   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.685075   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:08:09.685114   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.688051   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.688517   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.688542   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.688725   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.688928   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.689070   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.689265   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:09.770572   73900 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:08:09.775149   73900 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:08:09.775181   73900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:08:09.775268   73900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:08:09.775364   73900 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:08:09.775453   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:08:09.784753   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:09.807989   73900 start.go:296] duration metric: took 123.276522ms for postStartSetup
	I0930 21:08:09.808033   73900 fix.go:56] duration metric: took 19.918922935s for fixHost
	I0930 21:08:09.808053   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.811242   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.811656   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.811692   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.811852   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.812064   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.812239   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.812380   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.812522   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:09.812704   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:09.812719   73900 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:08:09.916349   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730489.889323893
	
	I0930 21:08:09.916376   73900 fix.go:216] guest clock: 1727730489.889323893
	I0930 21:08:09.916384   73900 fix.go:229] Guest: 2024-09-30 21:08:09.889323893 +0000 UTC Remote: 2024-09-30 21:08:09.808037625 +0000 UTC m=+267.093327666 (delta=81.286268ms)
	I0930 21:08:09.916403   73900 fix.go:200] guest clock delta is within tolerance: 81.286268ms
	I0930 21:08:09.916408   73900 start.go:83] releasing machines lock for "old-k8s-version-621406", held for 20.027328296s
	I0930 21:08:09.916440   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.916766   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:09.919729   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.920070   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.920105   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.920238   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.920831   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.921050   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.921182   73900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:08:09.921235   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.921328   73900 ssh_runner.go:195] Run: cat /version.json
	I0930 21:08:09.921351   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.924258   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.924650   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.924695   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.924722   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.924805   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.924986   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.925170   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.925176   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.925206   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.925341   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:09.925405   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.925534   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.925698   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.925829   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:10.043500   73900 ssh_runner.go:195] Run: systemctl --version
	I0930 21:08:10.051029   73900 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:08:10.199844   73900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:08:10.206433   73900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:08:10.206519   73900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:08:10.223346   73900 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 21:08:10.223375   73900 start.go:495] detecting cgroup driver to use...
	I0930 21:08:10.223449   73900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:08:10.241056   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:08:10.257197   73900 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:08:10.257261   73900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:08:10.271847   73900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:08:10.287465   73900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:08:10.419248   73900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:08:10.583440   73900 docker.go:233] disabling docker service ...
	I0930 21:08:10.583518   73900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:08:10.599561   73900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:08:10.613321   73900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:08:10.763071   73900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:08:10.891222   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:08:10.906985   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:08:10.927838   73900 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0930 21:08:10.927911   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.940002   73900 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:08:10.940084   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.953143   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.965922   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.985782   73900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:08:11.001825   73900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:08:11.015777   73900 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:08:11.015835   73900 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:08:11.034821   73900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 21:08:11.049855   73900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:11.203755   73900 ssh_runner.go:195] Run: sudo systemctl restart crio
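Taken together, the cri-o reconfiguration logged above reduces to a short, reproducible sequence. A minimal consolidated sketch (paths, the pause image, and the cgroupfs driver are taken from this log and apply to the Kubernetes v1.20.0 profile being restored here):

	# point crictl at the cri-o socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image and switch cri-o to the cgroupfs cgroup manager
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# enable bridge netfilter and IPv4 forwarding, then restart cri-o to pick up the changes
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio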
	I0930 21:08:11.312949   73900 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:08:11.313060   73900 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:08:11.319280   73900 start.go:563] Will wait 60s for crictl version
	I0930 21:08:11.319355   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:11.323826   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:08:11.374934   73900 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 21:08:11.375023   73900 ssh_runner.go:195] Run: crio --version
	I0930 21:08:11.415466   73900 ssh_runner.go:195] Run: crio --version
	I0930 21:08:11.449622   73900 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0930 21:08:11.450773   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:11.454019   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:11.454504   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:11.454534   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:11.454807   73900 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0930 21:08:11.459034   73900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:08:11.473162   73900 kubeadm.go:883] updating cluster {Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:08:11.473294   73900 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 21:08:11.473367   73900 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:11.518200   73900 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0930 21:08:11.518275   73900 ssh_runner.go:195] Run: which lz4
	I0930 21:08:11.522442   73900 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 21:08:11.526704   73900 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 21:08:11.526752   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0930 21:08:09.942356   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Start
	I0930 21:08:09.942591   73256 main.go:141] libmachine: (embed-certs-256103) Ensuring networks are active...
	I0930 21:08:09.943619   73256 main.go:141] libmachine: (embed-certs-256103) Ensuring network default is active
	I0930 21:08:09.944145   73256 main.go:141] libmachine: (embed-certs-256103) Ensuring network mk-embed-certs-256103 is active
	I0930 21:08:09.944659   73256 main.go:141] libmachine: (embed-certs-256103) Getting domain xml...
	I0930 21:08:09.945567   73256 main.go:141] libmachine: (embed-certs-256103) Creating domain...
	I0930 21:08:11.376075   73256 main.go:141] libmachine: (embed-certs-256103) Waiting to get IP...
	I0930 21:08:11.377049   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:11.377588   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:11.377687   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:11.377579   75193 retry.go:31] will retry after 219.057799ms: waiting for machine to come up
	I0930 21:08:11.598062   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:11.598531   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:11.598568   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:11.598491   75193 retry.go:31] will retry after 288.150233ms: waiting for machine to come up
	I0930 21:08:11.887894   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:11.888719   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:11.888749   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:11.888678   75193 retry.go:31] will retry after 422.70153ms: waiting for machine to come up
	I0930 21:08:12.313280   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:12.313761   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:12.313790   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:12.313728   75193 retry.go:31] will retry after 403.507934ms: waiting for machine to come up
	I0930 21:08:12.719305   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:12.719705   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:12.719740   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:12.719683   75193 retry.go:31] will retry after 616.261723ms: waiting for machine to come up
	I0930 21:08:13.337223   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:13.337759   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:13.337809   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:13.337727   75193 retry.go:31] will retry after 715.496762ms: waiting for machine to come up
	I0930 21:08:14.054455   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:14.055118   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:14.055155   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:14.055041   75193 retry.go:31] will retry after 1.12512788s: waiting for machine to come up
	I0930 21:08:10.970621   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:13.468795   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:11.501276   73707 node_ready.go:53] node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:12.501748   73707 node_ready.go:49] node "default-k8s-diff-port-291511" has status "Ready":"True"
	I0930 21:08:12.501784   73707 node_ready.go:38] duration metric: took 7.005705696s for node "default-k8s-diff-port-291511" to be "Ready" ...
	I0930 21:08:12.501797   73707 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:12.510080   73707 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:12.518496   73707 pod_ready.go:93] pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:12.518522   73707 pod_ready.go:82] duration metric: took 8.414761ms for pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:12.518535   73707 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:14.526615   73707 pod_ready.go:93] pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:14.526653   73707 pod_ready.go:82] duration metric: took 2.00810944s for pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:14.526666   73707 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:14.533536   73707 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:14.533574   73707 pod_ready.go:82] duration metric: took 6.898769ms for pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:14.533596   73707 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.043003   73707 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:15.043034   73707 pod_ready.go:82] duration metric: took 509.429109ms for pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.043048   73707 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kwp22" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.049645   73707 pod_ready.go:93] pod "kube-proxy-kwp22" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:15.049676   73707 pod_ready.go:82] duration metric: took 6.618441ms for pod "kube-proxy-kwp22" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.049688   73707 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:13.134916   73900 crio.go:462] duration metric: took 1.612498859s to copy over tarball
	I0930 21:08:13.135038   73900 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 21:08:16.170053   73900 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.034985922s)
	I0930 21:08:16.170080   73900 crio.go:469] duration metric: took 3.035125251s to extract the tarball
	I0930 21:08:16.170088   73900 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 21:08:16.213559   73900 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:16.249853   73900 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0930 21:08:16.249876   73900 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0930 21:08:16.249943   73900 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:16.249970   73900 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.249987   73900 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.250030   73900 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0930 21:08:16.250031   73900 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.250047   73900 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.250049   73900 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.250083   73900 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.251750   73900 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0930 21:08:16.251771   73900 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.251768   73900 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:16.251750   73900 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.251832   73900 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.251854   73900 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.251891   73900 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.252031   73900 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.456847   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.468006   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0930 21:08:16.516253   73900 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0930 21:08:16.516294   73900 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.516336   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.524699   73900 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0930 21:08:16.524743   73900 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0930 21:08:16.524787   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.525738   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.529669   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 21:08:16.561946   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.569090   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.570589   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.571007   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.581971   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.587609   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.630323   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 21:08:16.711058   73900 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0930 21:08:16.711124   73900 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.711190   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.749473   73900 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0930 21:08:16.749521   73900 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.749585   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.769974   73900 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0930 21:08:16.770016   73900 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.770050   73900 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0930 21:08:16.770075   73900 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0930 21:08:16.770087   73900 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.770104   73900 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.770142   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.770160   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.770064   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.770144   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.788241   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.788292   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 21:08:16.788294   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.788339   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.847727   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0930 21:08:16.847798   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.847894   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.938964   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.939000   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.939053   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0930 21:08:16.939090   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.965556   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.965620   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 21:08:17.020497   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:17.074893   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:17.074950   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:17.090437   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 21:08:17.090489   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0930 21:08:17.090437   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:17.174117   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0930 21:08:17.174183   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0930 21:08:17.185553   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0930 21:08:17.185619   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0930 21:08:17.506064   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:17.650598   73900 cache_images.go:92] duration metric: took 1.400704992s to LoadCachedImages
	W0930 21:08:17.650695   73900 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0930 21:08:17.650710   73900 kubeadm.go:934] updating node { 192.168.72.159 8443 v1.20.0 crio true true} ...
	I0930 21:08:17.650834   73900 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-621406 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 21:08:17.650922   73900 ssh_runner.go:195] Run: crio config
	I0930 21:08:17.710096   73900 cni.go:84] Creating CNI manager for ""
	I0930 21:08:17.710124   73900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:17.710139   73900 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:08:17.710164   73900 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.159 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-621406 NodeName:old-k8s-version-621406 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0930 21:08:17.710349   73900 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-621406"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 21:08:17.710425   73900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0930 21:08:17.721028   73900 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:08:17.721111   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:08:17.731462   73900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0930 21:08:17.749715   73900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:08:15.182186   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:15.182722   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:15.182751   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:15.182673   75193 retry.go:31] will retry after 1.385891549s: waiting for machine to come up
	I0930 21:08:16.569882   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:16.570365   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:16.570386   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:16.570309   75193 retry.go:31] will retry after 1.417579481s: waiting for machine to come up
	I0930 21:08:17.989161   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:17.989876   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:17.989905   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:17.989818   75193 retry.go:31] will retry after 1.981651916s: waiting for machine to come up
	I0930 21:08:15.471221   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:17.969140   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:19.969688   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:15.300639   73707 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:15.300666   73707 pod_ready.go:82] duration metric: took 250.968899ms for pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.300679   73707 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:17.349449   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:19.809813   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:17.767565   73900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0930 21:08:17.786411   73900 ssh_runner.go:195] Run: grep 192.168.72.159	control-plane.minikube.internal$ /etc/hosts
	I0930 21:08:17.790338   73900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:08:17.803957   73900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:17.948898   73900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:08:17.969102   73900 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406 for IP: 192.168.72.159
	I0930 21:08:17.969133   73900 certs.go:194] generating shared ca certs ...
	I0930 21:08:17.969150   73900 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:17.969338   73900 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:08:17.969387   73900 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:08:17.969400   73900 certs.go:256] generating profile certs ...
	I0930 21:08:17.969543   73900 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/client.key
	I0930 21:08:17.969621   73900 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.key.f3dc5056
	I0930 21:08:17.969674   73900 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.key
	I0930 21:08:17.969833   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:08:17.969875   73900 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:08:17.969886   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:08:17.969926   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:08:17.969961   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:08:17.969999   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:08:17.970055   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:17.970794   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:08:18.007954   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:08:18.041538   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:08:18.077886   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:08:18.118644   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0930 21:08:18.151418   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 21:08:18.199572   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:08:18.235795   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 21:08:18.272729   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:08:18.298727   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:08:18.324074   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:08:18.351209   73900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:08:18.372245   73900 ssh_runner.go:195] Run: openssl version
	I0930 21:08:18.380047   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:08:18.395332   73900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:08:18.401407   73900 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:08:18.401479   73900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:08:18.407744   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 21:08:18.422801   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:08:18.437946   73900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:08:18.443864   73900 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:08:18.443938   73900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:08:18.451554   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:08:18.466856   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:08:18.479324   73900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:18.484321   73900 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:18.484383   73900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:18.490341   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
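The test/ln pairs above install each CA certificate into the shared trust directory using OpenSSL's hashed-symlink convention: the file is linked as <subject-hash>.0 under /etc/ssl/certs, which is how OpenSSL locates CA certificates at verification time. A minimal sketch of that pattern for a single certificate (the minikubeCA path is taken from this log):

	# compute the subject hash and create the <hash>.0 symlink OpenSSL expects
	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"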
	I0930 21:08:18.503117   73900 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:08:18.507986   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:08:18.514974   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:08:18.522140   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:08:18.529366   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:08:18.536056   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:08:18.542787   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
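The openssl runs above check certificate freshness: -checkend 86400 makes openssl exit non-zero if the certificate will have expired within the next 86400 seconds (24 hours), which minikube presumably uses to decide whether a control-plane certificate needs regenerating. For example:

	# exit status 0 if the certificate is still valid 24h from now, non-zero otherwise
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400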
	I0930 21:08:18.550311   73900 kubeadm.go:392] StartCluster: {Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:08:18.550431   73900 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:08:18.550498   73900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:18.593041   73900 cri.go:89] found id: ""
	I0930 21:08:18.593116   73900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:08:18.603410   73900 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:08:18.603432   73900 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:08:18.603479   73900 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:08:18.614635   73900 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:08:18.615758   73900 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-621406" does not appear in /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:08:18.616488   73900 kubeconfig.go:62] /home/jenkins/minikube-integration/19736-7672/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-621406" cluster setting kubeconfig missing "old-k8s-version-621406" context setting]
	I0930 21:08:18.617394   73900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:18.644144   73900 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:08:18.655764   73900 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.159
	I0930 21:08:18.655806   73900 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:08:18.655819   73900 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:08:18.655877   73900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:18.699283   73900 cri.go:89] found id: ""
	I0930 21:08:18.699376   73900 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:08:18.715248   73900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:08:18.724905   73900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:08:18.724945   73900 kubeadm.go:157] found existing configuration files:
	
	I0930 21:08:18.724990   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:08:18.735611   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:08:18.735682   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:08:18.745604   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:08:18.755199   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:08:18.755261   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:08:18.765450   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:08:18.775187   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:08:18.775268   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:08:18.788080   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:08:18.800668   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:08:18.800727   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:08:18.814084   73900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:08:18.823785   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:18.961698   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.495418   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.713653   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.812667   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.921314   73900 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:08:19.921414   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:20.422349   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:20.922222   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:21.422364   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:21.921493   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:22.421640   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:19.973478   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:19.973916   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:19.973946   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:19.973868   75193 retry.go:31] will retry after 2.33355272s: waiting for machine to come up
	I0930 21:08:22.308828   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:22.309471   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:22.309498   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:22.309367   75193 retry.go:31] will retry after 3.484225075s: waiting for machine to come up
	I0930 21:08:21.970954   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:24.467778   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:22.310464   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:24.806425   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:22.922418   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:23.421851   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:23.921502   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:24.422346   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:24.922000   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:25.422290   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:25.922213   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:26.422100   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:26.922239   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:27.421729   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:25.795265   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:25.795755   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:25.795781   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:25.795707   75193 retry.go:31] will retry after 2.983975719s: waiting for machine to come up
	I0930 21:08:28.780767   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.781201   73256 main.go:141] libmachine: (embed-certs-256103) Found IP for machine: 192.168.39.90
	I0930 21:08:28.781223   73256 main.go:141] libmachine: (embed-certs-256103) Reserving static IP address...
	I0930 21:08:28.781237   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has current primary IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.781655   73256 main.go:141] libmachine: (embed-certs-256103) Reserved static IP address: 192.168.39.90
	I0930 21:08:28.781679   73256 main.go:141] libmachine: (embed-certs-256103) Waiting for SSH to be available...
	I0930 21:08:28.781697   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "embed-certs-256103", mac: "52:54:00:7a:01:01", ip: "192.168.39.90"} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:28.781724   73256 main.go:141] libmachine: (embed-certs-256103) DBG | skip adding static IP to network mk-embed-certs-256103 - found existing host DHCP lease matching {name: "embed-certs-256103", mac: "52:54:00:7a:01:01", ip: "192.168.39.90"}
	I0930 21:08:28.781735   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Getting to WaitForSSH function...
	I0930 21:08:28.784310   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.784703   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:28.784737   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.784861   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Using SSH client type: external
	I0930 21:08:28.784899   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa (-rw-------)
	I0930 21:08:28.784933   73256 main.go:141] libmachine: (embed-certs-256103) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:08:28.784953   73256 main.go:141] libmachine: (embed-certs-256103) DBG | About to run SSH command:
	I0930 21:08:28.784970   73256 main.go:141] libmachine: (embed-certs-256103) DBG | exit 0
	I0930 21:08:28.911300   73256 main.go:141] libmachine: (embed-certs-256103) DBG | SSH cmd err, output: <nil>: 
	I0930 21:08:28.911716   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetConfigRaw
	I0930 21:08:28.912335   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetIP
	I0930 21:08:28.914861   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.915283   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:28.915304   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.915620   73256 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/config.json ...
	I0930 21:08:28.915874   73256 machine.go:93] provisionDockerMachine start ...
	I0930 21:08:28.915902   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:28.916117   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:28.918357   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.918661   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:28.918696   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.918813   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:28.918992   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:28.919143   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:28.919296   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:28.919472   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:28.919680   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:28.919691   73256 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:08:29.032537   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:08:29.032579   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:08:29.032830   73256 buildroot.go:166] provisioning hostname "embed-certs-256103"
	I0930 21:08:29.032857   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:08:29.033039   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.035951   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.036403   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.036435   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.036598   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:29.036795   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.037002   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.037175   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:29.037339   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:29.037538   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:29.037556   73256 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-256103 && echo "embed-certs-256103" | sudo tee /etc/hostname
	I0930 21:08:29.163250   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-256103
	
	I0930 21:08:29.163278   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.165937   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.166260   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.166296   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.166529   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:29.166722   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.166913   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.167055   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:29.167223   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:29.167454   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:29.167477   73256 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-256103' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-256103/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-256103' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:08:29.288197   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:08:29.288236   73256 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:08:29.288292   73256 buildroot.go:174] setting up certificates
	I0930 21:08:29.288307   73256 provision.go:84] configureAuth start
	I0930 21:08:29.288322   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:08:29.288589   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetIP
	I0930 21:08:29.291598   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.292026   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.292059   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.292247   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.294760   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.295144   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.295169   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.295421   73256 provision.go:143] copyHostCerts
	I0930 21:08:29.295497   73256 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:08:29.295510   73256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:08:29.295614   73256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:08:29.295743   73256 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:08:29.295754   73256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:08:29.295782   73256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:08:29.295855   73256 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:08:29.295864   73256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:08:29.295886   73256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:08:29.295948   73256 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.embed-certs-256103 san=[127.0.0.1 192.168.39.90 embed-certs-256103 localhost minikube]
	I0930 21:08:26.468058   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:28.468510   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:26.808360   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:29.307500   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:29.742069   73256 provision.go:177] copyRemoteCerts
	I0930 21:08:29.742134   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:08:29.742156   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.745411   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.745805   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.745835   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.746023   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:29.746215   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.746351   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:29.746557   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:08:29.833888   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:08:29.857756   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0930 21:08:29.883087   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 21:08:29.905795   73256 provision.go:87] duration metric: took 617.470984ms to configureAuth
	I0930 21:08:29.905831   73256 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:08:29.906028   73256 config.go:182] Loaded profile config "embed-certs-256103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:08:29.906098   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.908911   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.909307   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.909335   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.909524   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:29.909711   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.909876   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.909996   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:29.910157   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:29.910429   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:29.910454   73256 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:08:30.140191   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:08:30.140217   73256 machine.go:96] duration metric: took 1.224326296s to provisionDockerMachine
	I0930 21:08:30.140227   73256 start.go:293] postStartSetup for "embed-certs-256103" (driver="kvm2")
	I0930 21:08:30.140237   73256 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:08:30.140252   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.140624   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:08:30.140648   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:30.143906   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.144300   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.144339   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.144498   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:30.144695   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.144846   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:30.145052   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:08:30.230069   73256 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:08:30.233845   73256 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:08:30.233868   73256 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:08:30.233948   73256 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:08:30.234050   73256 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:08:30.234168   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:08:30.243066   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:30.266197   73256 start.go:296] duration metric: took 125.955153ms for postStartSetup
	I0930 21:08:30.266234   73256 fix.go:56] duration metric: took 20.349643145s for fixHost
	I0930 21:08:30.266252   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:30.269025   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.269405   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.269433   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.269576   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:30.269784   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.269910   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.270042   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:30.270176   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:30.270380   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:30.270392   73256 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:08:30.380023   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730510.354607586
	
	I0930 21:08:30.380057   73256 fix.go:216] guest clock: 1727730510.354607586
	I0930 21:08:30.380067   73256 fix.go:229] Guest: 2024-09-30 21:08:30.354607586 +0000 UTC Remote: 2024-09-30 21:08:30.266237543 +0000 UTC m=+355.815232104 (delta=88.370043ms)
	I0930 21:08:30.380085   73256 fix.go:200] guest clock delta is within tolerance: 88.370043ms
	I0930 21:08:30.380091   73256 start.go:83] releasing machines lock for "embed-certs-256103", held for 20.463544222s
	I0930 21:08:30.380113   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.380429   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetIP
	I0930 21:08:30.382992   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.383349   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.383369   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.383518   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.384071   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.384245   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.384310   73256 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:08:30.384374   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:30.384442   73256 ssh_runner.go:195] Run: cat /version.json
	I0930 21:08:30.384464   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:30.387098   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.387342   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.387413   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.387435   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.387633   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:30.387762   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.387783   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.387828   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.387931   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:30.388003   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:30.388058   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.388159   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:30.388208   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:08:30.388347   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:08:30.510981   73256 ssh_runner.go:195] Run: systemctl --version
	I0930 21:08:30.517215   73256 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:08:30.663491   73256 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:08:30.669568   73256 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:08:30.669652   73256 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:08:30.686640   73256 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 21:08:30.686663   73256 start.go:495] detecting cgroup driver to use...
	I0930 21:08:30.686737   73256 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:08:30.703718   73256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:08:30.718743   73256 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:08:30.718807   73256 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:08:30.733695   73256 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:08:30.748690   73256 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:08:30.878084   73256 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:08:31.040955   73256 docker.go:233] disabling docker service ...
	I0930 21:08:31.041030   73256 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:08:31.055212   73256 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:08:31.067968   73256 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:08:31.185043   73256 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:08:31.300909   73256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:08:31.315167   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:08:31.333483   73256 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 21:08:31.333537   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.343599   73256 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:08:31.343694   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.353739   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.363993   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.375183   73256 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:08:31.385478   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.395632   73256 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.412995   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.423277   73256 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:08:31.433183   73256 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:08:31.433253   73256 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:08:31.446796   73256 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 21:08:31.456912   73256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:31.571729   73256 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 21:08:31.663944   73256 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:08:31.664019   73256 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:08:31.669128   73256 start.go:563] Will wait 60s for crictl version
	I0930 21:08:31.669191   73256 ssh_runner.go:195] Run: which crictl
	I0930 21:08:31.672922   73256 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:08:31.709488   73256 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 21:08:31.709596   73256 ssh_runner.go:195] Run: crio --version
	I0930 21:08:31.738743   73256 ssh_runner.go:195] Run: crio --version
	I0930 21:08:31.771638   73256 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 21:08:27.922374   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:28.421993   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:28.921870   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:29.421786   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:29.921804   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:30.421482   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:30.921969   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:31.422241   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:31.922148   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:32.421504   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:31.773186   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetIP
	I0930 21:08:31.776392   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:31.776770   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:31.776810   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:31.777016   73256 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 21:08:31.781212   73256 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:08:31.793839   73256 kubeadm.go:883] updating cluster {Name:embed-certs-256103 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-256103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:08:31.793957   73256 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 21:08:31.794015   73256 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:31.834036   73256 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 21:08:31.834094   73256 ssh_runner.go:195] Run: which lz4
	I0930 21:08:31.837877   73256 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 21:08:31.842038   73256 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 21:08:31.842073   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 21:08:33.150975   73256 crio.go:462] duration metric: took 1.313131374s to copy over tarball
	I0930 21:08:33.151080   73256 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 21:08:30.469523   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:32.469562   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:34.969818   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:31.307560   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:33.308130   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:32.921516   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:33.421576   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:33.922082   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:34.421599   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:34.922178   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:35.422199   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:35.922061   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:36.421860   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:36.921513   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:37.422162   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:35.294750   73256 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.143629494s)
	I0930 21:08:35.294785   73256 crio.go:469] duration metric: took 2.143777794s to extract the tarball
	I0930 21:08:35.294794   73256 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 21:08:35.340151   73256 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:35.385329   73256 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 21:08:35.385359   73256 cache_images.go:84] Images are preloaded, skipping loading
	I0930 21:08:35.385366   73256 kubeadm.go:934] updating node { 192.168.39.90 8443 v1.31.1 crio true true} ...
	I0930 21:08:35.385463   73256 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-256103 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-256103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 21:08:35.385536   73256 ssh_runner.go:195] Run: crio config
	I0930 21:08:35.433043   73256 cni.go:84] Creating CNI manager for ""
	I0930 21:08:35.433072   73256 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:35.433084   73256 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:08:35.433113   73256 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-256103 NodeName:embed-certs-256103 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 21:08:35.433277   73256 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-256103"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 21:08:35.433348   73256 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 21:08:35.443627   73256 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:08:35.443713   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:08:35.453095   73256 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0930 21:08:35.469517   73256 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:08:35.486869   73256 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0930 21:08:35.504871   73256 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I0930 21:08:35.508507   73256 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:08:35.521994   73256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:35.641971   73256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:08:35.657660   73256 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103 for IP: 192.168.39.90
	I0930 21:08:35.657686   73256 certs.go:194] generating shared ca certs ...
	I0930 21:08:35.657705   73256 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:35.657878   73256 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:08:35.657941   73256 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:08:35.657954   73256 certs.go:256] generating profile certs ...
	I0930 21:08:35.658095   73256 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/client.key
	I0930 21:08:35.658177   73256 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/apiserver.key.52e83f0c
	I0930 21:08:35.658230   73256 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/proxy-client.key
	I0930 21:08:35.658391   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:08:35.658431   73256 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:08:35.658443   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:08:35.658476   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:08:35.658509   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:08:35.658539   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:08:35.658586   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:35.659279   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:08:35.695254   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:08:35.718948   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:08:35.742442   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:08:35.765859   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0930 21:08:35.792019   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 21:08:35.822081   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:08:35.845840   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 21:08:35.871635   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:08:35.896069   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:08:35.921595   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:08:35.946620   73256 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:08:35.963340   73256 ssh_runner.go:195] Run: openssl version
	I0930 21:08:35.970540   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:08:35.982269   73256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:08:35.987494   73256 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:08:35.987646   73256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:08:35.994312   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:08:36.006173   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:08:36.017605   73256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:36.022126   73256 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:36.022190   73256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:36.027806   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 21:08:36.038388   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:08:36.048818   73256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:08:36.053230   73256 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:08:36.053296   73256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:08:36.058713   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 21:08:36.070806   73256 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:08:36.075521   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:08:36.081310   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:08:36.086935   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:08:36.092990   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:08:36.098783   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:08:36.104354   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0930 21:08:36.110289   73256 kubeadm.go:392] StartCluster: {Name:embed-certs-256103 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-256103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:08:36.110411   73256 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:08:36.110495   73256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:36.153770   73256 cri.go:89] found id: ""
	I0930 21:08:36.153852   73256 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:08:36.164301   73256 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:08:36.164320   73256 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:08:36.164363   73256 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:08:36.173860   73256 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:08:36.174950   73256 kubeconfig.go:125] found "embed-certs-256103" server: "https://192.168.39.90:8443"
	I0930 21:08:36.177584   73256 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:08:36.186946   73256 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.90
	I0930 21:08:36.186984   73256 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:08:36.186998   73256 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:08:36.187045   73256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:36.223259   73256 cri.go:89] found id: ""
	I0930 21:08:36.223328   73256 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:08:36.239321   73256 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:08:36.248508   73256 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:08:36.248528   73256 kubeadm.go:157] found existing configuration files:
	
	I0930 21:08:36.248571   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:08:36.257483   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:08:36.257537   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:08:36.266792   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:08:36.275626   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:08:36.275697   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:08:36.285000   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:08:36.293923   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:08:36.293977   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:08:36.303990   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:08:36.313104   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:08:36.313158   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:08:36.322423   73256 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:08:36.332005   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:36.457666   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:37.309316   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:37.533114   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:37.602999   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:37.692027   73256 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:08:37.692117   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.192813   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.692777   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.192862   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:37.469941   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:39.506753   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:35.311295   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:37.806923   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:39.808338   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:37.921497   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.422360   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.922305   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.422480   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.922279   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.422089   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.922021   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:41.421727   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:41.921519   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:42.422193   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.692193   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.192178   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.209649   73256 api_server.go:72] duration metric: took 2.517618424s to wait for apiserver process to appear ...
	I0930 21:08:40.209676   73256 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:08:40.209699   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:43.034828   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:08:43.034857   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:08:43.034871   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:43.080073   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:08:43.080107   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:08:43.210448   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:43.217768   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:43.217799   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:43.710066   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:43.722379   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:43.722428   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:44.209939   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:44.219468   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:44.219500   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:44.709767   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:44.714130   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 200:
	ok
	I0930 21:08:44.720194   73256 api_server.go:141] control plane version: v1.31.1
	I0930 21:08:44.720221   73256 api_server.go:131] duration metric: took 4.510539442s to wait for apiserver health ...
	I0930 21:08:44.720230   73256 cni.go:84] Creating CNI manager for ""
	I0930 21:08:44.720236   73256 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:44.721740   73256 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 21:08:41.968377   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:44.469477   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:41.808473   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:43.808575   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:42.922495   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:43.422250   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:43.922413   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:44.421962   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:44.921682   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:45.422144   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:45.922206   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:46.422020   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:46.921960   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:47.422296   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:44.722947   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:08:44.733426   73256 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 21:08:44.750426   73256 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:08:44.761259   73256 system_pods.go:59] 8 kube-system pods found
	I0930 21:08:44.761303   73256 system_pods.go:61] "coredns-7c65d6cfc9-h6cl2" [548e3751-edc9-4232-87c2-2e64769ba332] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:08:44.761314   73256 system_pods.go:61] "etcd-embed-certs-256103" [6eef2e96-d4bf-4dd6-bd5c-bfb05c306182] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0930 21:08:44.761326   73256 system_pods.go:61] "kube-apiserver-embed-certs-256103" [81c02a52-aca7-4b9c-b7b1-680d27f48d40] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0930 21:08:44.761335   73256 system_pods.go:61] "kube-controller-manager-embed-certs-256103" [752f0966-7718-4523-8ba6-affd41bc956e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0930 21:08:44.761346   73256 system_pods.go:61] "kube-proxy-fqvg2" [284a63a1-d624-4bf3-8509-14ff0845f3a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0930 21:08:44.761354   73256 system_pods.go:61] "kube-scheduler-embed-certs-256103" [6158a51d-82ae-490a-96d3-c0e61a3485f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0930 21:08:44.761363   73256 system_pods.go:61] "metrics-server-6867b74b74-hkp9m" [8774a772-bb72-4419-96fd-50ca5f48a5b6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:08:44.761374   73256 system_pods.go:61] "storage-provisioner" [9649e71d-cd21-4846-bf66-1c5b469500ba] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0930 21:08:44.761385   73256 system_pods.go:74] duration metric: took 10.935916ms to wait for pod list to return data ...
	I0930 21:08:44.761397   73256 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:08:44.771745   73256 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:08:44.771777   73256 node_conditions.go:123] node cpu capacity is 2
	I0930 21:08:44.771789   73256 node_conditions.go:105] duration metric: took 10.386814ms to run NodePressure ...
	I0930 21:08:44.771810   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:45.064019   73256 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0930 21:08:45.070479   73256 kubeadm.go:739] kubelet initialised
	I0930 21:08:45.070508   73256 kubeadm.go:740] duration metric: took 6.461143ms waiting for restarted kubelet to initialise ...
	I0930 21:08:45.070517   73256 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:45.074627   73256 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-h6cl2" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.080873   73256 pod_ready.go:98] node "embed-certs-256103" hosting pod "coredns-7c65d6cfc9-h6cl2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.080897   73256 pod_ready.go:82] duration metric: took 6.244301ms for pod "coredns-7c65d6cfc9-h6cl2" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:45.080906   73256 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-256103" hosting pod "coredns-7c65d6cfc9-h6cl2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.080912   73256 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.086787   73256 pod_ready.go:98] node "embed-certs-256103" hosting pod "etcd-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.086818   73256 pod_ready.go:82] duration metric: took 5.898265ms for pod "etcd-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:45.086829   73256 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-256103" hosting pod "etcd-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.086837   73256 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.092860   73256 pod_ready.go:98] node "embed-certs-256103" hosting pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.092892   73256 pod_ready.go:82] duration metric: took 6.044766ms for pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:45.092904   73256 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-256103" hosting pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.092912   73256 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.154246   73256 pod_ready.go:98] node "embed-certs-256103" hosting pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.154271   73256 pod_ready.go:82] duration metric: took 61.348653ms for pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:45.154281   73256 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-256103" hosting pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.154287   73256 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fqvg2" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.554606   73256 pod_ready.go:93] pod "kube-proxy-fqvg2" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:45.554630   73256 pod_ready.go:82] duration metric: took 400.335084ms for pod "kube-proxy-fqvg2" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.554639   73256 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:47.559998   73256 pod_ready.go:103] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:46.968101   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:48.968649   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:46.307946   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:48.806624   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:47.921903   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:48.422535   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:48.921484   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:49.421909   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:49.922117   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:50.421606   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:50.921728   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:51.421600   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:51.921716   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:52.421873   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:49.561176   73256 pod_ready.go:103] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:51.562227   73256 pod_ready.go:103] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:54.060692   73256 pod_ready.go:103] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:51.467375   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:53.473247   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:50.807821   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:53.307163   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:52.922106   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:53.421968   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:53.921496   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:54.421866   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:54.921995   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:55.421476   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:55.922106   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:56.421660   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:56.922489   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:57.422291   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:54.562740   73256 pod_ready.go:93] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:54.562765   73256 pod_ready.go:82] duration metric: took 9.008120147s for pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:54.562775   73256 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:56.570517   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:59.070065   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:55.969724   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:58.467585   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:55.807669   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:58.305837   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:57.921737   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:58.421968   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:58.922007   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:59.422173   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:59.921803   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:00.421596   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:00.922123   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:01.422186   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:01.921898   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:02.421894   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:01.070940   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:03.569053   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:00.469160   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:02.968692   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:00.308195   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:02.807474   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:04.808710   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:02.922329   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:03.421922   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:03.922360   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:04.421875   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:04.922544   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:05.421939   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:05.921693   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:06.422056   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:06.921627   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:07.422125   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:06.070166   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:08.568945   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:05.467300   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:07.469409   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:09.968053   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:07.306237   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:09.306644   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:07.921687   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:08.421694   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:08.922234   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:09.421817   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:09.921704   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:10.422030   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:10.921597   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:11.421700   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:11.922301   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:12.421567   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:10.569444   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:13.069582   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:11.970180   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:14.469440   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:11.307287   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:13.307376   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:12.922171   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:13.422423   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:13.921941   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:14.422494   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:14.922454   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:15.421776   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:15.922567   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:16.421713   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:16.922449   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:17.421644   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:15.569398   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:18.069177   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:16.968663   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:19.468171   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:15.808689   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:18.307774   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:17.922098   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:18.421993   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:18.922084   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:19.421717   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:19.922095   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:19.922178   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:19.962975   73900 cri.go:89] found id: ""
	I0930 21:09:19.963002   73900 logs.go:276] 0 containers: []
	W0930 21:09:19.963014   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:19.963020   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:19.963073   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:19.999741   73900 cri.go:89] found id: ""
	I0930 21:09:19.999769   73900 logs.go:276] 0 containers: []
	W0930 21:09:19.999777   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:19.999782   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:19.999840   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:20.035818   73900 cri.go:89] found id: ""
	I0930 21:09:20.035844   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.035856   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:20.035863   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:20.035924   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:20.072005   73900 cri.go:89] found id: ""
	I0930 21:09:20.072032   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.072042   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:20.072048   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:20.072110   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:20.108229   73900 cri.go:89] found id: ""
	I0930 21:09:20.108258   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.108314   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:20.108325   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:20.108383   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:20.141331   73900 cri.go:89] found id: ""
	I0930 21:09:20.141388   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.141398   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:20.141406   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:20.141466   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:20.175133   73900 cri.go:89] found id: ""
	I0930 21:09:20.175161   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.175169   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:20.175175   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:20.175223   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:20.210529   73900 cri.go:89] found id: ""
	I0930 21:09:20.210566   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.210578   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:20.210594   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:20.210608   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:20.261055   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:20.261095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:20.274212   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:20.274239   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:20.406215   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:20.406246   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:20.406282   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:20.481758   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:20.481794   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:20.069672   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:22.569421   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:21.468616   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:23.468820   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:20.309317   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:22.807149   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:24.807293   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:23.019687   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:23.033394   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:23.033450   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:23.078558   73900 cri.go:89] found id: ""
	I0930 21:09:23.078592   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.078604   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:23.078611   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:23.078673   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:23.117833   73900 cri.go:89] found id: ""
	I0930 21:09:23.117860   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.117868   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:23.117875   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:23.117931   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:23.157299   73900 cri.go:89] found id: ""
	I0930 21:09:23.157337   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.157359   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:23.157367   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:23.157438   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:23.196545   73900 cri.go:89] found id: ""
	I0930 21:09:23.196570   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.196579   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:23.196586   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:23.196644   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:23.229359   73900 cri.go:89] found id: ""
	I0930 21:09:23.229390   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.229401   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:23.229409   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:23.229471   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:23.264847   73900 cri.go:89] found id: ""
	I0930 21:09:23.264881   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.264893   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:23.264900   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:23.264962   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:23.298657   73900 cri.go:89] found id: ""
	I0930 21:09:23.298687   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.298695   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:23.298701   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:23.298750   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:23.333787   73900 cri.go:89] found id: ""
	I0930 21:09:23.333816   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.333826   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:23.333836   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:23.333851   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:23.386311   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:23.386347   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:23.400096   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:23.400129   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:23.481724   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:23.481748   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:23.481780   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:23.561080   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:23.561119   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:26.122460   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:26.136409   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:26.136495   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:26.170785   73900 cri.go:89] found id: ""
	I0930 21:09:26.170818   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.170832   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:26.170866   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:26.170945   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:26.205211   73900 cri.go:89] found id: ""
	I0930 21:09:26.205265   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.205275   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:26.205281   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:26.205335   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:26.239242   73900 cri.go:89] found id: ""
	I0930 21:09:26.239276   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.239285   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:26.239291   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:26.239337   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:26.272908   73900 cri.go:89] found id: ""
	I0930 21:09:26.272932   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.272940   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:26.272946   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:26.272993   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:26.311599   73900 cri.go:89] found id: ""
	I0930 21:09:26.311625   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.311632   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:26.311639   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:26.311684   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:26.345719   73900 cri.go:89] found id: ""
	I0930 21:09:26.345746   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.345754   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:26.345760   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:26.345816   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:26.383513   73900 cri.go:89] found id: ""
	I0930 21:09:26.383562   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.383572   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:26.383578   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:26.383637   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:26.418533   73900 cri.go:89] found id: ""
	I0930 21:09:26.418565   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.418574   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:26.418584   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:26.418594   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:26.456635   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:26.456660   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:26.507639   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:26.507686   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:26.521069   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:26.521095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:26.594745   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:26.594768   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:26.594781   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:24.569626   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:26.570133   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:29.069071   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:25.968851   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:27.974091   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:26.808336   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:29.308328   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:29.180142   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:29.194730   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:29.194785   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:29.234054   73900 cri.go:89] found id: ""
	I0930 21:09:29.234094   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.234103   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:29.234109   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:29.234156   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:29.280869   73900 cri.go:89] found id: ""
	I0930 21:09:29.280896   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.280907   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:29.280914   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:29.280988   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:29.348376   73900 cri.go:89] found id: ""
	I0930 21:09:29.348406   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.348417   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:29.348424   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:29.348491   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:29.404218   73900 cri.go:89] found id: ""
	I0930 21:09:29.404251   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.404261   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:29.404268   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:29.404344   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:29.449029   73900 cri.go:89] found id: ""
	I0930 21:09:29.449053   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.449061   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:29.449066   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:29.449127   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:29.484917   73900 cri.go:89] found id: ""
	I0930 21:09:29.484939   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.484948   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:29.484954   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:29.485002   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:29.517150   73900 cri.go:89] found id: ""
	I0930 21:09:29.517177   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.517185   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:29.517191   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:29.517259   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:29.550410   73900 cri.go:89] found id: ""
	I0930 21:09:29.550443   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.550452   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:29.550461   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:29.550472   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:29.601757   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:29.601803   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:29.616266   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:29.616299   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:29.686206   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:29.686228   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:29.686240   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:29.761765   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:29.761810   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:32.299199   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:32.315047   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:32.315125   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:32.349784   73900 cri.go:89] found id: ""
	I0930 21:09:32.349810   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.349819   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:32.349824   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:32.349871   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:32.385887   73900 cri.go:89] found id: ""
	I0930 21:09:32.385916   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.385927   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:32.385935   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:32.385994   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:32.421746   73900 cri.go:89] found id: ""
	I0930 21:09:32.421776   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.421789   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:32.421796   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:32.421856   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:32.459361   73900 cri.go:89] found id: ""
	I0930 21:09:32.459391   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.459404   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:32.459411   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:32.459470   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:32.495919   73900 cri.go:89] found id: ""
	I0930 21:09:32.495947   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.495960   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:32.495966   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:32.496025   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:32.533626   73900 cri.go:89] found id: ""
	I0930 21:09:32.533652   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.533663   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:32.533670   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:32.533729   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:32.567577   73900 cri.go:89] found id: ""
	I0930 21:09:32.567610   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.567623   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:32.567630   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:32.567687   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:32.604949   73900 cri.go:89] found id: ""
	I0930 21:09:32.604981   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.604991   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:32.605001   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:32.605014   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:32.656781   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:32.656822   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:32.670116   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:32.670144   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:32.736712   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:32.736736   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:32.736751   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:31.070228   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:33.569488   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:30.469162   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:32.469874   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:34.967596   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:31.807682   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:33.807723   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:32.813502   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:32.813556   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:35.354372   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:35.369226   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:35.369303   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:35.408374   73900 cri.go:89] found id: ""
	I0930 21:09:35.408402   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.408414   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:35.408421   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:35.408481   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:35.442390   73900 cri.go:89] found id: ""
	I0930 21:09:35.442432   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.442440   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:35.442445   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:35.442524   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:35.479624   73900 cri.go:89] found id: ""
	I0930 21:09:35.479651   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.479659   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:35.479664   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:35.479711   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:35.518580   73900 cri.go:89] found id: ""
	I0930 21:09:35.518609   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.518617   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:35.518623   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:35.518675   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:35.553547   73900 cri.go:89] found id: ""
	I0930 21:09:35.553582   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.553590   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:35.553604   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:35.553669   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:35.596444   73900 cri.go:89] found id: ""
	I0930 21:09:35.596476   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.596487   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:35.596495   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:35.596583   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:35.634232   73900 cri.go:89] found id: ""
	I0930 21:09:35.634259   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.634268   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:35.634274   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:35.634322   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:35.669637   73900 cri.go:89] found id: ""
	I0930 21:09:35.669672   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.669683   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:35.669694   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:35.669706   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:35.719433   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:35.719469   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:35.733383   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:35.733415   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:35.811860   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:35.811887   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:35.811913   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:35.896206   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:35.896272   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:35.569694   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:37.570548   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:36.968789   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:38.968959   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:35.814006   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:38.306676   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:38.435999   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:38.450091   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:38.450152   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:38.489127   73900 cri.go:89] found id: ""
	I0930 21:09:38.489153   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.489161   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:38.489166   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:38.489221   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:38.520760   73900 cri.go:89] found id: ""
	I0930 21:09:38.520783   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.520792   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:38.520798   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:38.520847   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:38.556279   73900 cri.go:89] found id: ""
	I0930 21:09:38.556306   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.556315   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:38.556319   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:38.556379   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:38.590804   73900 cri.go:89] found id: ""
	I0930 21:09:38.590827   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.590834   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:38.590840   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:38.590906   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:38.624765   73900 cri.go:89] found id: ""
	I0930 21:09:38.624792   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.624800   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:38.624805   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:38.624857   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:38.660587   73900 cri.go:89] found id: ""
	I0930 21:09:38.660614   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.660625   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:38.660635   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:38.660702   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:38.693314   73900 cri.go:89] found id: ""
	I0930 21:09:38.693352   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.693362   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:38.693371   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:38.693441   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:38.729163   73900 cri.go:89] found id: ""
	I0930 21:09:38.729197   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.729212   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:38.729223   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:38.729235   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:38.780787   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:38.780828   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:38.794983   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:38.795009   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:38.861886   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:38.861911   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:38.861926   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:38.936958   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:38.936994   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:41.479891   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:41.493041   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:41.493106   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:41.528855   73900 cri.go:89] found id: ""
	I0930 21:09:41.528889   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.528900   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:41.528906   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:41.528967   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:41.565193   73900 cri.go:89] found id: ""
	I0930 21:09:41.565216   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.565224   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:41.565230   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:41.565289   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:41.599503   73900 cri.go:89] found id: ""
	I0930 21:09:41.599538   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.599547   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:41.599553   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:41.599611   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:41.636623   73900 cri.go:89] found id: ""
	I0930 21:09:41.636651   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.636663   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:41.636671   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:41.636728   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:41.671727   73900 cri.go:89] found id: ""
	I0930 21:09:41.671753   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.671760   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:41.671765   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:41.671819   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:41.705499   73900 cri.go:89] found id: ""
	I0930 21:09:41.705533   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.705543   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:41.705549   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:41.705602   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:41.738262   73900 cri.go:89] found id: ""
	I0930 21:09:41.738285   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.738292   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:41.738297   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:41.738351   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:41.774232   73900 cri.go:89] found id: ""
	I0930 21:09:41.774261   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.774269   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:41.774277   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:41.774288   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:41.826060   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:41.826093   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:41.839308   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:41.839335   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:41.908599   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:41.908626   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:41.908640   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:41.986337   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:41.986375   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:40.069900   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:42.070035   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:41.469908   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:43.968111   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:40.307200   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:42.308356   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:44.807663   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:44.527015   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:44.539973   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:44.540036   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:44.575985   73900 cri.go:89] found id: ""
	I0930 21:09:44.576012   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.576021   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:44.576027   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:44.576076   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:44.612693   73900 cri.go:89] found id: ""
	I0930 21:09:44.612724   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.612736   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:44.612743   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:44.612809   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:44.646515   73900 cri.go:89] found id: ""
	I0930 21:09:44.646544   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.646555   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:44.646562   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:44.646623   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:44.679980   73900 cri.go:89] found id: ""
	I0930 21:09:44.680011   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.680022   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:44.680030   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:44.680089   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:44.714078   73900 cri.go:89] found id: ""
	I0930 21:09:44.714117   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.714128   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:44.714135   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:44.714193   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:44.748491   73900 cri.go:89] found id: ""
	I0930 21:09:44.748521   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.748531   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:44.748539   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:44.748618   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:44.780902   73900 cri.go:89] found id: ""
	I0930 21:09:44.780936   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.780947   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:44.780955   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:44.781013   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:44.817944   73900 cri.go:89] found id: ""
	I0930 21:09:44.817999   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.818011   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:44.818022   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:44.818038   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:44.873896   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:44.873926   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:44.887829   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:44.887858   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:44.957562   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:44.957584   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:44.957598   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:45.037892   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:45.037934   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:47.583013   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:47.595799   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:47.595870   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:47.630348   73900 cri.go:89] found id: ""
	I0930 21:09:47.630377   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.630385   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:47.630391   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:47.630444   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:47.663416   73900 cri.go:89] found id: ""
	I0930 21:09:47.663440   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.663448   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:47.663454   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:47.663500   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:47.700145   73900 cri.go:89] found id: ""
	I0930 21:09:47.700174   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.700184   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:47.700192   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:47.700253   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:47.732539   73900 cri.go:89] found id: ""
	I0930 21:09:47.732567   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.732577   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:47.732583   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:47.732637   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:44.569951   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:46.570501   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:48.574018   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:45.971063   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:48.468661   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:47.307709   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:49.806843   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:47.764470   73900 cri.go:89] found id: ""
	I0930 21:09:47.764493   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.764501   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:47.764507   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:47.764553   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:47.802365   73900 cri.go:89] found id: ""
	I0930 21:09:47.802393   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.802403   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:47.802411   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:47.802468   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:47.836504   73900 cri.go:89] found id: ""
	I0930 21:09:47.836531   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.836542   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:47.836549   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:47.836611   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:47.870315   73900 cri.go:89] found id: ""
	I0930 21:09:47.870338   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.870351   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:47.870359   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:47.870370   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:47.919974   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:47.920011   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:47.934157   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:47.934190   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:48.003046   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:48.003072   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:48.003085   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:48.084947   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:48.084985   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:50.624791   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:50.638118   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:50.638196   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:50.672448   73900 cri.go:89] found id: ""
	I0930 21:09:50.672479   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.672488   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:50.672503   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:50.672557   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:50.706057   73900 cri.go:89] found id: ""
	I0930 21:09:50.706080   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.706088   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:50.706093   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:50.706142   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:50.738101   73900 cri.go:89] found id: ""
	I0930 21:09:50.738126   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.738134   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:50.738140   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:50.738207   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:50.772483   73900 cri.go:89] found id: ""
	I0930 21:09:50.772508   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.772516   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:50.772522   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:50.772581   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:50.805169   73900 cri.go:89] found id: ""
	I0930 21:09:50.805200   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.805211   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:50.805220   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:50.805276   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:50.842144   73900 cri.go:89] found id: ""
	I0930 21:09:50.842168   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.842176   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:50.842182   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:50.842236   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:50.875512   73900 cri.go:89] found id: ""
	I0930 21:09:50.875563   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.875575   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:50.875582   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:50.875643   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:50.909549   73900 cri.go:89] found id: ""
	I0930 21:09:50.909580   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.909591   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:50.909599   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:50.909610   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:50.962064   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:50.962098   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:50.976979   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:50.977012   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:51.053784   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:51.053815   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:51.053833   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:51.130939   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:51.130975   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:51.069919   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:53.568708   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:50.468737   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:52.968935   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:52.306733   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:54.306875   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:53.667675   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:53.680381   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:53.680449   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:53.712759   73900 cri.go:89] found id: ""
	I0930 21:09:53.712791   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.712800   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:53.712807   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:53.712871   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:53.748958   73900 cri.go:89] found id: ""
	I0930 21:09:53.748990   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.749002   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:53.749009   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:53.749078   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:53.783243   73900 cri.go:89] found id: ""
	I0930 21:09:53.783272   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.783282   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:53.783289   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:53.783382   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:53.823848   73900 cri.go:89] found id: ""
	I0930 21:09:53.823875   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.823883   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:53.823890   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:53.823941   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:53.865607   73900 cri.go:89] found id: ""
	I0930 21:09:53.865635   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.865643   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:53.865648   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:53.865693   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:53.900888   73900 cri.go:89] found id: ""
	I0930 21:09:53.900912   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.900920   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:53.900926   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:53.900985   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:53.933688   73900 cri.go:89] found id: ""
	I0930 21:09:53.933717   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.933728   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:53.933736   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:53.933798   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:53.968702   73900 cri.go:89] found id: ""
	I0930 21:09:53.968731   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.968740   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:53.968749   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:53.968760   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:54.021588   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:54.021626   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:54.036681   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:54.036719   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:54.112189   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:54.112209   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:54.112223   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:54.185028   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:54.185085   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:56.725146   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:56.739358   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:56.739421   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:56.779278   73900 cri.go:89] found id: ""
	I0930 21:09:56.779313   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.779322   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:56.779329   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:56.779377   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:56.815972   73900 cri.go:89] found id: ""
	I0930 21:09:56.816000   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.816011   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:56.816018   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:56.816084   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:56.849425   73900 cri.go:89] found id: ""
	I0930 21:09:56.849458   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.849471   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:56.849478   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:56.849542   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:56.885483   73900 cri.go:89] found id: ""
	I0930 21:09:56.885510   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.885520   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:56.885527   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:56.885586   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:56.917832   73900 cri.go:89] found id: ""
	I0930 21:09:56.917862   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.917872   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:56.917879   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:56.917932   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:56.951613   73900 cri.go:89] found id: ""
	I0930 21:09:56.951643   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.951654   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:56.951664   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:56.951726   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:56.987577   73900 cri.go:89] found id: ""
	I0930 21:09:56.987608   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.987620   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:56.987628   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:56.987691   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:57.024871   73900 cri.go:89] found id: ""
	I0930 21:09:57.024903   73900 logs.go:276] 0 containers: []
	W0930 21:09:57.024912   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:57.024920   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:57.024935   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:57.038279   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:57.038309   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:57.111955   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:57.111985   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:57.111998   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:57.193719   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:57.193755   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:57.230058   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:57.230085   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:55.568928   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:58.069462   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:55.467583   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:57.968380   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:59.969131   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:56.807753   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:58.808055   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:59.780762   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:59.794210   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:59.794277   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:59.828258   73900 cri.go:89] found id: ""
	I0930 21:09:59.828287   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.828298   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:59.828306   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:59.828369   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:59.868295   73900 cri.go:89] found id: ""
	I0930 21:09:59.868331   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.868353   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:59.868363   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:59.868437   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:59.900298   73900 cri.go:89] found id: ""
	I0930 21:09:59.900326   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.900337   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:59.900343   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:59.900403   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:59.934081   73900 cri.go:89] found id: ""
	I0930 21:09:59.934108   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.934120   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:59.934127   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:59.934183   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:59.970564   73900 cri.go:89] found id: ""
	I0930 21:09:59.970592   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.970600   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:59.970605   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:59.970652   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:00.006215   73900 cri.go:89] found id: ""
	I0930 21:10:00.006249   73900 logs.go:276] 0 containers: []
	W0930 21:10:00.006259   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:00.006270   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:00.006348   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:00.040106   73900 cri.go:89] found id: ""
	I0930 21:10:00.040135   73900 logs.go:276] 0 containers: []
	W0930 21:10:00.040144   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:00.040150   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:00.040202   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:00.079310   73900 cri.go:89] found id: ""
	I0930 21:10:00.079345   73900 logs.go:276] 0 containers: []
	W0930 21:10:00.079354   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:00.079365   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:00.079378   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:00.161243   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:00.161284   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:00.198911   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:00.198941   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:00.247697   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:00.247735   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:00.260905   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:00.260933   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:00.332502   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:00.569218   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:02.569371   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:02.468439   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:04.968585   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:00.808753   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:03.306574   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:02.833204   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:02.846807   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:02.846893   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:02.882386   73900 cri.go:89] found id: ""
	I0930 21:10:02.882420   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.882431   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:02.882439   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:02.882504   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:02.918589   73900 cri.go:89] found id: ""
	I0930 21:10:02.918617   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.918633   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:02.918642   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:02.918722   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:02.952758   73900 cri.go:89] found id: ""
	I0930 21:10:02.952789   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.952799   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:02.952806   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:02.952871   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:02.991406   73900 cri.go:89] found id: ""
	I0930 21:10:02.991439   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.991448   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:02.991454   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:02.991511   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:03.030075   73900 cri.go:89] found id: ""
	I0930 21:10:03.030104   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.030112   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:03.030121   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:03.030172   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:03.063630   73900 cri.go:89] found id: ""
	I0930 21:10:03.063654   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.063662   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:03.063668   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:03.063718   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:03.098607   73900 cri.go:89] found id: ""
	I0930 21:10:03.098636   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.098644   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:03.098649   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:03.098702   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:03.133161   73900 cri.go:89] found id: ""
	I0930 21:10:03.133189   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.133198   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:03.133206   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:03.133217   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:03.211046   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:03.211083   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:03.252585   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:03.252615   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:03.307019   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:03.307049   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:03.320781   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:03.320811   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:03.408645   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:05.909638   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:05.922674   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:05.922744   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:05.955264   73900 cri.go:89] found id: ""
	I0930 21:10:05.955305   73900 logs.go:276] 0 containers: []
	W0930 21:10:05.955318   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:05.955326   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:05.955378   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:05.991055   73900 cri.go:89] found id: ""
	I0930 21:10:05.991100   73900 logs.go:276] 0 containers: []
	W0930 21:10:05.991122   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:05.991130   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:05.991194   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:06.025725   73900 cri.go:89] found id: ""
	I0930 21:10:06.025755   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.025766   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:06.025773   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:06.025832   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:06.067700   73900 cri.go:89] found id: ""
	I0930 21:10:06.067726   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.067736   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:06.067743   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:06.067801   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:06.102729   73900 cri.go:89] found id: ""
	I0930 21:10:06.102760   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.102771   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:06.102784   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:06.102845   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:06.137120   73900 cri.go:89] found id: ""
	I0930 21:10:06.137148   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.137159   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:06.137164   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:06.137215   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:06.169985   73900 cri.go:89] found id: ""
	I0930 21:10:06.170014   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.170023   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:06.170029   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:06.170082   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:06.206928   73900 cri.go:89] found id: ""
	I0930 21:10:06.206951   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.206959   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:06.206967   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:06.206977   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:06.258835   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:06.258870   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:06.273527   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:06.273556   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:06.351335   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:06.351359   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:06.351373   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:06.423412   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:06.423450   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:04.569756   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:07.069437   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:09.074024   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:06.969500   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:09.471298   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:05.807932   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:08.306749   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:08.968986   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:08.984075   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:08.984139   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:09.016815   73900 cri.go:89] found id: ""
	I0930 21:10:09.016847   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.016858   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:09.016864   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:09.016928   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:09.051603   73900 cri.go:89] found id: ""
	I0930 21:10:09.051626   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.051633   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:09.051639   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:09.051693   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:09.088820   73900 cri.go:89] found id: ""
	I0930 21:10:09.088856   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.088870   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:09.088884   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:09.088949   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:09.124032   73900 cri.go:89] found id: ""
	I0930 21:10:09.124064   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.124076   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:09.124083   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:09.124140   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:09.177129   73900 cri.go:89] found id: ""
	I0930 21:10:09.177161   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.177172   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:09.177178   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:09.177228   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:09.211490   73900 cri.go:89] found id: ""
	I0930 21:10:09.211513   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.211521   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:09.211540   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:09.211605   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:09.252187   73900 cri.go:89] found id: ""
	I0930 21:10:09.252211   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.252221   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:09.252229   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:09.252289   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:09.286970   73900 cri.go:89] found id: ""
	I0930 21:10:09.287004   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.287012   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:09.287020   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:09.287031   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:09.369387   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:09.369410   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:09.369422   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:09.450685   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:09.450733   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:09.491302   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:09.491331   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:09.540183   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:09.540219   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:12.054793   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:12.068635   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:12.068717   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:12.103118   73900 cri.go:89] found id: ""
	I0930 21:10:12.103140   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.103149   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:12.103154   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:12.103219   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:12.137992   73900 cri.go:89] found id: ""
	I0930 21:10:12.138020   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.138031   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:12.138040   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:12.138103   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:12.175559   73900 cri.go:89] found id: ""
	I0930 21:10:12.175591   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.175609   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:12.175616   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:12.175678   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:12.209630   73900 cri.go:89] found id: ""
	I0930 21:10:12.209655   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.209666   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:12.209672   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:12.209735   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:12.245844   73900 cri.go:89] found id: ""
	I0930 21:10:12.245879   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.245891   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:12.245901   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:12.245961   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:12.280385   73900 cri.go:89] found id: ""
	I0930 21:10:12.280412   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.280420   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:12.280426   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:12.280484   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:12.315424   73900 cri.go:89] found id: ""
	I0930 21:10:12.315453   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.315463   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:12.315473   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:12.315566   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:12.349223   73900 cri.go:89] found id: ""
	I0930 21:10:12.349251   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.349270   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:12.349279   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:12.349291   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:12.362360   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:12.362397   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:12.432060   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:12.432084   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:12.432101   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:12.506059   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:12.506096   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:12.541319   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:12.541348   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:11.568740   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:13.569690   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:11.968234   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:13.968634   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:10.306903   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:12.307072   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:14.807562   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:15.098852   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:15.111919   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:15.112001   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:15.149174   73900 cri.go:89] found id: ""
	I0930 21:10:15.149206   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.149216   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:15.149223   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:15.149286   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:15.187283   73900 cri.go:89] found id: ""
	I0930 21:10:15.187316   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.187326   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:15.187333   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:15.187392   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:15.223896   73900 cri.go:89] found id: ""
	I0930 21:10:15.223922   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.223933   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:15.223940   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:15.224000   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:15.260530   73900 cri.go:89] found id: ""
	I0930 21:10:15.260559   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.260567   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:15.260573   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:15.260634   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:15.296319   73900 cri.go:89] found id: ""
	I0930 21:10:15.296346   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.296357   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:15.296363   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:15.296425   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:15.333785   73900 cri.go:89] found id: ""
	I0930 21:10:15.333830   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.333843   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:15.333856   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:15.333932   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:15.368235   73900 cri.go:89] found id: ""
	I0930 21:10:15.368268   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.368280   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:15.368288   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:15.368354   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:15.408155   73900 cri.go:89] found id: ""
	I0930 21:10:15.408184   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.408192   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:15.408200   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:15.408210   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:15.462018   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:15.462058   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:15.477345   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:15.477376   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:15.558398   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:15.558423   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:15.558442   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:15.662269   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:15.662311   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:15.569988   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:18.069056   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:16.467859   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:18.468764   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:17.307469   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:19.809316   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:18.199477   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:18.213235   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:18.213320   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:18.250379   73900 cri.go:89] found id: ""
	I0930 21:10:18.250409   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.250418   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:18.250424   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:18.250515   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:18.283381   73900 cri.go:89] found id: ""
	I0930 21:10:18.283407   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.283416   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:18.283422   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:18.283482   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:18.321601   73900 cri.go:89] found id: ""
	I0930 21:10:18.321635   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.321646   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:18.321659   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:18.321720   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:18.354210   73900 cri.go:89] found id: ""
	I0930 21:10:18.354242   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.354254   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:18.354262   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:18.354330   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:18.391982   73900 cri.go:89] found id: ""
	I0930 21:10:18.392019   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.392029   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:18.392035   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:18.392150   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:18.428826   73900 cri.go:89] found id: ""
	I0930 21:10:18.428851   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.428862   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:18.428870   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:18.428927   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:18.465841   73900 cri.go:89] found id: ""
	I0930 21:10:18.465868   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.465878   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:18.465887   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:18.465934   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:18.502747   73900 cri.go:89] found id: ""
	I0930 21:10:18.502775   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.502783   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:18.502793   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:18.502807   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:18.558025   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:18.558064   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:18.572356   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:18.572383   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:18.642994   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:18.643020   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:18.643033   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:18.722804   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:18.722845   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:21.262790   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:21.276427   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:21.276510   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:21.323245   73900 cri.go:89] found id: ""
	I0930 21:10:21.323274   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.323284   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:21.323291   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:21.323377   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:21.381684   73900 cri.go:89] found id: ""
	I0930 21:10:21.381725   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.381736   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:21.381744   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:21.381813   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:21.428818   73900 cri.go:89] found id: ""
	I0930 21:10:21.428841   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.428849   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:21.428854   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:21.428901   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:21.462906   73900 cri.go:89] found id: ""
	I0930 21:10:21.462935   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.462944   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:21.462949   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:21.462995   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:21.502417   73900 cri.go:89] found id: ""
	I0930 21:10:21.502452   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.502464   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:21.502471   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:21.502535   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:21.540004   73900 cri.go:89] found id: ""
	I0930 21:10:21.540037   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.540048   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:21.540056   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:21.540105   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:21.574898   73900 cri.go:89] found id: ""
	I0930 21:10:21.574929   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.574937   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:21.574942   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:21.574999   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:21.609438   73900 cri.go:89] found id: ""
	I0930 21:10:21.609465   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.609473   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:21.609496   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:21.609524   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:21.646651   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:21.646679   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:21.702406   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:21.702451   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:21.716226   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:21.716260   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:21.790089   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:21.790115   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:21.790128   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:20.070823   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:22.568856   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:20.968069   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:22.968208   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:22.307376   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:24.808780   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:24.368291   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:24.381517   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:24.381588   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:24.416535   73900 cri.go:89] found id: ""
	I0930 21:10:24.416559   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.416570   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:24.416577   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:24.416635   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:24.454444   73900 cri.go:89] found id: ""
	I0930 21:10:24.454472   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.454480   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:24.454485   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:24.454537   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:24.492334   73900 cri.go:89] found id: ""
	I0930 21:10:24.492359   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.492367   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:24.492373   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:24.492419   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:24.527590   73900 cri.go:89] found id: ""
	I0930 21:10:24.527622   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.527633   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:24.527642   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:24.527708   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:24.564819   73900 cri.go:89] found id: ""
	I0930 21:10:24.564844   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.564853   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:24.564858   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:24.564915   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:24.599367   73900 cri.go:89] found id: ""
	I0930 21:10:24.599390   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.599398   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:24.599403   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:24.599450   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:24.636738   73900 cri.go:89] found id: ""
	I0930 21:10:24.636767   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.636778   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:24.636785   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:24.636845   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:24.669607   73900 cri.go:89] found id: ""
	I0930 21:10:24.669640   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.669651   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:24.669663   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:24.669680   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:24.722662   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:24.722696   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:24.736150   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:24.736179   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:24.812022   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:24.812053   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:24.812069   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:24.891291   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:24.891330   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
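	The cycle above repeats for the rest of this retry loop: with no kube-apiserver responding, minikube probes each expected control-plane container by name and then collects node-level diagnostics. As a minimal shell sketch (the loop wrapper is illustrative; the individual commands are the ones the log shows being run over SSH):
	
	    # Probe for the expected control-plane containers; every probe in this log returns nothing.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="$name"
	    done
	
	    # Gather the same node-level diagnostics the log collects after the probes.
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo crictl ps -a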
	I0930 21:10:27.430595   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:27.443990   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:27.444054   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:27.480204   73900 cri.go:89] found id: ""
	I0930 21:10:27.480230   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.480237   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:27.480243   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:27.480297   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:27.516959   73900 cri.go:89] found id: ""
	I0930 21:10:27.516982   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.516989   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:27.516995   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:27.517041   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:27.549717   73900 cri.go:89] found id: ""
	I0930 21:10:27.549745   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.549758   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:27.549769   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:27.549821   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:27.584512   73900 cri.go:89] found id: ""
	I0930 21:10:27.584539   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.584549   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:27.584560   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:27.584619   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:27.623551   73900 cri.go:89] found id: ""
	I0930 21:10:27.623586   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.623603   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:27.623612   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:27.623679   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:27.662453   73900 cri.go:89] found id: ""
	I0930 21:10:27.662478   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.662486   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:27.662493   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:27.662554   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:27.695665   73900 cri.go:89] found id: ""
	I0930 21:10:27.695693   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.695701   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:27.695707   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:27.695765   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:27.729090   73900 cri.go:89] found id: ""
	I0930 21:10:27.729129   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.729137   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:27.729146   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:27.729155   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:24.570129   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:26.572751   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:29.069340   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:25.468598   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:27.469443   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:29.970417   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:27.307766   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:29.806538   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
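	The pod_ready lines interleaved through this section come from three other test processes running in parallel (PIDs 73256, 73375 and 73707), each polling a metrics-server pod in kube-system that never reports Ready. A hedged kubectl equivalent of that readiness check (the jsonpath query is an illustration; the pod name and namespace are taken from the log):
	
	    # Prints "True" once the pod's Ready condition is satisfied; it stays "False" throughout this log.
	    kubectl -n kube-system get pod metrics-server-6867b74b74-hkp9m \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'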
	I0930 21:10:27.816186   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:27.816230   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:27.854451   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:27.854485   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:27.905674   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:27.905709   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:27.918889   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:27.918917   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:27.989739   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
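	Every "describe nodes" attempt in this loop fails identically: kubectl is pointed at the local apiserver endpoint, but since crictl finds no kube-apiserver container, nothing is listening on localhost:8443 and the connection is refused. A quick way to confirm that from the node (these two commands are an illustration, not part of the test; the port comes from the error text above):
	
	    # No listener should be bound to the apiserver port while the container is missing.
	    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
	    # Reproduces the refusal kubectl reports.
	    curl -sk https://localhost:8443/healthz || echo "connection refused"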
	I0930 21:10:30.490514   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:30.502735   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:30.502810   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:30.535874   73900 cri.go:89] found id: ""
	I0930 21:10:30.535902   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.535914   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:30.535922   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:30.535989   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:30.570603   73900 cri.go:89] found id: ""
	I0930 21:10:30.570627   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.570634   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:30.570643   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:30.570689   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:30.605225   73900 cri.go:89] found id: ""
	I0930 21:10:30.605255   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.605266   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:30.605273   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:30.605333   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:30.640810   73900 cri.go:89] found id: ""
	I0930 21:10:30.640839   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.640849   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:30.640857   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:30.640914   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:30.673101   73900 cri.go:89] found id: ""
	I0930 21:10:30.673129   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.673137   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:30.673142   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:30.673189   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:30.704332   73900 cri.go:89] found id: ""
	I0930 21:10:30.704356   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.704366   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:30.704373   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:30.704440   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:30.738463   73900 cri.go:89] found id: ""
	I0930 21:10:30.738494   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.738506   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:30.738516   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:30.738579   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:30.772115   73900 cri.go:89] found id: ""
	I0930 21:10:30.772153   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.772164   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:30.772175   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:30.772193   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:30.850683   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:30.850707   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:30.850720   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:30.930674   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:30.930718   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:30.975781   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:30.975819   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:31.030566   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:31.030613   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:31.070216   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:33.568935   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:32.468224   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:34.968557   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:31.807408   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:33.807669   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:33.544354   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:33.557613   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:33.557692   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:33.594372   73900 cri.go:89] found id: ""
	I0930 21:10:33.594394   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.594401   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:33.594406   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:33.594455   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:33.632026   73900 cri.go:89] found id: ""
	I0930 21:10:33.632048   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.632056   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:33.632061   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:33.632113   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:33.666168   73900 cri.go:89] found id: ""
	I0930 21:10:33.666201   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.666213   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:33.666219   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:33.666269   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:33.697772   73900 cri.go:89] found id: ""
	I0930 21:10:33.697801   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.697810   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:33.697816   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:33.697864   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:33.732821   73900 cri.go:89] found id: ""
	I0930 21:10:33.732851   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.732862   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:33.732869   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:33.732952   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:33.770646   73900 cri.go:89] found id: ""
	I0930 21:10:33.770682   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.770693   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:33.770701   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:33.770756   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:33.804803   73900 cri.go:89] found id: ""
	I0930 21:10:33.804831   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.804842   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:33.804848   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:33.804921   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:33.838455   73900 cri.go:89] found id: ""
	I0930 21:10:33.838484   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.838495   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:33.838505   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:33.838523   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:33.879785   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:33.879812   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:33.934586   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:33.934623   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:33.948250   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:33.948293   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:34.023021   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:34.023054   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:34.023069   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:36.604173   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:36.616668   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:36.616735   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:36.650716   73900 cri.go:89] found id: ""
	I0930 21:10:36.650748   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.650757   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:36.650767   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:36.650833   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:36.685705   73900 cri.go:89] found id: ""
	I0930 21:10:36.685739   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.685751   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:36.685758   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:36.685819   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:36.719895   73900 cri.go:89] found id: ""
	I0930 21:10:36.719922   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.719932   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:36.719939   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:36.720006   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:36.753123   73900 cri.go:89] found id: ""
	I0930 21:10:36.753148   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.753159   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:36.753166   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:36.753231   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:36.790023   73900 cri.go:89] found id: ""
	I0930 21:10:36.790054   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.790066   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:36.790073   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:36.790135   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:36.825280   73900 cri.go:89] found id: ""
	I0930 21:10:36.825314   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.825324   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:36.825343   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:36.825411   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:36.859028   73900 cri.go:89] found id: ""
	I0930 21:10:36.859053   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.859060   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:36.859066   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:36.859125   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:36.894952   73900 cri.go:89] found id: ""
	I0930 21:10:36.894980   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.894988   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:36.894996   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:36.895010   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:36.968214   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:36.968241   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:36.968256   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:37.047866   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:37.047903   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:37.088671   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:37.088705   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:37.144014   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:37.144058   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:36.068920   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:38.069544   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:36.969475   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:39.469207   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:35.808654   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:38.306701   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:39.657874   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:39.671042   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:39.671100   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:39.706210   73900 cri.go:89] found id: ""
	I0930 21:10:39.706235   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.706243   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:39.706248   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:39.706295   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:39.743194   73900 cri.go:89] found id: ""
	I0930 21:10:39.743218   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.743226   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:39.743232   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:39.743280   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:39.780681   73900 cri.go:89] found id: ""
	I0930 21:10:39.780707   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.780715   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:39.780720   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:39.780774   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:39.815841   73900 cri.go:89] found id: ""
	I0930 21:10:39.815865   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.815874   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:39.815879   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:39.815933   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:39.849497   73900 cri.go:89] found id: ""
	I0930 21:10:39.849523   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.849534   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:39.849541   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:39.849603   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:39.883476   73900 cri.go:89] found id: ""
	I0930 21:10:39.883507   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.883519   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:39.883562   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:39.883633   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:39.918300   73900 cri.go:89] found id: ""
	I0930 21:10:39.918329   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.918338   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:39.918343   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:39.918392   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:39.955751   73900 cri.go:89] found id: ""
	I0930 21:10:39.955780   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.955788   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:39.955795   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:39.955807   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:40.010994   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:40.011035   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:40.025992   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:40.026022   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:40.097709   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:40.097731   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:40.097748   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:40.176790   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:40.176824   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:42.713838   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:42.729806   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:42.729885   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:40.070503   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:42.568444   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:41.968357   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:44.469223   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:40.308072   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:42.807489   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:42.765449   73900 cri.go:89] found id: ""
	I0930 21:10:42.765483   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.765491   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:42.765498   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:42.765555   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:42.802556   73900 cri.go:89] found id: ""
	I0930 21:10:42.802584   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.802604   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:42.802612   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:42.802693   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:42.836537   73900 cri.go:89] found id: ""
	I0930 21:10:42.836568   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.836585   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:42.836598   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:42.836662   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:42.870475   73900 cri.go:89] found id: ""
	I0930 21:10:42.870503   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.870511   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:42.870526   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:42.870589   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:42.907061   73900 cri.go:89] found id: ""
	I0930 21:10:42.907090   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.907098   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:42.907103   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:42.907153   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:42.941607   73900 cri.go:89] found id: ""
	I0930 21:10:42.941632   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.941640   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:42.941646   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:42.941701   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:42.977073   73900 cri.go:89] found id: ""
	I0930 21:10:42.977097   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.977105   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:42.977111   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:42.977159   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:43.010838   73900 cri.go:89] found id: ""
	I0930 21:10:43.010859   73900 logs.go:276] 0 containers: []
	W0930 21:10:43.010867   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:43.010875   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:43.010886   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:43.061264   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:43.061299   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:43.075917   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:43.075950   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:43.137088   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:43.137111   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:43.137126   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:43.219393   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:43.219440   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:45.761752   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:45.775864   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:45.775942   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:45.810693   73900 cri.go:89] found id: ""
	I0930 21:10:45.810724   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.810734   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:45.810740   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:45.810797   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:45.848360   73900 cri.go:89] found id: ""
	I0930 21:10:45.848399   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.848410   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:45.848418   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:45.848475   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:45.885504   73900 cri.go:89] found id: ""
	I0930 21:10:45.885550   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.885560   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:45.885565   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:45.885616   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:45.919747   73900 cri.go:89] found id: ""
	I0930 21:10:45.919776   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.919784   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:45.919789   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:45.919843   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:45.953787   73900 cri.go:89] found id: ""
	I0930 21:10:45.953820   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.953831   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:45.953839   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:45.953893   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:45.990145   73900 cri.go:89] found id: ""
	I0930 21:10:45.990174   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.990184   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:45.990192   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:45.990253   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:46.023359   73900 cri.go:89] found id: ""
	I0930 21:10:46.023383   73900 logs.go:276] 0 containers: []
	W0930 21:10:46.023391   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:46.023396   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:46.023447   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:46.057460   73900 cri.go:89] found id: ""
	I0930 21:10:46.057493   73900 logs.go:276] 0 containers: []
	W0930 21:10:46.057504   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:46.057514   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:46.057533   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:46.097082   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:46.097109   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:46.147921   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:46.147960   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:46.161204   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:46.161232   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:46.224308   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:46.224336   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:46.224351   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:44.568918   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:46.569353   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:48.569656   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:46.967674   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:48.967998   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:45.306917   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:47.806333   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:49.807846   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:48.805668   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:48.818569   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:48.818663   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:48.856783   73900 cri.go:89] found id: ""
	I0930 21:10:48.856815   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.856827   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:48.856834   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:48.856896   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:48.889185   73900 cri.go:89] found id: ""
	I0930 21:10:48.889217   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.889229   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:48.889236   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:48.889306   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:48.922013   73900 cri.go:89] found id: ""
	I0930 21:10:48.922041   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.922050   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:48.922055   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:48.922107   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:48.956818   73900 cri.go:89] found id: ""
	I0930 21:10:48.956848   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.956858   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:48.956866   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:48.956929   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:48.994942   73900 cri.go:89] found id: ""
	I0930 21:10:48.994975   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.994985   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:48.994991   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:48.995052   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:49.031448   73900 cri.go:89] found id: ""
	I0930 21:10:49.031479   73900 logs.go:276] 0 containers: []
	W0930 21:10:49.031491   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:49.031500   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:49.031583   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:49.066570   73900 cri.go:89] found id: ""
	I0930 21:10:49.066600   73900 logs.go:276] 0 containers: []
	W0930 21:10:49.066608   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:49.066613   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:49.066658   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:49.100952   73900 cri.go:89] found id: ""
	I0930 21:10:49.100981   73900 logs.go:276] 0 containers: []
	W0930 21:10:49.100992   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:49.101000   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:49.101010   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:49.176423   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:49.176458   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:49.212358   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:49.212387   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:49.263177   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:49.263227   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:49.275940   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:49.275969   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:49.346915   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:51.847761   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:51.860571   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:51.860646   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:51.894863   73900 cri.go:89] found id: ""
	I0930 21:10:51.894896   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.894906   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:51.894914   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:51.894978   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:51.927977   73900 cri.go:89] found id: ""
	I0930 21:10:51.928007   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.928018   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:51.928025   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:51.928083   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:51.962894   73900 cri.go:89] found id: ""
	I0930 21:10:51.962924   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.962933   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:51.962940   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:51.962999   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:51.998453   73900 cri.go:89] found id: ""
	I0930 21:10:51.998482   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.998493   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:51.998500   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:51.998562   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:52.033039   73900 cri.go:89] found id: ""
	I0930 21:10:52.033066   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.033075   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:52.033080   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:52.033139   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:52.067222   73900 cri.go:89] found id: ""
	I0930 21:10:52.067254   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.067267   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:52.067274   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:52.067341   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:52.102414   73900 cri.go:89] found id: ""
	I0930 21:10:52.102439   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.102448   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:52.102453   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:52.102498   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:52.135175   73900 cri.go:89] found id: ""
	I0930 21:10:52.135204   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.135214   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:52.135225   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:52.135239   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:52.185736   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:52.185779   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:52.198756   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:52.198792   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:52.264816   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:52.264847   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:52.264859   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:52.347189   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:52.347229   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:50.569765   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:53.068745   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:50.968885   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:52.970855   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:52.307245   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:54.308516   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:54.887502   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:54.900067   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:54.900153   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:54.939214   73900 cri.go:89] found id: ""
	I0930 21:10:54.939241   73900 logs.go:276] 0 containers: []
	W0930 21:10:54.939249   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:54.939259   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:54.939313   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:54.973451   73900 cri.go:89] found id: ""
	I0930 21:10:54.973475   73900 logs.go:276] 0 containers: []
	W0930 21:10:54.973483   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:54.973488   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:54.973541   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:55.007815   73900 cri.go:89] found id: ""
	I0930 21:10:55.007841   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.007850   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:55.007855   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:55.007914   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:55.040861   73900 cri.go:89] found id: ""
	I0930 21:10:55.040891   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.040899   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:55.040905   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:55.040957   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:55.076053   73900 cri.go:89] found id: ""
	I0930 21:10:55.076086   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.076098   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:55.076111   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:55.076172   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:55.108768   73900 cri.go:89] found id: ""
	I0930 21:10:55.108797   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.108807   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:55.108814   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:55.108879   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:55.155283   73900 cri.go:89] found id: ""
	I0930 21:10:55.155316   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.155331   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:55.155338   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:55.155398   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:55.189370   73900 cri.go:89] found id: ""
	I0930 21:10:55.189399   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.189408   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:55.189416   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:55.189432   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:55.243067   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:55.243101   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:55.257021   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:55.257051   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:55.329381   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:55.329408   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:55.329423   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:55.405691   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:55.405762   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:55.069901   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:57.568914   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:55.468489   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:57.977733   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:56.806381   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:58.806880   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:57.957380   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:57.971160   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:57.971245   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:58.004401   73900 cri.go:89] found id: ""
	I0930 21:10:58.004446   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.004457   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:58.004465   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:58.004524   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:58.038954   73900 cri.go:89] found id: ""
	I0930 21:10:58.038978   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.038986   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:58.038991   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:58.039036   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:58.072801   73900 cri.go:89] found id: ""
	I0930 21:10:58.072830   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.072842   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:58.072849   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:58.072909   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:58.104908   73900 cri.go:89] found id: ""
	I0930 21:10:58.104936   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.104946   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:58.104953   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:58.105014   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:58.139693   73900 cri.go:89] found id: ""
	I0930 21:10:58.139725   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.139735   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:58.139741   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:58.139795   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:58.174149   73900 cri.go:89] found id: ""
	I0930 21:10:58.174180   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.174192   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:58.174199   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:58.174275   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:58.206067   73900 cri.go:89] found id: ""
	I0930 21:10:58.206094   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.206105   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:58.206112   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:58.206167   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:58.240613   73900 cri.go:89] found id: ""
	I0930 21:10:58.240645   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.240653   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:58.240661   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:58.240674   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:58.306061   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:58.306086   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:58.306100   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:58.386030   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:58.386073   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:58.425526   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:58.425562   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:58.483364   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:58.483409   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:00.998086   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:01.011934   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:01.012015   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:01.047923   73900 cri.go:89] found id: ""
	I0930 21:11:01.047951   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.047960   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:01.047966   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:01.048024   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:01.082126   73900 cri.go:89] found id: ""
	I0930 21:11:01.082159   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.082170   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:01.082176   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:01.082224   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:01.117746   73900 cri.go:89] found id: ""
	I0930 21:11:01.117775   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.117787   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:01.117794   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:01.117853   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:01.153034   73900 cri.go:89] found id: ""
	I0930 21:11:01.153059   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.153067   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:01.153072   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:01.153128   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:01.188102   73900 cri.go:89] found id: ""
	I0930 21:11:01.188125   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.188133   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:01.188139   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:01.188193   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:01.222120   73900 cri.go:89] found id: ""
	I0930 21:11:01.222147   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.222155   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:01.222161   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:01.222215   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:01.258899   73900 cri.go:89] found id: ""
	I0930 21:11:01.258929   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.258941   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:01.258949   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:01.259008   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:01.295473   73900 cri.go:89] found id: ""
	I0930 21:11:01.295504   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.295512   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:01.295521   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:01.295551   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:01.349134   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:01.349181   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:01.363113   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:01.363147   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:01.436589   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:01.436609   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:01.436622   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:01.516384   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:01.516420   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:00.069406   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:02.568203   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:00.468104   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:02.968911   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:00.807318   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:03.307184   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:04.075114   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:04.089300   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:04.089375   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:04.124385   73900 cri.go:89] found id: ""
	I0930 21:11:04.124411   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.124419   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:04.124425   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:04.124491   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:04.158326   73900 cri.go:89] found id: ""
	I0930 21:11:04.158359   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.158367   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:04.158372   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:04.158419   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:04.193477   73900 cri.go:89] found id: ""
	I0930 21:11:04.193507   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.193516   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:04.193521   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:04.193577   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:04.231697   73900 cri.go:89] found id: ""
	I0930 21:11:04.231723   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.231731   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:04.231737   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:04.231805   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:04.265879   73900 cri.go:89] found id: ""
	I0930 21:11:04.265903   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.265910   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:04.265915   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:04.265960   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:04.301382   73900 cri.go:89] found id: ""
	I0930 21:11:04.301421   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.301432   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:04.301440   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:04.301505   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:04.337496   73900 cri.go:89] found id: ""
	I0930 21:11:04.337521   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.337529   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:04.337534   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:04.337584   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:04.372631   73900 cri.go:89] found id: ""
	I0930 21:11:04.372665   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.372677   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:04.372700   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:04.372715   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:04.385279   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:04.385311   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:04.456700   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:04.456721   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:04.456732   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:04.537892   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:04.537933   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:04.574919   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:04.574947   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:07.128733   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:07.142625   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:07.142687   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:07.177450   73900 cri.go:89] found id: ""
	I0930 21:11:07.177475   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.177483   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:07.177488   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:07.177536   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:07.210158   73900 cri.go:89] found id: ""
	I0930 21:11:07.210184   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.210192   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:07.210197   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:07.210256   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:07.242623   73900 cri.go:89] found id: ""
	I0930 21:11:07.242648   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.242656   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:07.242661   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:07.242705   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:07.277779   73900 cri.go:89] found id: ""
	I0930 21:11:07.277810   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.277821   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:07.277827   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:07.277881   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:07.316232   73900 cri.go:89] found id: ""
	I0930 21:11:07.316257   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.316263   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:07.316269   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:07.316326   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:07.360277   73900 cri.go:89] found id: ""
	I0930 21:11:07.360311   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.360322   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:07.360329   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:07.360391   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:07.412146   73900 cri.go:89] found id: ""
	I0930 21:11:07.412171   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.412181   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:07.412187   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:07.412247   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:07.447179   73900 cri.go:89] found id: ""
	I0930 21:11:07.447209   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.447217   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:07.447225   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:07.447235   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:07.496304   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:07.496340   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:07.510332   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:07.510373   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:07.581335   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:07.581375   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:07.581393   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:07.664522   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:07.664558   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:04.568787   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:07.069201   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:09.070583   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:05.468251   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:07.970913   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:05.308084   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:07.807712   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:10.201145   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:10.213605   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:10.213663   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:10.247875   73900 cri.go:89] found id: ""
	I0930 21:11:10.247904   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.247913   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:10.247918   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:10.247966   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:10.280855   73900 cri.go:89] found id: ""
	I0930 21:11:10.280889   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.280900   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:10.280907   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:10.280967   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:10.315638   73900 cri.go:89] found id: ""
	I0930 21:11:10.315661   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.315669   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:10.315675   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:10.315722   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:10.357059   73900 cri.go:89] found id: ""
	I0930 21:11:10.357086   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.357094   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:10.357100   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:10.357154   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:10.389969   73900 cri.go:89] found id: ""
	I0930 21:11:10.389997   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.390004   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:10.390009   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:10.390060   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:10.424424   73900 cri.go:89] found id: ""
	I0930 21:11:10.424454   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.424463   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:10.424469   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:10.424533   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:10.457608   73900 cri.go:89] found id: ""
	I0930 21:11:10.457638   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.457650   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:10.457657   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:10.457712   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:10.490215   73900 cri.go:89] found id: ""
	I0930 21:11:10.490244   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.490253   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:10.490263   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:10.490278   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:10.554787   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:10.554814   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:10.554829   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:10.632428   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:10.632464   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:10.671018   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:10.671054   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:10.721187   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:10.721228   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:11.568643   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:13.568765   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:10.469296   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:12.968274   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:10.307487   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:12.307960   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:14.808087   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:13.234687   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:13.250680   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:13.250778   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:13.312468   73900 cri.go:89] found id: ""
	I0930 21:11:13.312499   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.312509   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:13.312516   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:13.312578   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:13.367051   73900 cri.go:89] found id: ""
	I0930 21:11:13.367073   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.367084   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:13.367091   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:13.367149   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:13.403019   73900 cri.go:89] found id: ""
	I0930 21:11:13.403055   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.403066   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:13.403074   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:13.403135   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:13.436942   73900 cri.go:89] found id: ""
	I0930 21:11:13.436967   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.436975   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:13.436981   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:13.437047   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:13.470491   73900 cri.go:89] found id: ""
	I0930 21:11:13.470515   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.470523   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:13.470528   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:13.470619   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:13.504078   73900 cri.go:89] found id: ""
	I0930 21:11:13.504112   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.504121   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:13.504127   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:13.504201   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:13.536245   73900 cri.go:89] found id: ""
	I0930 21:11:13.536271   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.536292   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:13.536297   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:13.536357   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:13.570794   73900 cri.go:89] found id: ""
	I0930 21:11:13.570817   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.570827   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:13.570836   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:13.570850   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:13.647919   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:13.647941   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:13.647956   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:13.726113   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:13.726150   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:13.767916   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:13.767942   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:13.826362   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:13.826402   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:16.341252   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:16.354259   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:16.354344   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:16.388627   73900 cri.go:89] found id: ""
	I0930 21:11:16.388650   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.388658   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:16.388663   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:16.388714   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:16.424848   73900 cri.go:89] found id: ""
	I0930 21:11:16.424871   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.424878   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:16.424883   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:16.424941   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:16.460604   73900 cri.go:89] found id: ""
	I0930 21:11:16.460626   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.460635   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:16.460640   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:16.460688   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:16.495908   73900 cri.go:89] found id: ""
	I0930 21:11:16.495932   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.495940   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:16.495946   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:16.496000   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:16.531758   73900 cri.go:89] found id: ""
	I0930 21:11:16.531782   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.531790   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:16.531796   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:16.531853   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:16.566756   73900 cri.go:89] found id: ""
	I0930 21:11:16.566782   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.566792   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:16.566799   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:16.566864   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:16.601978   73900 cri.go:89] found id: ""
	I0930 21:11:16.602005   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.602012   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:16.602022   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:16.602081   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:16.636009   73900 cri.go:89] found id: ""
	I0930 21:11:16.636044   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.636056   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:16.636066   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:16.636079   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:16.688750   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:16.688786   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:16.702364   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:16.702404   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:16.767119   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:16.767175   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:16.767188   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:16.842052   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:16.842095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:15.571440   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:18.068441   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:15.469030   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:17.970779   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:17.307424   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:19.807193   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:19.380570   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:19.394687   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:19.394816   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:19.427087   73900 cri.go:89] found id: ""
	I0930 21:11:19.427116   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.427124   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:19.427129   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:19.427178   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:19.461074   73900 cri.go:89] found id: ""
	I0930 21:11:19.461098   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.461108   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:19.461122   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:19.461183   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:19.494850   73900 cri.go:89] found id: ""
	I0930 21:11:19.494872   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.494880   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:19.494885   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:19.494943   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:19.533448   73900 cri.go:89] found id: ""
	I0930 21:11:19.533480   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.533493   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:19.533500   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:19.533562   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:19.569250   73900 cri.go:89] found id: ""
	I0930 21:11:19.569280   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.569291   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:19.569298   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:19.569383   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:19.603182   73900 cri.go:89] found id: ""
	I0930 21:11:19.603206   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.603213   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:19.603219   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:19.603268   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:19.637411   73900 cri.go:89] found id: ""
	I0930 21:11:19.637433   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.637441   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:19.637447   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:19.637500   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:19.672789   73900 cri.go:89] found id: ""
	I0930 21:11:19.672821   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.672831   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:19.672841   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:19.672854   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:19.755002   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:19.755039   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:19.796499   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:19.796536   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:19.847235   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:19.847272   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:19.861007   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:19.861032   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:19.931214   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:22.431506   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:22.446129   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:22.446199   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:22.484093   73900 cri.go:89] found id: ""
	I0930 21:11:22.484119   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.484126   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:22.484132   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:22.484183   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:22.516949   73900 cri.go:89] found id: ""
	I0930 21:11:22.516986   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.516994   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:22.517001   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:22.517056   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:22.550848   73900 cri.go:89] found id: ""
	I0930 21:11:22.550883   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.550898   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:22.550906   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:22.550966   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:22.586459   73900 cri.go:89] found id: ""
	I0930 21:11:22.586490   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.586498   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:22.586505   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:22.586627   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:22.620538   73900 cri.go:89] found id: ""
	I0930 21:11:22.620566   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.620578   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:22.620586   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:22.620651   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:22.658256   73900 cri.go:89] found id: ""
	I0930 21:11:22.658279   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.658287   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:22.658292   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:22.658352   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:22.690316   73900 cri.go:89] found id: ""
	I0930 21:11:22.690349   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.690365   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:22.690371   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:22.690431   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:22.724234   73900 cri.go:89] found id: ""
	I0930 21:11:22.724264   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.724275   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:22.724285   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:22.724299   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:20.570198   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:23.072974   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:20.468122   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:22.968686   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:22.307398   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:24.806972   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:22.777460   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:22.777503   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:22.790850   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:22.790879   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:22.866058   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:22.866079   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:22.866095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:22.947447   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:22.947488   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:25.486733   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:25.499906   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:25.499976   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:25.533819   73900 cri.go:89] found id: ""
	I0930 21:11:25.533842   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.533850   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:25.533857   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:25.533906   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:25.568037   73900 cri.go:89] found id: ""
	I0930 21:11:25.568059   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.568066   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:25.568071   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:25.568129   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:25.601784   73900 cri.go:89] found id: ""
	I0930 21:11:25.601811   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.601819   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:25.601824   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:25.601876   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:25.638048   73900 cri.go:89] found id: ""
	I0930 21:11:25.638070   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.638078   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:25.638084   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:25.638140   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:25.669946   73900 cri.go:89] found id: ""
	I0930 21:11:25.669968   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.669976   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:25.669981   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:25.670028   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:25.701928   73900 cri.go:89] found id: ""
	I0930 21:11:25.701953   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.701961   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:25.701967   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:25.702025   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:25.744295   73900 cri.go:89] found id: ""
	I0930 21:11:25.744327   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.744335   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:25.744341   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:25.744398   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:25.780175   73900 cri.go:89] found id: ""
	I0930 21:11:25.780205   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.780213   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:25.780221   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:25.780232   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:25.828774   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:25.828812   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:25.842624   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:25.842649   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:25.916408   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:25.916451   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:25.916469   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:25.997896   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:25.997932   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:25.570148   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:28.068628   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:25.467356   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:27.467782   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:29.467936   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:27.306939   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:29.807156   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:28.540994   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:28.553841   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:28.553904   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:28.588718   73900 cri.go:89] found id: ""
	I0930 21:11:28.588745   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.588754   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:28.588763   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:28.588809   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:28.636210   73900 cri.go:89] found id: ""
	I0930 21:11:28.636237   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.636245   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:28.636250   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:28.636312   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:28.668714   73900 cri.go:89] found id: ""
	I0930 21:11:28.668743   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.668751   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:28.668757   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:28.668804   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:28.700413   73900 cri.go:89] found id: ""
	I0930 21:11:28.700449   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.700462   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:28.700469   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:28.700522   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:28.733409   73900 cri.go:89] found id: ""
	I0930 21:11:28.733433   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.733441   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:28.733446   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:28.733494   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:28.766917   73900 cri.go:89] found id: ""
	I0930 21:11:28.766957   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.766970   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:28.766979   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:28.767046   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:28.801759   73900 cri.go:89] found id: ""
	I0930 21:11:28.801788   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.801798   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:28.801805   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:28.801851   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:28.840724   73900 cri.go:89] found id: ""
	I0930 21:11:28.840761   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.840770   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:28.840790   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:28.840805   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:28.854426   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:28.854465   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:28.926650   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:28.926675   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:28.926690   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:29.005513   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:29.005569   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:29.047077   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:29.047102   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:31.603193   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:31.615563   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:31.615631   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:31.647656   73900 cri.go:89] found id: ""
	I0930 21:11:31.647685   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.647693   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:31.647699   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:31.647748   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:31.680004   73900 cri.go:89] found id: ""
	I0930 21:11:31.680037   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.680048   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:31.680056   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:31.680120   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:31.712562   73900 cri.go:89] found id: ""
	I0930 21:11:31.712588   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.712596   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:31.712602   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:31.712650   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:31.747692   73900 cri.go:89] found id: ""
	I0930 21:11:31.747724   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.747732   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:31.747738   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:31.747803   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:31.781441   73900 cri.go:89] found id: ""
	I0930 21:11:31.781464   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.781472   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:31.781478   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:31.781532   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:31.822227   73900 cri.go:89] found id: ""
	I0930 21:11:31.822252   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.822259   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:31.822265   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:31.822322   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:31.856531   73900 cri.go:89] found id: ""
	I0930 21:11:31.856555   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.856563   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:31.856568   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:31.856631   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:31.894562   73900 cri.go:89] found id: ""
	I0930 21:11:31.894585   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.894593   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:31.894602   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:31.894618   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:31.946233   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:31.946271   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:31.960713   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:31.960744   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:32.036479   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:32.036497   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:32.036509   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:32.111442   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:32.111477   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:30.068975   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:32.069794   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:31.468374   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:33.468986   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:31.809169   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:34.307372   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:34.651545   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:34.664058   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:34.664121   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:34.697506   73900 cri.go:89] found id: ""
	I0930 21:11:34.697530   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.697539   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:34.697545   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:34.697599   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:34.730297   73900 cri.go:89] found id: ""
	I0930 21:11:34.730326   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.730334   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:34.730339   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:34.730390   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:34.762251   73900 cri.go:89] found id: ""
	I0930 21:11:34.762278   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.762286   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:34.762291   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:34.762358   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:34.803028   73900 cri.go:89] found id: ""
	I0930 21:11:34.803058   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.803068   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:34.803074   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:34.803122   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:34.840063   73900 cri.go:89] found id: ""
	I0930 21:11:34.840097   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.840110   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:34.840118   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:34.840192   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:34.878641   73900 cri.go:89] found id: ""
	I0930 21:11:34.878675   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.878686   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:34.878693   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:34.878745   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:34.910799   73900 cri.go:89] found id: ""
	I0930 21:11:34.910823   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.910830   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:34.910837   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:34.910899   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:34.947748   73900 cri.go:89] found id: ""
	I0930 21:11:34.947782   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.947795   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:34.947806   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:34.947821   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:35.026490   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:35.026514   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:35.026529   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:35.115504   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:35.115559   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:35.158629   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:35.158659   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:35.211011   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:35.211052   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:37.726260   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:37.739137   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:37.739222   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:34.568166   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:36.569720   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:39.069371   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:35.968574   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:38.467872   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:36.807057   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:38.807376   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:37.779980   73900 cri.go:89] found id: ""
	I0930 21:11:37.780009   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.780018   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:37.780024   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:37.780076   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:37.813936   73900 cri.go:89] found id: ""
	I0930 21:11:37.813961   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.813969   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:37.813975   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:37.814021   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:37.851150   73900 cri.go:89] found id: ""
	I0930 21:11:37.851176   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.851186   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:37.851193   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:37.851256   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:37.891855   73900 cri.go:89] found id: ""
	I0930 21:11:37.891881   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.891889   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:37.891894   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:37.891943   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:37.929234   73900 cri.go:89] found id: ""
	I0930 21:11:37.929269   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.929281   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:37.929288   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:37.929359   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:37.962350   73900 cri.go:89] found id: ""
	I0930 21:11:37.962378   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.962386   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:37.962391   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:37.962441   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:37.996727   73900 cri.go:89] found id: ""
	I0930 21:11:37.996752   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.996760   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:37.996765   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:37.996819   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:38.029959   73900 cri.go:89] found id: ""
	I0930 21:11:38.029991   73900 logs.go:276] 0 containers: []
	W0930 21:11:38.029999   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:38.030008   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:38.030019   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:38.079836   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:38.079875   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:38.093208   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:38.093236   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:38.168839   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:38.168862   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:38.168873   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:38.244747   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:38.244783   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:40.788841   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:40.802419   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:40.802491   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:40.837138   73900 cri.go:89] found id: ""
	I0930 21:11:40.837175   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.837186   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:40.837193   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:40.837255   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:40.870947   73900 cri.go:89] found id: ""
	I0930 21:11:40.870977   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.870987   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:40.870993   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:40.871040   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:40.905004   73900 cri.go:89] found id: ""
	I0930 21:11:40.905033   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.905046   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:40.905053   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:40.905104   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:40.936909   73900 cri.go:89] found id: ""
	I0930 21:11:40.936937   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.936945   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:40.936952   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:40.937015   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:40.972601   73900 cri.go:89] found id: ""
	I0930 21:11:40.972630   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.972641   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:40.972646   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:40.972704   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:41.007539   73900 cri.go:89] found id: ""
	I0930 21:11:41.007583   73900 logs.go:276] 0 containers: []
	W0930 21:11:41.007594   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:41.007602   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:41.007661   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:41.042049   73900 cri.go:89] found id: ""
	I0930 21:11:41.042075   73900 logs.go:276] 0 containers: []
	W0930 21:11:41.042084   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:41.042091   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:41.042153   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:41.075313   73900 cri.go:89] found id: ""
	I0930 21:11:41.075398   73900 logs.go:276] 0 containers: []
	W0930 21:11:41.075414   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:41.075424   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:41.075440   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:41.128683   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:41.128726   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:41.142533   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:41.142560   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:41.210149   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:41.210176   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:41.210191   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:41.286547   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:41.286590   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:41.070042   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:43.570819   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:40.969912   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:43.468434   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:40.808294   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:43.307628   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:43.828902   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:43.842047   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:43.842127   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:43.876147   73900 cri.go:89] found id: ""
	I0930 21:11:43.876177   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.876187   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:43.876194   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:43.876287   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:43.916351   73900 cri.go:89] found id: ""
	I0930 21:11:43.916383   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.916394   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:43.916404   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:43.916457   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:43.948853   73900 cri.go:89] found id: ""
	I0930 21:11:43.948883   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.948894   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:43.948900   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:43.948967   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:43.983525   73900 cri.go:89] found id: ""
	I0930 21:11:43.983577   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.983589   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:43.983597   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:43.983656   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:44.021560   73900 cri.go:89] found id: ""
	I0930 21:11:44.021594   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.021606   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:44.021614   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:44.021684   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:44.057307   73900 cri.go:89] found id: ""
	I0930 21:11:44.057342   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.057353   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:44.057361   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:44.057418   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:44.091120   73900 cri.go:89] found id: ""
	I0930 21:11:44.091145   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.091155   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:44.091162   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:44.091223   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:44.125781   73900 cri.go:89] found id: ""
	I0930 21:11:44.125808   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.125817   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:44.125827   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:44.125842   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:44.138699   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:44.138726   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:44.208976   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:44.209009   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:44.209026   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:44.285552   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:44.285593   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:44.323412   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:44.323449   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:46.875210   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:46.888532   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:46.888596   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:46.921260   73900 cri.go:89] found id: ""
	I0930 21:11:46.921285   73900 logs.go:276] 0 containers: []
	W0930 21:11:46.921293   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:46.921299   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:46.921357   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:46.954645   73900 cri.go:89] found id: ""
	I0930 21:11:46.954675   73900 logs.go:276] 0 containers: []
	W0930 21:11:46.954683   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:46.954688   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:46.954749   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:46.988424   73900 cri.go:89] found id: ""
	I0930 21:11:46.988457   73900 logs.go:276] 0 containers: []
	W0930 21:11:46.988468   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:46.988475   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:46.988535   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:47.022635   73900 cri.go:89] found id: ""
	I0930 21:11:47.022664   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.022675   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:47.022682   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:47.022744   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:47.056497   73900 cri.go:89] found id: ""
	I0930 21:11:47.056523   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.056530   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:47.056536   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:47.056595   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:47.094983   73900 cri.go:89] found id: ""
	I0930 21:11:47.095011   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.095021   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:47.095028   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:47.095097   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:47.147567   73900 cri.go:89] found id: ""
	I0930 21:11:47.147595   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.147606   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:47.147613   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:47.147692   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:47.184878   73900 cri.go:89] found id: ""
	I0930 21:11:47.184908   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.184919   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:47.184930   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:47.184943   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:47.258581   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:47.258615   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:47.303068   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:47.303100   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:47.358749   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:47.358789   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:47.372492   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:47.372531   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:47.443984   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:46.069421   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:48.569013   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:45.968422   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:47.968876   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:45.808341   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:48.306627   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:49.944644   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:49.958045   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:49.958124   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:49.993053   73900 cri.go:89] found id: ""
	I0930 21:11:49.993088   73900 logs.go:276] 0 containers: []
	W0930 21:11:49.993100   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:49.993107   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:49.993168   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:50.026171   73900 cri.go:89] found id: ""
	I0930 21:11:50.026197   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.026205   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:50.026210   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:50.026269   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:50.060462   73900 cri.go:89] found id: ""
	I0930 21:11:50.060492   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.060502   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:50.060509   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:50.060567   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:50.095385   73900 cri.go:89] found id: ""
	I0930 21:11:50.095414   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.095425   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:50.095432   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:50.095507   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:50.127275   73900 cri.go:89] found id: ""
	I0930 21:11:50.127300   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.127308   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:50.127318   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:50.127378   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:50.159810   73900 cri.go:89] found id: ""
	I0930 21:11:50.159836   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.159845   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:50.159850   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:50.159906   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:50.191651   73900 cri.go:89] found id: ""
	I0930 21:11:50.191684   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.191695   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:50.191702   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:50.191774   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:50.225772   73900 cri.go:89] found id: ""
	I0930 21:11:50.225799   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.225809   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:50.225819   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:50.225837   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:50.310189   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:50.310223   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:50.348934   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:50.348965   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:50.400666   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:50.400703   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:50.415810   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:50.415843   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:50.483773   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:51.069928   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:53.070065   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:50.469516   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:52.968367   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:54.968624   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:50.307903   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:52.807610   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:52.984701   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:52.997669   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:52.997745   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:53.034012   73900 cri.go:89] found id: ""
	I0930 21:11:53.034044   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.034055   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:53.034063   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:53.034121   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:53.068192   73900 cri.go:89] found id: ""
	I0930 21:11:53.068215   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.068222   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:53.068228   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:53.068285   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:53.104683   73900 cri.go:89] found id: ""
	I0930 21:11:53.104710   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.104719   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:53.104724   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:53.104778   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:53.138713   73900 cri.go:89] found id: ""
	I0930 21:11:53.138745   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.138753   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:53.138759   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:53.138814   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:53.173955   73900 cri.go:89] found id: ""
	I0930 21:11:53.173982   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.173994   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:53.174001   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:53.174060   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:53.205942   73900 cri.go:89] found id: ""
	I0930 21:11:53.205970   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.205980   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:53.205987   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:53.206052   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:53.241739   73900 cri.go:89] found id: ""
	I0930 21:11:53.241767   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.241776   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:53.241782   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:53.241832   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:53.275328   73900 cri.go:89] found id: ""
	I0930 21:11:53.275363   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.275372   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:53.275381   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:53.275397   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:53.313732   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:53.313761   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:53.364974   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:53.365011   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:53.377970   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:53.377999   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:53.445341   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:53.445370   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:53.445388   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:56.025958   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:56.038367   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:56.038434   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:56.074721   73900 cri.go:89] found id: ""
	I0930 21:11:56.074756   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.074767   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:56.074781   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:56.074846   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:56.111491   73900 cri.go:89] found id: ""
	I0930 21:11:56.111525   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.111550   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:56.111572   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:56.111626   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:56.145660   73900 cri.go:89] found id: ""
	I0930 21:11:56.145690   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.145701   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:56.145708   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:56.145769   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:56.180865   73900 cri.go:89] found id: ""
	I0930 21:11:56.180891   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.180901   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:56.180908   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:56.180971   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:56.213681   73900 cri.go:89] found id: ""
	I0930 21:11:56.213707   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.213716   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:56.213721   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:56.213772   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:56.246683   73900 cri.go:89] found id: ""
	I0930 21:11:56.246711   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.246719   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:56.246724   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:56.246774   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:56.279651   73900 cri.go:89] found id: ""
	I0930 21:11:56.279679   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.279687   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:56.279692   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:56.279746   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:56.316701   73900 cri.go:89] found id: ""
	I0930 21:11:56.316727   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.316735   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:56.316743   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:56.316753   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:56.329879   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:56.329905   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:56.399919   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:56.399949   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:56.399964   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:56.480200   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:56.480237   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:56.517755   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:56.517782   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:55.568782   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:58.068718   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:57.468492   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:59.968123   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:55.307809   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:57.308095   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:59.807355   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:59.070677   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:59.085884   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:59.085956   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:59.119580   73900 cri.go:89] found id: ""
	I0930 21:11:59.119606   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.119615   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:59.119621   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:59.119667   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:59.152087   73900 cri.go:89] found id: ""
	I0930 21:11:59.152111   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.152120   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:59.152127   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:59.152172   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:59.186177   73900 cri.go:89] found id: ""
	I0930 21:11:59.186205   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.186213   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:59.186220   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:59.186276   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:59.218800   73900 cri.go:89] found id: ""
	I0930 21:11:59.218821   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.218829   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:59.218835   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:59.218893   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:59.254335   73900 cri.go:89] found id: ""
	I0930 21:11:59.254361   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.254372   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:59.254378   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:59.254432   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:59.292406   73900 cri.go:89] found id: ""
	I0930 21:11:59.292441   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.292453   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:59.292460   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:59.292522   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:59.333352   73900 cri.go:89] found id: ""
	I0930 21:11:59.333388   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.333399   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:59.333406   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:59.333481   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:59.377031   73900 cri.go:89] found id: ""
	I0930 21:11:59.377056   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.377064   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:59.377072   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:59.377084   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:59.392626   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:59.392655   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:59.473714   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:59.473741   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:59.473754   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:59.548895   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:59.548931   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:59.589007   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:59.589039   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:02.139243   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:02.152335   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:02.152415   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:02.186942   73900 cri.go:89] found id: ""
	I0930 21:12:02.186980   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.186991   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:02.186999   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:02.187061   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:02.219738   73900 cri.go:89] found id: ""
	I0930 21:12:02.219759   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.219768   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:02.219773   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:02.219820   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:02.253667   73900 cri.go:89] found id: ""
	I0930 21:12:02.253698   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.253707   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:02.253712   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:02.253760   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:02.290078   73900 cri.go:89] found id: ""
	I0930 21:12:02.290105   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.290115   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:02.290122   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:02.290182   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:02.326408   73900 cri.go:89] found id: ""
	I0930 21:12:02.326436   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.326448   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:02.326455   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:02.326509   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:02.360608   73900 cri.go:89] found id: ""
	I0930 21:12:02.360641   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.360649   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:02.360655   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:02.360714   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:02.396140   73900 cri.go:89] found id: ""
	I0930 21:12:02.396166   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.396176   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:02.396182   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:02.396236   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:02.429905   73900 cri.go:89] found id: ""
	I0930 21:12:02.429947   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.429958   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:02.429968   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:02.429986   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:02.506600   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:02.506645   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:02.549325   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:02.549354   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:02.603614   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:02.603659   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:02.618832   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:02.618859   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:02.692491   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
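	(Editor's note, illustrative only.) The block above (PID 73900, the old-k8s-version cluster) keeps probing for every control-plane component with crictl and finds nothing, which is why the subsequent "describe nodes" calls fail. A minimal Go sketch of that probe loop is below; it assumes passwordless sudo and crictl on the node, uses the same component names minikube checks here, and is not minikube's actual cri.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers shells out to `crictl ps -a --quiet --name=<component>`,
// the same command the log lines above run, and returns any IDs found.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			// This is the state the failing run above is stuck in for every component.
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s containers: %v\n", c, ids)
	}
}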
	I0930 21:12:00.070569   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:02.569436   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:01.968240   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:04.468583   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:02.306973   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:04.308182   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
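	(Editor's note, illustrative only.) The interleaved pod_ready lines (PIDs 73256, 73375, 73707) are three clusters polling the same condition: whether their metrics-server pod has reached Ready. A rough way to reproduce that check by hand is sketched below, assuming kubectl access to the cluster; the pod name is copied from the log and differs per run, and minikube's own pod_ready.go uses client-go rather than shelling out.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady asks kubectl for the pod's Ready condition via jsonpath,
// mirroring the condition the pod_ready.go lines above are waiting on.
func podReady(namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pod", pod, "-n", namespace,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	// Pod name taken from the log above; it will differ on other runs.
	const ns, pod = "kube-system", "metrics-server-6867b74b74-hkp9m"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		ready, err := podReady(ns, pod)
		if err != nil {
			fmt.Println("check failed:", err)
		} else if ready {
			fmt.Println("pod is Ready")
			return
		} else {
			fmt.Printf("pod %q has status \"Ready\":\"False\"\n", pod)
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}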
	I0930 21:12:05.193131   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:05.206133   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:05.206192   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:05.238403   73900 cri.go:89] found id: ""
	I0930 21:12:05.238431   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.238439   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:05.238447   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:05.238523   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:05.271261   73900 cri.go:89] found id: ""
	I0930 21:12:05.271290   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.271303   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:05.271310   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:05.271378   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:05.307718   73900 cri.go:89] found id: ""
	I0930 21:12:05.307749   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.307760   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:05.307767   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:05.307832   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:05.341336   73900 cri.go:89] found id: ""
	I0930 21:12:05.341379   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.341390   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:05.341398   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:05.341461   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:05.374998   73900 cri.go:89] found id: ""
	I0930 21:12:05.375024   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.375032   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:05.375037   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:05.375085   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:05.410133   73900 cri.go:89] found id: ""
	I0930 21:12:05.410163   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.410174   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:05.410182   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:05.410248   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:05.446197   73900 cri.go:89] found id: ""
	I0930 21:12:05.446227   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.446238   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:05.446246   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:05.446305   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:05.480638   73900 cri.go:89] found id: ""
	I0930 21:12:05.480667   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.480683   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:05.480691   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:05.480702   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:05.532473   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:05.532512   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:05.547068   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:05.547096   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:05.621444   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:05.621472   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:05.621487   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:05.707712   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:05.707767   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:05.068363   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:07.069531   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:06.969695   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:06.969727   73375 pod_ready.go:82] duration metric: took 4m0.008001407s for pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace to be "Ready" ...
	E0930 21:12:06.969736   73375 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0930 21:12:06.969743   73375 pod_ready.go:39] duration metric: took 4m4.053054405s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:12:06.969757   73375 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:12:06.969781   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:06.969835   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:07.024708   73375 cri.go:89] found id: "249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:07.024730   73375 cri.go:89] found id: ""
	I0930 21:12:07.024737   73375 logs.go:276] 1 containers: [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122]
	I0930 21:12:07.024805   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.029375   73375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:07.029439   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:07.063656   73375 cri.go:89] found id: "e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:07.063684   73375 cri.go:89] found id: ""
	I0930 21:12:07.063695   73375 logs.go:276] 1 containers: [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c]
	I0930 21:12:07.063754   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.068071   73375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:07.068126   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:07.102636   73375 cri.go:89] found id: "d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:07.102665   73375 cri.go:89] found id: ""
	I0930 21:12:07.102675   73375 logs.go:276] 1 containers: [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7]
	I0930 21:12:07.102733   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.106711   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:07.106791   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:07.142676   73375 cri.go:89] found id: "438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:07.142698   73375 cri.go:89] found id: ""
	I0930 21:12:07.142708   73375 logs.go:276] 1 containers: [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c]
	I0930 21:12:07.142766   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.146979   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:07.147041   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:07.189192   73375 cri.go:89] found id: "a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:07.189223   73375 cri.go:89] found id: ""
	I0930 21:12:07.189232   73375 logs.go:276] 1 containers: [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f]
	I0930 21:12:07.189283   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.193408   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:07.193484   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:07.230538   73375 cri.go:89] found id: "1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:07.230562   73375 cri.go:89] found id: ""
	I0930 21:12:07.230571   73375 logs.go:276] 1 containers: [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf]
	I0930 21:12:07.230630   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.235482   73375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:07.235573   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:07.274180   73375 cri.go:89] found id: ""
	I0930 21:12:07.274215   73375 logs.go:276] 0 containers: []
	W0930 21:12:07.274226   73375 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:07.274233   73375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:07.274312   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:07.312851   73375 cri.go:89] found id: "6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:07.312876   73375 cri.go:89] found id: "298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:07.312882   73375 cri.go:89] found id: ""
	I0930 21:12:07.312890   73375 logs.go:276] 2 containers: [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e]
	I0930 21:12:07.312947   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.317386   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.321912   73375 logs.go:123] Gathering logs for kube-proxy [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f] ...
	I0930 21:12:07.321940   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:07.361674   73375 logs.go:123] Gathering logs for storage-provisioner [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55] ...
	I0930 21:12:07.361701   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:07.398555   73375 logs.go:123] Gathering logs for storage-provisioner [298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e] ...
	I0930 21:12:07.398615   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:07.432511   73375 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:07.432540   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:07.919639   73375 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:07.919678   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:07.935038   73375 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:07.935067   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:08.059404   73375 logs.go:123] Gathering logs for kube-apiserver [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122] ...
	I0930 21:12:08.059435   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:08.114569   73375 logs.go:123] Gathering logs for kube-scheduler [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c] ...
	I0930 21:12:08.114605   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:08.153409   73375 logs.go:123] Gathering logs for container status ...
	I0930 21:12:08.153447   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:08.193155   73375 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:08.193187   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:08.260774   73375 logs.go:123] Gathering logs for etcd [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c] ...
	I0930 21:12:08.260814   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:08.351488   73375 logs.go:123] Gathering logs for coredns [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7] ...
	I0930 21:12:08.351519   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:08.387971   73375 logs.go:123] Gathering logs for kube-controller-manager [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf] ...
	I0930 21:12:08.388012   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:06.805971   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:08.807886   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:08.248038   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:08.261409   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:08.261485   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:08.305564   73900 cri.go:89] found id: ""
	I0930 21:12:08.305591   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.305601   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:08.305610   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:08.305669   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:08.347816   73900 cri.go:89] found id: ""
	I0930 21:12:08.347844   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.347852   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:08.347858   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:08.347927   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:08.381662   73900 cri.go:89] found id: ""
	I0930 21:12:08.381695   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.381705   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:08.381712   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:08.381829   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:08.427366   73900 cri.go:89] found id: ""
	I0930 21:12:08.427396   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.427406   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:08.427413   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:08.427476   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:08.463419   73900 cri.go:89] found id: ""
	I0930 21:12:08.463443   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.463451   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:08.463457   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:08.463508   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:08.496999   73900 cri.go:89] found id: ""
	I0930 21:12:08.497023   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.497033   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:08.497040   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:08.497098   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:08.530410   73900 cri.go:89] found id: ""
	I0930 21:12:08.530434   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.530442   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:08.530447   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:08.530495   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:08.563191   73900 cri.go:89] found id: ""
	I0930 21:12:08.563224   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.563235   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:08.563244   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:08.563258   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:08.640305   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:08.640341   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:08.676404   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:08.676431   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:08.729676   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:08.729736   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:08.743282   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:08.743310   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:08.811334   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
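	(Editor's note, illustrative only.) The recurring "connection to the server localhost:8443 was refused" error above follows directly from the empty crictl probes: with no kube-apiserver container running, nothing is listening on the apiserver port, so every kubectl call fails. A quick way to confirm that from the node, sketched here as a small Go program rather than minikube's own check, is to attempt a plain TCP dial:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// localhost:8443 is the apiserver address the failing kubectl calls above use.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Matches the "connection refused" seen in the log when no apiserver is up.
		fmt.Println("nothing listening on 8443:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8443")
}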
	I0930 21:12:11.311643   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:11.329153   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:11.329229   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:11.369804   73900 cri.go:89] found id: ""
	I0930 21:12:11.369829   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.369838   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:11.369843   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:11.369896   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:11.408530   73900 cri.go:89] found id: ""
	I0930 21:12:11.408558   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.408569   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:11.408580   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:11.408663   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:11.446123   73900 cri.go:89] found id: ""
	I0930 21:12:11.446147   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.446155   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:11.446160   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:11.446206   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:11.484019   73900 cri.go:89] found id: ""
	I0930 21:12:11.484044   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.484052   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:11.484057   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:11.484118   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:11.521934   73900 cri.go:89] found id: ""
	I0930 21:12:11.521961   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.521971   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:11.521979   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:11.522042   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:11.561253   73900 cri.go:89] found id: ""
	I0930 21:12:11.561283   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.561293   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:11.561299   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:11.561352   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:11.602610   73900 cri.go:89] found id: ""
	I0930 21:12:11.602637   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.602648   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:11.602655   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:11.602760   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:11.637146   73900 cri.go:89] found id: ""
	I0930 21:12:11.637174   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.637185   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:11.637194   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:11.637208   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:11.707627   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:11.707651   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:11.707668   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:11.786047   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:11.786091   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:11.827128   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:11.827157   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:11.885504   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:11.885542   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:09.569584   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:11.570031   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:14.068184   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:10.950921   73375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:10.967834   73375 api_server.go:72] duration metric: took 4m15.348038807s to wait for apiserver process to appear ...
	I0930 21:12:10.967876   73375 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:12:10.967922   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:10.967990   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:11.006632   73375 cri.go:89] found id: "249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:11.006667   73375 cri.go:89] found id: ""
	I0930 21:12:11.006677   73375 logs.go:276] 1 containers: [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122]
	I0930 21:12:11.006738   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.010931   73375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:11.010994   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:11.045855   73375 cri.go:89] found id: "e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:11.045882   73375 cri.go:89] found id: ""
	I0930 21:12:11.045893   73375 logs.go:276] 1 containers: [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c]
	I0930 21:12:11.045953   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.050058   73375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:11.050134   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:11.090954   73375 cri.go:89] found id: "d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:11.090980   73375 cri.go:89] found id: ""
	I0930 21:12:11.090990   73375 logs.go:276] 1 containers: [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7]
	I0930 21:12:11.091041   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.095073   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:11.095150   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:11.137413   73375 cri.go:89] found id: "438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:11.137448   73375 cri.go:89] found id: ""
	I0930 21:12:11.137458   73375 logs.go:276] 1 containers: [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c]
	I0930 21:12:11.137516   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.141559   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:11.141638   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:11.176921   73375 cri.go:89] found id: "a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:11.176952   73375 cri.go:89] found id: ""
	I0930 21:12:11.176961   73375 logs.go:276] 1 containers: [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f]
	I0930 21:12:11.177010   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.181095   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:11.181158   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:11.215117   73375 cri.go:89] found id: "1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:11.215141   73375 cri.go:89] found id: ""
	I0930 21:12:11.215148   73375 logs.go:276] 1 containers: [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf]
	I0930 21:12:11.215195   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.218947   73375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:11.219003   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:11.253901   73375 cri.go:89] found id: ""
	I0930 21:12:11.253937   73375 logs.go:276] 0 containers: []
	W0930 21:12:11.253948   73375 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:11.253955   73375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:11.254010   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:11.293408   73375 cri.go:89] found id: "6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:11.293434   73375 cri.go:89] found id: "298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:11.293440   73375 cri.go:89] found id: ""
	I0930 21:12:11.293448   73375 logs.go:276] 2 containers: [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e]
	I0930 21:12:11.293562   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.297829   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.302572   73375 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:11.302596   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:11.378000   73375 logs.go:123] Gathering logs for coredns [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7] ...
	I0930 21:12:11.378037   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:11.415382   73375 logs.go:123] Gathering logs for kube-proxy [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f] ...
	I0930 21:12:11.415414   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:11.453703   73375 logs.go:123] Gathering logs for kube-controller-manager [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf] ...
	I0930 21:12:11.453729   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:11.517749   73375 logs.go:123] Gathering logs for storage-provisioner [298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e] ...
	I0930 21:12:11.517780   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:11.556543   73375 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:11.556576   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:12.023270   73375 logs.go:123] Gathering logs for container status ...
	I0930 21:12:12.023310   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:12.071138   73375 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:12.071170   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:12.086915   73375 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:12.086944   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:12.200046   73375 logs.go:123] Gathering logs for kube-apiserver [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122] ...
	I0930 21:12:12.200077   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:12.241447   73375 logs.go:123] Gathering logs for etcd [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c] ...
	I0930 21:12:12.241475   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:12.296574   73375 logs.go:123] Gathering logs for kube-scheduler [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c] ...
	I0930 21:12:12.296607   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:12.341982   73375 logs.go:123] Gathering logs for storage-provisioner [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55] ...
	I0930 21:12:12.342009   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:14.877590   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:12:14.882913   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 200:
	ok
	I0930 21:12:14.884088   73375 api_server.go:141] control plane version: v1.31.1
	I0930 21:12:14.884106   73375 api_server.go:131] duration metric: took 3.916223308s to wait for apiserver health ...
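	(Editor's note, illustrative only.) Once the apiserver process reappears (PID 73375, the default-k8s-diff-port cluster), the log switches from crictl probes to an HTTPS healthz probe against 192.168.61.93:8443, which returns 200. A self-contained sketch of such a probe is below; the endpoint is copied from the log, and skipping certificate verification is an assumption made purely to keep the example short, not how minikube's api_server.go authenticates to the cluster.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint copied from the log line above; adjust for your own cluster.
	const url = "https://192.168.61.93:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for brevity: a real caller should trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}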
	I0930 21:12:14.884113   73375 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:12:14.884134   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:14.884185   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:14.926932   73375 cri.go:89] found id: "249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:14.926952   73375 cri.go:89] found id: ""
	I0930 21:12:14.926960   73375 logs.go:276] 1 containers: [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122]
	I0930 21:12:14.927003   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:14.931044   73375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:14.931106   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:14.967622   73375 cri.go:89] found id: "e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:14.967645   73375 cri.go:89] found id: ""
	I0930 21:12:14.967652   73375 logs.go:276] 1 containers: [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c]
	I0930 21:12:14.967698   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:14.972152   73375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:14.972221   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:11.307501   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:13.307687   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:14.400848   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:14.413794   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:14.413882   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:14.449799   73900 cri.go:89] found id: ""
	I0930 21:12:14.449830   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.449841   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:14.449849   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:14.449902   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:14.486301   73900 cri.go:89] found id: ""
	I0930 21:12:14.486330   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.486357   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:14.486365   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:14.486427   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:14.520451   73900 cri.go:89] found id: ""
	I0930 21:12:14.520479   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.520487   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:14.520497   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:14.520558   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:14.554056   73900 cri.go:89] found id: ""
	I0930 21:12:14.554095   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.554107   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:14.554114   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:14.554178   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:14.594054   73900 cri.go:89] found id: ""
	I0930 21:12:14.594080   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.594088   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:14.594094   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:14.594142   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:14.630225   73900 cri.go:89] found id: ""
	I0930 21:12:14.630255   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.630278   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:14.630284   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:14.630335   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:14.663006   73900 cri.go:89] found id: ""
	I0930 21:12:14.663043   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.663054   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:14.663061   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:14.663119   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:14.699815   73900 cri.go:89] found id: ""
	I0930 21:12:14.699845   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.699858   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:14.699870   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:14.699886   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:14.751465   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:14.751509   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:14.766401   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:14.766432   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:14.832979   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:14.833002   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:14.833016   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:14.918011   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:14.918051   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:17.458886   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:17.471833   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:17.471918   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:17.505109   73900 cri.go:89] found id: ""
	I0930 21:12:17.505135   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.505145   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:17.505151   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:17.505213   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:17.538091   73900 cri.go:89] found id: ""
	I0930 21:12:17.538118   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.538129   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:17.538136   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:17.538308   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:17.571668   73900 cri.go:89] found id: ""
	I0930 21:12:17.571694   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.571705   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:17.571712   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:17.571770   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:17.607391   73900 cri.go:89] found id: ""
	I0930 21:12:17.607431   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.607442   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:17.607452   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:17.607519   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:17.643271   73900 cri.go:89] found id: ""
	I0930 21:12:17.643297   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.643305   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:17.643313   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:17.643382   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:17.676653   73900 cri.go:89] found id: ""
	I0930 21:12:17.676687   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.676698   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:17.676708   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:17.676772   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:17.709570   73900 cri.go:89] found id: ""
	I0930 21:12:17.709602   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.709610   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:17.709615   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:17.709671   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:17.747857   73900 cri.go:89] found id: ""
	I0930 21:12:17.747883   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.747891   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:17.747902   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:17.747915   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:15.010874   73375 cri.go:89] found id: "d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:15.010898   73375 cri.go:89] found id: ""
	I0930 21:12:15.010905   73375 logs.go:276] 1 containers: [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7]
	I0930 21:12:15.010947   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.015490   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:15.015582   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:15.051182   73375 cri.go:89] found id: "438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:15.051210   73375 cri.go:89] found id: ""
	I0930 21:12:15.051220   73375 logs.go:276] 1 containers: [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c]
	I0930 21:12:15.051291   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.055057   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:15.055107   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:15.093126   73375 cri.go:89] found id: "a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:15.093150   73375 cri.go:89] found id: ""
	I0930 21:12:15.093159   73375 logs.go:276] 1 containers: [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f]
	I0930 21:12:15.093214   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.097138   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:15.097200   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:15.131676   73375 cri.go:89] found id: "1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:15.131704   73375 cri.go:89] found id: ""
	I0930 21:12:15.131716   73375 logs.go:276] 1 containers: [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf]
	I0930 21:12:15.131773   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.135550   73375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:15.135620   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:15.170579   73375 cri.go:89] found id: ""
	I0930 21:12:15.170604   73375 logs.go:276] 0 containers: []
	W0930 21:12:15.170612   73375 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:15.170618   73375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:15.170672   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:15.205190   73375 cri.go:89] found id: "6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:15.205216   73375 cri.go:89] found id: "298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:15.205222   73375 cri.go:89] found id: ""
	I0930 21:12:15.205231   73375 logs.go:276] 2 containers: [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e]
	I0930 21:12:15.205287   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.209426   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.212981   73375 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:15.213002   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:15.281543   73375 logs.go:123] Gathering logs for kube-proxy [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f] ...
	I0930 21:12:15.281582   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:15.325855   73375 logs.go:123] Gathering logs for container status ...
	I0930 21:12:15.325895   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:15.367382   73375 logs.go:123] Gathering logs for etcd [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c] ...
	I0930 21:12:15.367429   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:15.441395   73375 logs.go:123] Gathering logs for coredns [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7] ...
	I0930 21:12:15.441432   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:15.482487   73375 logs.go:123] Gathering logs for kube-scheduler [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c] ...
	I0930 21:12:15.482518   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:15.520298   73375 logs.go:123] Gathering logs for kube-controller-manager [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf] ...
	I0930 21:12:15.520335   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:15.572596   73375 logs.go:123] Gathering logs for storage-provisioner [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55] ...
	I0930 21:12:15.572626   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:15.618087   73375 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:15.618120   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:15.634125   73375 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:15.634151   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:15.744355   73375 logs.go:123] Gathering logs for kube-apiserver [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122] ...
	I0930 21:12:15.744390   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:15.799312   73375 logs.go:123] Gathering logs for storage-provisioner [298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e] ...
	I0930 21:12:15.799345   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:15.838934   73375 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:15.838969   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:18.759947   73375 system_pods.go:59] 8 kube-system pods found
	I0930 21:12:18.759976   73375 system_pods.go:61] "coredns-7c65d6cfc9-jg8ph" [46ba2867-485a-4b67-af4b-4de2c607d172] Running
	I0930 21:12:18.759981   73375 system_pods.go:61] "etcd-no-preload-997816" [1def50bb-1f1b-4d25-b797-38d5b782a674] Running
	I0930 21:12:18.759985   73375 system_pods.go:61] "kube-apiserver-no-preload-997816" [67313588-adcb-4d3f-ba8a-4e7a1ea5127b] Running
	I0930 21:12:18.759989   73375 system_pods.go:61] "kube-controller-manager-no-preload-997816" [b471888b-d4e6-4768-a246-f234ffcbf1c6] Running
	I0930 21:12:18.759992   73375 system_pods.go:61] "kube-proxy-klcv8" [133bcd7f-667d-4969-b063-d33e2c8eed0f] Running
	I0930 21:12:18.759995   73375 system_pods.go:61] "kube-scheduler-no-preload-997816" [130a7a05-0889-4562-afc6-bee3ba4970a1] Running
	I0930 21:12:18.760001   73375 system_pods.go:61] "metrics-server-6867b74b74-c2wpn" [2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:12:18.760006   73375 system_pods.go:61] "storage-provisioner" [01617edf-b831-48d3-9002-279b64f6389c] Running
	I0930 21:12:18.760016   73375 system_pods.go:74] duration metric: took 3.875896906s to wait for pod list to return data ...
	I0930 21:12:18.760024   73375 default_sa.go:34] waiting for default service account to be created ...
	I0930 21:12:18.762755   73375 default_sa.go:45] found service account: "default"
	I0930 21:12:18.762777   73375 default_sa.go:55] duration metric: took 2.746721ms for default service account to be created ...
	I0930 21:12:18.762787   73375 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 21:12:18.769060   73375 system_pods.go:86] 8 kube-system pods found
	I0930 21:12:18.769086   73375 system_pods.go:89] "coredns-7c65d6cfc9-jg8ph" [46ba2867-485a-4b67-af4b-4de2c607d172] Running
	I0930 21:12:18.769091   73375 system_pods.go:89] "etcd-no-preload-997816" [1def50bb-1f1b-4d25-b797-38d5b782a674] Running
	I0930 21:12:18.769095   73375 system_pods.go:89] "kube-apiserver-no-preload-997816" [67313588-adcb-4d3f-ba8a-4e7a1ea5127b] Running
	I0930 21:12:18.769099   73375 system_pods.go:89] "kube-controller-manager-no-preload-997816" [b471888b-d4e6-4768-a246-f234ffcbf1c6] Running
	I0930 21:12:18.769104   73375 system_pods.go:89] "kube-proxy-klcv8" [133bcd7f-667d-4969-b063-d33e2c8eed0f] Running
	I0930 21:12:18.769107   73375 system_pods.go:89] "kube-scheduler-no-preload-997816" [130a7a05-0889-4562-afc6-bee3ba4970a1] Running
	I0930 21:12:18.769113   73375 system_pods.go:89] "metrics-server-6867b74b74-c2wpn" [2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:12:18.769129   73375 system_pods.go:89] "storage-provisioner" [01617edf-b831-48d3-9002-279b64f6389c] Running
	I0930 21:12:18.769136   73375 system_pods.go:126] duration metric: took 6.344583ms to wait for k8s-apps to be running ...
	I0930 21:12:18.769144   73375 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 21:12:18.769183   73375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:12:18.785488   73375 system_svc.go:56] duration metric: took 16.335135ms WaitForService to wait for kubelet
	I0930 21:12:18.785544   73375 kubeadm.go:582] duration metric: took 4m23.165751441s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:12:18.785572   73375 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:12:18.789308   73375 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:12:18.789340   73375 node_conditions.go:123] node cpu capacity is 2
	I0930 21:12:18.789356   73375 node_conditions.go:105] duration metric: took 3.778609ms to run NodePressure ...
	I0930 21:12:18.789370   73375 start.go:241] waiting for startup goroutines ...
	I0930 21:12:18.789379   73375 start.go:246] waiting for cluster config update ...
	I0930 21:12:18.789394   73375 start.go:255] writing updated cluster config ...
	I0930 21:12:18.789688   73375 ssh_runner.go:195] Run: rm -f paused
	I0930 21:12:18.837384   73375 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 21:12:18.839699   73375 out.go:177] * Done! kubectl is now configured to use "no-preload-997816" cluster and "default" namespace by default
	I0930 21:12:16.070108   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:18.569568   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:15.308534   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:15.308581   73707 pod_ready.go:82] duration metric: took 4m0.007893146s for pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace to be "Ready" ...
	E0930 21:12:15.308595   73707 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0930 21:12:15.308605   73707 pod_ready.go:39] duration metric: took 4m2.806797001s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:12:15.308621   73707 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:12:15.308657   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:15.308722   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:15.353287   73707 cri.go:89] found id: "f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:15.353348   73707 cri.go:89] found id: ""
	I0930 21:12:15.353359   73707 logs.go:276] 1 containers: [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140]
	I0930 21:12:15.353416   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.357602   73707 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:15.357696   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:15.399289   73707 cri.go:89] found id: "7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:15.399325   73707 cri.go:89] found id: ""
	I0930 21:12:15.399332   73707 logs.go:276] 1 containers: [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711]
	I0930 21:12:15.399377   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.404757   73707 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:15.404832   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:15.454396   73707 cri.go:89] found id: "ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:15.454423   73707 cri.go:89] found id: ""
	I0930 21:12:15.454433   73707 logs.go:276] 1 containers: [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49]
	I0930 21:12:15.454493   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.458660   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:15.458743   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:15.493941   73707 cri.go:89] found id: "0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:15.493971   73707 cri.go:89] found id: ""
	I0930 21:12:15.493982   73707 logs.go:276] 1 containers: [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4]
	I0930 21:12:15.494055   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.498541   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:15.498628   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:15.535354   73707 cri.go:89] found id: "5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:15.535385   73707 cri.go:89] found id: ""
	I0930 21:12:15.535395   73707 logs.go:276] 1 containers: [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8]
	I0930 21:12:15.535454   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.540097   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:15.540168   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:15.583969   73707 cri.go:89] found id: "d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:15.583996   73707 cri.go:89] found id: ""
	I0930 21:12:15.584003   73707 logs.go:276] 1 containers: [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8]
	I0930 21:12:15.584051   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.589193   73707 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:15.589260   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:15.629413   73707 cri.go:89] found id: ""
	I0930 21:12:15.629440   73707 logs.go:276] 0 containers: []
	W0930 21:12:15.629449   73707 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:15.629454   73707 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:15.629506   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:15.670129   73707 cri.go:89] found id: "3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:15.670160   73707 cri.go:89] found id: "1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:15.670166   73707 cri.go:89] found id: ""
	I0930 21:12:15.670175   73707 logs.go:276] 2 containers: [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342]
	I0930 21:12:15.670237   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.674227   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.678252   73707 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:15.678276   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:15.758280   73707 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:15.758319   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:15.778191   73707 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:15.778222   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:15.930379   73707 logs.go:123] Gathering logs for coredns [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49] ...
	I0930 21:12:15.930422   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:15.966732   73707 logs.go:123] Gathering logs for storage-provisioner [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd] ...
	I0930 21:12:15.966759   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:16.004304   73707 logs.go:123] Gathering logs for storage-provisioner [1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342] ...
	I0930 21:12:16.004337   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:16.043705   73707 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:16.043733   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:16.600173   73707 logs.go:123] Gathering logs for container status ...
	I0930 21:12:16.600210   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:16.651837   73707 logs.go:123] Gathering logs for kube-apiserver [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140] ...
	I0930 21:12:16.651868   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:16.695122   73707 logs.go:123] Gathering logs for etcd [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711] ...
	I0930 21:12:16.695155   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:16.737622   73707 logs.go:123] Gathering logs for kube-scheduler [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4] ...
	I0930 21:12:16.737671   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:16.772913   73707 logs.go:123] Gathering logs for kube-proxy [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8] ...
	I0930 21:12:16.772944   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:16.808196   73707 logs.go:123] Gathering logs for kube-controller-manager [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8] ...
	I0930 21:12:16.808224   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:19.368150   73707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:19.385771   73707 api_server.go:72] duration metric: took 4m14.101602019s to wait for apiserver process to appear ...
	I0930 21:12:19.385798   73707 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:12:19.385831   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:19.385889   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:19.421325   73707 cri.go:89] found id: "f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:19.421354   73707 cri.go:89] found id: ""
	I0930 21:12:19.421364   73707 logs.go:276] 1 containers: [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140]
	I0930 21:12:19.421426   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.428045   73707 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:19.428107   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:19.466034   73707 cri.go:89] found id: "7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:19.466054   73707 cri.go:89] found id: ""
	I0930 21:12:19.466061   73707 logs.go:276] 1 containers: [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711]
	I0930 21:12:19.466102   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.470155   73707 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:19.470222   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:19.504774   73707 cri.go:89] found id: "ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:19.504799   73707 cri.go:89] found id: ""
	I0930 21:12:19.504806   73707 logs.go:276] 1 containers: [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49]
	I0930 21:12:19.504869   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.509044   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:19.509134   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:19.544204   73707 cri.go:89] found id: "0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:19.544228   73707 cri.go:89] found id: ""
	I0930 21:12:19.544235   73707 logs.go:276] 1 containers: [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4]
	I0930 21:12:19.544293   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.549103   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:19.549194   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:19.591381   73707 cri.go:89] found id: "5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:19.591416   73707 cri.go:89] found id: ""
	I0930 21:12:19.591425   73707 logs.go:276] 1 containers: [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8]
	I0930 21:12:19.591472   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.595522   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:19.595621   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:19.634816   73707 cri.go:89] found id: "d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:19.634841   73707 cri.go:89] found id: ""
	I0930 21:12:19.634850   73707 logs.go:276] 1 containers: [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8]
	I0930 21:12:19.634894   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.639391   73707 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:19.639450   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:19.675056   73707 cri.go:89] found id: ""
	I0930 21:12:19.675084   73707 logs.go:276] 0 containers: []
	W0930 21:12:19.675095   73707 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:19.675102   73707 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:19.675159   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:19.708641   73707 cri.go:89] found id: "3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:19.708666   73707 cri.go:89] found id: "1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:19.708672   73707 cri.go:89] found id: ""
	I0930 21:12:19.708682   73707 logs.go:276] 2 containers: [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342]
	I0930 21:12:19.708738   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.712636   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.716653   73707 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:19.716680   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:19.785159   73707 logs.go:123] Gathering logs for kube-proxy [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8] ...
	I0930 21:12:19.785203   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:19.823462   73707 logs.go:123] Gathering logs for storage-provisioner [1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342] ...
	I0930 21:12:19.823490   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:19.856776   73707 logs.go:123] Gathering logs for coredns [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49] ...
	I0930 21:12:19.856808   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:19.893919   73707 logs.go:123] Gathering logs for kube-scheduler [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4] ...
	I0930 21:12:19.893948   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:19.930932   73707 logs.go:123] Gathering logs for kube-controller-manager [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8] ...
	I0930 21:12:19.930978   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:19.988120   73707 logs.go:123] Gathering logs for storage-provisioner [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd] ...
	I0930 21:12:19.988164   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:20.027576   73707 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:20.027618   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:20.041523   73707 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:20.041557   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:20.157598   73707 logs.go:123] Gathering logs for kube-apiserver [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140] ...
	I0930 21:12:20.157630   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:20.213353   73707 logs.go:123] Gathering logs for etcd [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711] ...
	I0930 21:12:20.213384   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:20.254502   73707 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:20.254533   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:17.824584   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:17.824623   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:17.862613   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:17.862643   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:17.915954   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:17.915992   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:17.929824   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:17.929853   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:17.999697   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:20.500449   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:20.514042   73900 kubeadm.go:597] duration metric: took 4m1.91059878s to restartPrimaryControlPlane
	W0930 21:12:20.514119   73900 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0930 21:12:20.514158   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0930 21:12:21.675376   73900 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.161176988s)
	I0930 21:12:21.675465   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:12:21.689467   73900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:12:21.698504   73900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:12:21.708418   73900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:12:21.708437   73900 kubeadm.go:157] found existing configuration files:
	
	I0930 21:12:21.708483   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:12:21.716960   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:12:21.717019   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:12:21.727610   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:12:21.736212   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:12:21.736275   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:12:21.745512   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:12:21.754299   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:12:21.754366   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:12:21.763724   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:12:21.772521   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:12:21.772595   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:12:21.782980   73900 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 21:12:21.850463   73900 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0930 21:12:21.850558   73900 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 21:12:21.991521   73900 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 21:12:21.991706   73900 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 21:12:21.991849   73900 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0930 21:12:22.174876   73900 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 21:12:22.177037   73900 out.go:235]   - Generating certificates and keys ...
	I0930 21:12:22.177155   73900 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 21:12:22.177253   73900 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 21:12:22.177379   73900 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 21:12:22.178789   73900 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 21:12:22.178860   73900 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 21:12:22.178907   73900 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 21:12:22.178961   73900 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 21:12:22.179017   73900 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 21:12:22.179139   73900 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 21:12:22.179247   73900 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 21:12:22.179310   73900 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 21:12:22.179398   73900 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 21:12:22.253256   73900 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 21:12:22.661237   73900 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 21:12:22.947987   73900 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 21:12:23.170995   73900 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 21:12:23.184583   73900 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 21:12:23.185770   73900 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 21:12:23.185813   73900 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 21:12:23.334769   73900 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 21:12:21.069777   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:23.070328   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:20.696951   73707 logs.go:123] Gathering logs for container status ...
	I0930 21:12:20.696989   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:23.236734   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:12:23.241215   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 200:
	ok
	I0930 21:12:23.242629   73707 api_server.go:141] control plane version: v1.31.1
	I0930 21:12:23.242651   73707 api_server.go:131] duration metric: took 3.856847284s to wait for apiserver health ...
	I0930 21:12:23.242660   73707 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:12:23.242680   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:23.242724   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:23.279601   73707 cri.go:89] found id: "f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:23.279626   73707 cri.go:89] found id: ""
	I0930 21:12:23.279633   73707 logs.go:276] 1 containers: [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140]
	I0930 21:12:23.279692   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.283900   73707 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:23.283977   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:23.320360   73707 cri.go:89] found id: "7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:23.320397   73707 cri.go:89] found id: ""
	I0930 21:12:23.320410   73707 logs.go:276] 1 containers: [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711]
	I0930 21:12:23.320472   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.324745   73707 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:23.324825   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:23.368001   73707 cri.go:89] found id: "ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:23.368024   73707 cri.go:89] found id: ""
	I0930 21:12:23.368034   73707 logs.go:276] 1 containers: [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49]
	I0930 21:12:23.368095   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.372001   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:23.372077   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:23.408203   73707 cri.go:89] found id: "0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:23.408234   73707 cri.go:89] found id: ""
	I0930 21:12:23.408242   73707 logs.go:276] 1 containers: [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4]
	I0930 21:12:23.408299   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.412328   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:23.412397   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:23.462142   73707 cri.go:89] found id: "5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:23.462173   73707 cri.go:89] found id: ""
	I0930 21:12:23.462183   73707 logs.go:276] 1 containers: [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8]
	I0930 21:12:23.462247   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.466257   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:23.466336   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:23.509075   73707 cri.go:89] found id: "d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:23.509098   73707 cri.go:89] found id: ""
	I0930 21:12:23.509109   73707 logs.go:276] 1 containers: [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8]
	I0930 21:12:23.509169   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.513362   73707 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:23.513441   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:23.553711   73707 cri.go:89] found id: ""
	I0930 21:12:23.553738   73707 logs.go:276] 0 containers: []
	W0930 21:12:23.553746   73707 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:23.553752   73707 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:23.553797   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:23.599596   73707 cri.go:89] found id: "3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:23.599629   73707 cri.go:89] found id: "1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:23.599635   73707 cri.go:89] found id: ""
	I0930 21:12:23.599644   73707 logs.go:276] 2 containers: [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342]
	I0930 21:12:23.599699   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.603589   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.607827   73707 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:23.607855   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:23.621046   73707 logs.go:123] Gathering logs for etcd [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711] ...
	I0930 21:12:23.621069   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:23.664703   73707 logs.go:123] Gathering logs for storage-provisioner [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd] ...
	I0930 21:12:23.664735   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:23.700614   73707 logs.go:123] Gathering logs for kube-scheduler [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4] ...
	I0930 21:12:23.700644   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:23.738113   73707 logs.go:123] Gathering logs for kube-proxy [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8] ...
	I0930 21:12:23.738143   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:23.775706   73707 logs.go:123] Gathering logs for kube-controller-manager [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8] ...
	I0930 21:12:23.775733   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:23.840419   73707 logs.go:123] Gathering logs for storage-provisioner [1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342] ...
	I0930 21:12:23.840454   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:23.876827   73707 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:23.876860   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:23.943636   73707 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:23.943675   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:24.052729   73707 logs.go:123] Gathering logs for kube-apiserver [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140] ...
	I0930 21:12:24.052763   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:24.106526   73707 logs.go:123] Gathering logs for coredns [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49] ...
	I0930 21:12:24.106556   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:24.146914   73707 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:24.146941   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:24.527753   73707 logs.go:123] Gathering logs for container status ...
	I0930 21:12:24.527804   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:27.077689   73707 system_pods.go:59] 8 kube-system pods found
	I0930 21:12:27.077721   73707 system_pods.go:61] "coredns-7c65d6cfc9-hdjjq" [5672cd58-4d3f-409e-b279-f4027fe09aea] Running
	I0930 21:12:27.077726   73707 system_pods.go:61] "etcd-default-k8s-diff-port-291511" [228b61a2-a110-4029-96e5-950e44f5290f] Running
	I0930 21:12:27.077731   73707 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-291511" [a6991ee1-6c61-49b5-adb5-fb6175386bfe] Running
	I0930 21:12:27.077739   73707 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-291511" [4ba3f2a2-ac38-4483-bbd0-f21d934d97d1] Running
	I0930 21:12:27.077744   73707 system_pods.go:61] "kube-proxy-kwp22" [87e5295f-3aaa-4222-a61a-942354f79f9b] Running
	I0930 21:12:27.077749   73707 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-291511" [b03fc09c-ddee-4593-9be5-8117892932f5] Running
	I0930 21:12:27.077759   73707 system_pods.go:61] "metrics-server-6867b74b74-txb2j" [6f0ec8d2-5528-4f70-807c-42cbabae23bb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:12:27.077766   73707 system_pods.go:61] "storage-provisioner" [32053345-1ff9-45b1-aa70-e746926b305d] Running
	I0930 21:12:27.077774   73707 system_pods.go:74] duration metric: took 3.835107861s to wait for pod list to return data ...
	I0930 21:12:27.077783   73707 default_sa.go:34] waiting for default service account to be created ...
	I0930 21:12:27.082269   73707 default_sa.go:45] found service account: "default"
	I0930 21:12:27.082292   73707 default_sa.go:55] duration metric: took 4.502111ms for default service account to be created ...
	I0930 21:12:27.082299   73707 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 21:12:27.086738   73707 system_pods.go:86] 8 kube-system pods found
	I0930 21:12:27.086764   73707 system_pods.go:89] "coredns-7c65d6cfc9-hdjjq" [5672cd58-4d3f-409e-b279-f4027fe09aea] Running
	I0930 21:12:27.086770   73707 system_pods.go:89] "etcd-default-k8s-diff-port-291511" [228b61a2-a110-4029-96e5-950e44f5290f] Running
	I0930 21:12:27.086775   73707 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-291511" [a6991ee1-6c61-49b5-adb5-fb6175386bfe] Running
	I0930 21:12:27.086781   73707 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-291511" [4ba3f2a2-ac38-4483-bbd0-f21d934d97d1] Running
	I0930 21:12:27.086784   73707 system_pods.go:89] "kube-proxy-kwp22" [87e5295f-3aaa-4222-a61a-942354f79f9b] Running
	I0930 21:12:27.086788   73707 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-291511" [b03fc09c-ddee-4593-9be5-8117892932f5] Running
	I0930 21:12:27.086796   73707 system_pods.go:89] "metrics-server-6867b74b74-txb2j" [6f0ec8d2-5528-4f70-807c-42cbabae23bb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:12:27.086803   73707 system_pods.go:89] "storage-provisioner" [32053345-1ff9-45b1-aa70-e746926b305d] Running
	I0930 21:12:27.086811   73707 system_pods.go:126] duration metric: took 4.506701ms to wait for k8s-apps to be running ...
	I0930 21:12:27.086820   73707 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 21:12:27.086868   73707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:12:27.102286   73707 system_svc.go:56] duration metric: took 15.455734ms WaitForService to wait for kubelet
	I0930 21:12:27.102325   73707 kubeadm.go:582] duration metric: took 4m21.818162682s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:12:27.102346   73707 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:12:27.105332   73707 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:12:27.105354   73707 node_conditions.go:123] node cpu capacity is 2
	I0930 21:12:27.105364   73707 node_conditions.go:105] duration metric: took 3.013328ms to run NodePressure ...
	I0930 21:12:27.105375   73707 start.go:241] waiting for startup goroutines ...
	I0930 21:12:27.105382   73707 start.go:246] waiting for cluster config update ...
	I0930 21:12:27.105393   73707 start.go:255] writing updated cluster config ...
	I0930 21:12:27.105669   73707 ssh_runner.go:195] Run: rm -f paused
	I0930 21:12:27.156804   73707 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 21:12:27.158887   73707 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-291511" cluster and "default" namespace by default
	I0930 21:12:23.336604   73900 out.go:235]   - Booting up control plane ...
	I0930 21:12:23.336747   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 21:12:23.345737   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 21:12:23.346784   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 21:12:23.347559   73900 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 21:12:23.351009   73900 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0930 21:12:25.568654   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:27.569042   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:29.570978   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:32.069065   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:34.069347   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:36.568228   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:38.569351   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:40.569552   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:43.069456   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:45.569254   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:47.569647   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:49.569997   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:52.069284   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:54.069870   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:54.563572   73256 pod_ready.go:82] duration metric: took 4m0.000782781s for pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace to be "Ready" ...
	E0930 21:12:54.563605   73256 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0930 21:12:54.563620   73256 pod_ready.go:39] duration metric: took 4m9.49309261s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:12:54.563643   73256 kubeadm.go:597] duration metric: took 4m18.399318281s to restartPrimaryControlPlane
	W0930 21:12:54.563698   73256 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0930 21:12:54.563721   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0930 21:13:03.351822   73900 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0930 21:13:03.352632   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:03.352833   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:13:08.353230   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:08.353429   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:13:20.634441   73256 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.070691776s)
	I0930 21:13:20.634529   73256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:13:20.650312   73256 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:13:20.661782   73256 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:13:20.671436   73256 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:13:20.671463   73256 kubeadm.go:157] found existing configuration files:
	
	I0930 21:13:20.671504   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:13:20.681860   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:13:20.681934   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:13:20.692529   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:13:20.701507   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:13:20.701585   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:13:20.711211   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:13:20.721856   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:13:20.721928   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:13:20.733194   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:13:20.743887   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:13:20.743955   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
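(Editor's note) The block above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed before kubeadm init runs on the next line. A condensed, hedged sketch of that pattern (endpoint taken from the log):

    # Hedged sketch of the cleanup loop logged above.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done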
	I0930 21:13:20.753546   73256 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 21:13:20.799739   73256 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 21:13:20.799812   73256 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 21:13:20.906464   73256 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 21:13:20.906569   73256 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 21:13:20.906647   73256 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 21:13:20.919451   73256 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 21:13:20.921440   73256 out.go:235]   - Generating certificates and keys ...
	I0930 21:13:20.921550   73256 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 21:13:20.921645   73256 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 21:13:20.921758   73256 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 21:13:20.921845   73256 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 21:13:20.921945   73256 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 21:13:20.922021   73256 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 21:13:20.922117   73256 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 21:13:20.922190   73256 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 21:13:20.922262   73256 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 21:13:20.922336   73256 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 21:13:20.922370   73256 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 21:13:20.922459   73256 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 21:13:21.079731   73256 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 21:13:21.214199   73256 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 21:13:21.344405   73256 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 21:13:21.605006   73256 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 21:13:21.718432   73256 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 21:13:21.718967   73256 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 21:13:21.723434   73256 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 21:13:18.354150   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:18.354468   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:13:21.725304   73256 out.go:235]   - Booting up control plane ...
	I0930 21:13:21.725435   73256 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 21:13:21.725526   73256 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 21:13:21.725637   73256 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 21:13:21.743582   73256 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 21:13:21.749533   73256 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 21:13:21.749605   73256 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 21:13:21.873716   73256 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 21:13:21.873867   73256 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 21:13:22.375977   73256 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.402537ms
	I0930 21:13:22.376098   73256 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 21:13:27.379510   73256 kubeadm.go:310] [api-check] The API server is healthy after 5.001265494s
	I0930 21:13:27.392047   73256 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 21:13:27.409550   73256 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 21:13:27.447693   73256 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 21:13:27.447896   73256 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-256103 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 21:13:27.462338   73256 kubeadm.go:310] [bootstrap-token] Using token: k5ffj3.6sqmy7prwrlhrg7s
	I0930 21:13:27.463967   73256 out.go:235]   - Configuring RBAC rules ...
	I0930 21:13:27.464076   73256 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 21:13:27.472107   73256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 21:13:27.481172   73256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 21:13:27.485288   73256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 21:13:27.492469   73256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 21:13:27.496822   73256 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 21:13:27.789372   73256 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 21:13:28.210679   73256 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 21:13:28.784869   73256 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 21:13:28.785859   73256 kubeadm.go:310] 
	I0930 21:13:28.785954   73256 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 21:13:28.785967   73256 kubeadm.go:310] 
	I0930 21:13:28.786045   73256 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 21:13:28.786077   73256 kubeadm.go:310] 
	I0930 21:13:28.786121   73256 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 21:13:28.786219   73256 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 21:13:28.786286   73256 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 21:13:28.786304   73256 kubeadm.go:310] 
	I0930 21:13:28.786395   73256 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 21:13:28.786405   73256 kubeadm.go:310] 
	I0930 21:13:28.786464   73256 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 21:13:28.786474   73256 kubeadm.go:310] 
	I0930 21:13:28.786546   73256 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 21:13:28.786658   73256 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 21:13:28.786754   73256 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 21:13:28.786763   73256 kubeadm.go:310] 
	I0930 21:13:28.786870   73256 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 21:13:28.786991   73256 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 21:13:28.787000   73256 kubeadm.go:310] 
	I0930 21:13:28.787122   73256 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k5ffj3.6sqmy7prwrlhrg7s \
	I0930 21:13:28.787240   73256 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a \
	I0930 21:13:28.787274   73256 kubeadm.go:310] 	--control-plane 
	I0930 21:13:28.787290   73256 kubeadm.go:310] 
	I0930 21:13:28.787415   73256 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 21:13:28.787425   73256 kubeadm.go:310] 
	I0930 21:13:28.787547   73256 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k5ffj3.6sqmy7prwrlhrg7s \
	I0930 21:13:28.787713   73256 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a 
	I0930 21:13:28.788805   73256 kubeadm.go:310] W0930 21:13:20.776526    2530 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 21:13:28.789058   73256 kubeadm.go:310] W0930 21:13:20.777323    2530 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 21:13:28.789158   73256 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 21:13:28.789178   73256 cni.go:84] Creating CNI manager for ""
	I0930 21:13:28.789187   73256 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:13:28.791049   73256 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 21:13:28.792381   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:13:28.802872   73256 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 21:13:28.819952   73256 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 21:13:28.820054   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:28.820070   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-256103 minikube.k8s.io/updated_at=2024_09_30T21_13_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022 minikube.k8s.io/name=embed-certs-256103 minikube.k8s.io/primary=true
	I0930 21:13:28.859770   73256 ops.go:34] apiserver oom_adj: -16
	I0930 21:13:29.026274   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:29.526992   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:30.026700   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:30.526962   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:31.027165   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:31.526632   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:32.027019   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:32.526522   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:33.026739   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:33.116028   73256 kubeadm.go:1113] duration metric: took 4.296036786s to wait for elevateKubeSystemPrivileges
	I0930 21:13:33.116067   73256 kubeadm.go:394] duration metric: took 4m57.005787187s to StartCluster
	I0930 21:13:33.116088   73256 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:13:33.116175   73256 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:13:33.117855   73256 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:13:33.118142   73256 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 21:13:33.118263   73256 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 21:13:33.118420   73256 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-256103"
	I0930 21:13:33.118373   73256 config.go:182] Loaded profile config "embed-certs-256103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:13:33.118446   73256 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-256103"
	I0930 21:13:33.118442   73256 addons.go:69] Setting default-storageclass=true in profile "embed-certs-256103"
	W0930 21:13:33.118453   73256 addons.go:243] addon storage-provisioner should already be in state true
	I0930 21:13:33.118464   73256 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-256103"
	I0930 21:13:33.118482   73256 host.go:66] Checking if "embed-certs-256103" exists ...
	I0930 21:13:33.118515   73256 addons.go:69] Setting metrics-server=true in profile "embed-certs-256103"
	I0930 21:13:33.118554   73256 addons.go:234] Setting addon metrics-server=true in "embed-certs-256103"
	W0930 21:13:33.118564   73256 addons.go:243] addon metrics-server should already be in state true
	I0930 21:13:33.118594   73256 host.go:66] Checking if "embed-certs-256103" exists ...
	I0930 21:13:33.118807   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.118840   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.118880   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.118926   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.118941   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.118965   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.120042   73256 out.go:177] * Verifying Kubernetes components...
	I0930 21:13:33.121706   73256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:13:33.136554   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36203
	I0930 21:13:33.137096   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.137304   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44465
	I0930 21:13:33.137664   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.137696   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.137789   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.138013   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.138176   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:13:33.138317   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.138336   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.139163   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37389
	I0930 21:13:33.139176   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.139733   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.139903   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.139955   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.140284   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.140311   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.140780   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.141336   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.141375   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.141814   73256 addons.go:234] Setting addon default-storageclass=true in "embed-certs-256103"
	W0930 21:13:33.141832   73256 addons.go:243] addon default-storageclass should already be in state true
	I0930 21:13:33.141857   73256 host.go:66] Checking if "embed-certs-256103" exists ...
	I0930 21:13:33.142143   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.142177   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.161937   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I0930 21:13:33.162096   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33657
	I0930 21:13:33.162249   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42531
	I0930 21:13:33.162491   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.162536   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.162837   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.163017   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.163028   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.163030   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.163045   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.163254   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.163265   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.163362   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.163417   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.163864   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.163899   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.164101   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.164154   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:13:33.164356   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:13:33.166460   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:13:33.166673   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:13:33.168464   73256 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:13:33.168631   73256 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0930 21:13:33.169822   73256 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:13:33.169840   73256 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 21:13:33.169857   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:13:33.169937   73256 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 21:13:33.169947   73256 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 21:13:33.169963   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:13:33.174613   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.174653   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.175236   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:13:33.175265   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.175372   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:13:33.175405   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.175667   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:13:33.176048   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:13:33.176051   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:13:33.176299   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:13:33.176299   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:13:33.176476   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:13:33.176684   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:13:33.176685   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:13:33.180520   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43015
	I0930 21:13:33.180968   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.181564   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.181588   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.181938   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.182136   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:13:33.183803   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:13:33.184001   73256 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 21:13:33.184017   73256 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 21:13:33.184035   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:13:33.186565   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.186964   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:13:33.186996   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.187311   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:13:33.187481   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:13:33.187797   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:13:33.187937   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:13:33.337289   73256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:13:33.360186   73256 node_ready.go:35] waiting up to 6m0s for node "embed-certs-256103" to be "Ready" ...
	I0930 21:13:33.372799   73256 node_ready.go:49] node "embed-certs-256103" has status "Ready":"True"
	I0930 21:13:33.372828   73256 node_ready.go:38] duration metric: took 12.601736ms for node "embed-certs-256103" to be "Ready" ...
	I0930 21:13:33.372837   73256 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:13:33.379694   73256 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:33.462144   73256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:13:33.500072   73256 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 21:13:33.500102   73256 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0930 21:13:33.524789   73256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 21:13:33.548931   73256 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 21:13:33.548955   73256 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 21:13:33.604655   73256 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:13:33.604682   73256 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 21:13:33.648687   73256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:13:34.533493   73256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.008666954s)
	I0930 21:13:34.533555   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.533566   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.533856   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.533870   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.533884   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.533892   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.533900   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.534108   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.534126   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.534149   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.535651   73256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.073475648s)
	I0930 21:13:34.535695   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.535706   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.535926   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.536001   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.536014   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.536030   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.535981   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.537450   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.537470   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.537480   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.564363   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.564394   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.564715   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.564739   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.968266   73256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.319532564s)
	I0930 21:13:34.968330   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.968350   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.968642   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.968665   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.968674   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.968673   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.968681   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.968944   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.968969   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.968973   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.968979   73256 addons.go:475] Verifying addon metrics-server=true in "embed-certs-256103"
	I0930 21:13:34.970656   73256 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0930 21:13:34.971966   73256 addons.go:510] duration metric: took 1.853709741s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
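(Editor's note) The metrics-server pod reported in this log stays Pending after the addon apply above. A hedged sketch of checking the addon by hand, with the context and pod name taken from this log (the k8s-app=metrics-server label is assumed from the deployment naming, not shown here):

    # Hedged sketch: inspect the metrics-server addon enabled above.
    kubectl --context embed-certs-256103 -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl --context embed-certs-256103 -n kube-system describe pod metrics-server-6867b74b74-5mhkh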
	I0930 21:13:35.387687   73256 pod_ready.go:103] pod "etcd-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:13:37.388374   73256 pod_ready.go:103] pod "etcd-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:13:39.886425   73256 pod_ready.go:103] pod "etcd-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:13:41.885713   73256 pod_ready.go:93] pod "etcd-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.885737   73256 pod_ready.go:82] duration metric: took 8.506004979s for pod "etcd-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.885746   73256 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.891032   73256 pod_ready.go:93] pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.891052   73256 pod_ready.go:82] duration metric: took 5.300379ms for pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.891061   73256 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.895332   73256 pod_ready.go:93] pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.895349   73256 pod_ready.go:82] duration metric: took 4.282199ms for pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.895357   73256 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-glbsg" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.899518   73256 pod_ready.go:93] pod "kube-proxy-glbsg" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.899556   73256 pod_ready.go:82] duration metric: took 4.191815ms for pod "kube-proxy-glbsg" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.899567   73256 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.904184   73256 pod_ready.go:93] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.904203   73256 pod_ready.go:82] duration metric: took 4.628533ms for pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.904209   73256 pod_ready.go:39] duration metric: took 8.531361398s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:13:41.904221   73256 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:13:41.904262   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:13:41.919570   73256 api_server.go:72] duration metric: took 8.801387692s to wait for apiserver process to appear ...
	I0930 21:13:41.919591   73256 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:13:41.919607   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:13:41.923810   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 200:
	ok
	I0930 21:13:41.924633   73256 api_server.go:141] control plane version: v1.31.1
	I0930 21:13:41.924651   73256 api_server.go:131] duration metric: took 5.054857ms to wait for apiserver health ...
	I0930 21:13:41.924659   73256 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:13:42.086431   73256 system_pods.go:59] 9 kube-system pods found
	I0930 21:13:42.086468   73256 system_pods.go:61] "coredns-7c65d6cfc9-gt5tt" [165faaf0-866c-4097-9bdb-ed58fe8d7395] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:13:42.086480   73256 system_pods.go:61] "coredns-7c65d6cfc9-sgsbn" [c97fdb50-c6a0-4ef8-8c01-ea45ed18b72a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:13:42.086488   73256 system_pods.go:61] "etcd-embed-certs-256103" [6aac0706-7dbd-4655-b261-68877299d81a] Running
	I0930 21:13:42.086494   73256 system_pods.go:61] "kube-apiserver-embed-certs-256103" [6c8e3157-ec97-4a85-8947-ca7541c19b1c] Running
	I0930 21:13:42.086500   73256 system_pods.go:61] "kube-controller-manager-embed-certs-256103" [1e3f76d1-d343-4127-aad9-8a5a8e589a43] Running
	I0930 21:13:42.086505   73256 system_pods.go:61] "kube-proxy-glbsg" [f68e378f-ce0f-4603-bd8e-93334f04f7a7] Running
	I0930 21:13:42.086510   73256 system_pods.go:61] "kube-scheduler-embed-certs-256103" [29f55c6f-9603-4cd2-a798-0ff2362b7607] Running
	I0930 21:13:42.086518   73256 system_pods.go:61] "metrics-server-6867b74b74-5mhkh" [470424ec-bb66-4d62-904d-0d4ad93fa5bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:13:42.086525   73256 system_pods.go:61] "storage-provisioner" [a07a5a12-7420-4b57-b79d-982f4bb48232] Running
	I0930 21:13:42.086538   73256 system_pods.go:74] duration metric: took 161.870121ms to wait for pod list to return data ...
	I0930 21:13:42.086559   73256 default_sa.go:34] waiting for default service account to be created ...
	I0930 21:13:42.284282   73256 default_sa.go:45] found service account: "default"
	I0930 21:13:42.284307   73256 default_sa.go:55] duration metric: took 197.73827ms for default service account to be created ...
	I0930 21:13:42.284316   73256 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 21:13:42.486445   73256 system_pods.go:86] 9 kube-system pods found
	I0930 21:13:42.486478   73256 system_pods.go:89] "coredns-7c65d6cfc9-gt5tt" [165faaf0-866c-4097-9bdb-ed58fe8d7395] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:13:42.486489   73256 system_pods.go:89] "coredns-7c65d6cfc9-sgsbn" [c97fdb50-c6a0-4ef8-8c01-ea45ed18b72a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:13:42.486497   73256 system_pods.go:89] "etcd-embed-certs-256103" [6aac0706-7dbd-4655-b261-68877299d81a] Running
	I0930 21:13:42.486503   73256 system_pods.go:89] "kube-apiserver-embed-certs-256103" [6c8e3157-ec97-4a85-8947-ca7541c19b1c] Running
	I0930 21:13:42.486509   73256 system_pods.go:89] "kube-controller-manager-embed-certs-256103" [1e3f76d1-d343-4127-aad9-8a5a8e589a43] Running
	I0930 21:13:42.486513   73256 system_pods.go:89] "kube-proxy-glbsg" [f68e378f-ce0f-4603-bd8e-93334f04f7a7] Running
	I0930 21:13:42.486518   73256 system_pods.go:89] "kube-scheduler-embed-certs-256103" [29f55c6f-9603-4cd2-a798-0ff2362b7607] Running
	I0930 21:13:42.486526   73256 system_pods.go:89] "metrics-server-6867b74b74-5mhkh" [470424ec-bb66-4d62-904d-0d4ad93fa5bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:13:42.486533   73256 system_pods.go:89] "storage-provisioner" [a07a5a12-7420-4b57-b79d-982f4bb48232] Running
	I0930 21:13:42.486542   73256 system_pods.go:126] duration metric: took 202.220435ms to wait for k8s-apps to be running ...
	I0930 21:13:42.486552   73256 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 21:13:42.486601   73256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:13:42.501286   73256 system_svc.go:56] duration metric: took 14.699273ms WaitForService to wait for kubelet
	I0930 21:13:42.501315   73256 kubeadm.go:582] duration metric: took 9.38313627s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:13:42.501332   73256 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:13:42.685282   73256 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:13:42.685314   73256 node_conditions.go:123] node cpu capacity is 2
	I0930 21:13:42.685326   73256 node_conditions.go:105] duration metric: took 183.989963ms to run NodePressure ...
	I0930 21:13:42.685346   73256 start.go:241] waiting for startup goroutines ...
	I0930 21:13:42.685356   73256 start.go:246] waiting for cluster config update ...
	I0930 21:13:42.685371   73256 start.go:255] writing updated cluster config ...
	I0930 21:13:42.685664   73256 ssh_runner.go:195] Run: rm -f paused
	I0930 21:13:42.734778   73256 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 21:13:42.736658   73256 out.go:177] * Done! kubectl is now configured to use "embed-certs-256103" cluster and "default" namespace by default
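(Editor's note) The readiness loop that finished just above (pod_ready.go waiting on pods labeled component=etcd, component=kube-apiserver, component=kube-controller-manager, k8s-app=kube-proxy and component=kube-scheduler) can be approximated with kubectl wait. A hedged sketch against the same cluster (timeout value is arbitrary):

    # Hedged sketch: roughly what the pod_ready.go checks above verify, per control-plane component.
    for c in etcd kube-apiserver kube-controller-manager kube-scheduler; do
      kubectl --context embed-certs-256103 -n kube-system wait --for=condition=Ready \
        pod -l component=$c --timeout=120s
    done
    kubectl --context embed-certs-256103 -n kube-system wait --for=condition=Ready \
      pod -l k8s-app=kube-proxy --timeout=120s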
	I0930 21:13:38.355123   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:38.355330   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:14:18.357098   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:14:18.357396   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:14:18.357419   73900 kubeadm.go:310] 
	I0930 21:14:18.357473   73900 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0930 21:14:18.357541   73900 kubeadm.go:310] 		timed out waiting for the condition
	I0930 21:14:18.357554   73900 kubeadm.go:310] 
	I0930 21:14:18.357609   73900 kubeadm.go:310] 	This error is likely caused by:
	I0930 21:14:18.357659   73900 kubeadm.go:310] 		- The kubelet is not running
	I0930 21:14:18.357801   73900 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0930 21:14:18.357817   73900 kubeadm.go:310] 
	I0930 21:14:18.357964   73900 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0930 21:14:18.357996   73900 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0930 21:14:18.358028   73900 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0930 21:14:18.358039   73900 kubeadm.go:310] 
	I0930 21:14:18.358174   73900 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0930 21:14:18.358318   73900 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0930 21:14:18.358331   73900 kubeadm.go:310] 
	I0930 21:14:18.358510   73900 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0930 21:14:18.358646   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0930 21:14:18.358764   73900 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0930 21:14:18.358866   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0930 21:14:18.358882   73900 kubeadm.go:310] 
	I0930 21:14:18.359454   73900 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 21:14:18.359595   73900 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0930 21:14:18.359681   73900 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0930 21:14:18.359797   73900 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0930 21:14:18.359841   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0930 21:14:18.820244   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:14:18.834938   73900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:14:18.844779   73900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:14:18.844803   73900 kubeadm.go:157] found existing configuration files:
	
	I0930 21:14:18.844856   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:14:18.853738   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:14:18.853811   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:14:18.863366   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:14:18.872108   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:14:18.872164   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:14:18.881818   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:14:18.890916   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:14:18.890969   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:14:18.900075   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:14:18.908449   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:14:18.908520   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:14:18.917163   73900 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 21:14:18.983181   73900 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0930 21:14:18.983233   73900 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 21:14:19.121356   73900 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 21:14:19.121545   73900 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 21:14:19.121674   73900 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0930 21:14:19.306639   73900 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 21:14:19.309593   73900 out.go:235]   - Generating certificates and keys ...
	I0930 21:14:19.309683   73900 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 21:14:19.309748   73900 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 21:14:19.309870   73900 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 21:14:19.309957   73900 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 21:14:19.310040   73900 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 21:14:19.310119   73900 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 21:14:19.310209   73900 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 21:14:19.310292   73900 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 21:14:19.310404   73900 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 21:14:19.310511   73900 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 21:14:19.310567   73900 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 21:14:19.310654   73900 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 21:14:19.453872   73900 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 21:14:19.621232   73900 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 21:14:19.797694   73900 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 21:14:19.886897   73900 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 21:14:19.909016   73900 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 21:14:19.910536   73900 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 21:14:19.910617   73900 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 21:14:20.052878   73900 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 21:14:20.054739   73900 out.go:235]   - Booting up control plane ...
	I0930 21:14:20.054881   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 21:14:20.068419   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 21:14:20.068512   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 21:14:20.068697   73900 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 21:14:20.072015   73900 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0930 21:15:00.073988   73900 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0930 21:15:00.074795   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:00.075068   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:15:05.075810   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:05.076061   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:15:15.076695   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:15.076928   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:15:35.077652   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:35.077862   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:16:15.076816   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:16:15.077063   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:16:15.077082   73900 kubeadm.go:310] 
	I0930 21:16:15.077136   73900 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0930 21:16:15.077188   73900 kubeadm.go:310] 		timed out waiting for the condition
	I0930 21:16:15.077198   73900 kubeadm.go:310] 
	I0930 21:16:15.077246   73900 kubeadm.go:310] 	This error is likely caused by:
	I0930 21:16:15.077298   73900 kubeadm.go:310] 		- The kubelet is not running
	I0930 21:16:15.077425   73900 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0930 21:16:15.077442   73900 kubeadm.go:310] 
	I0930 21:16:15.077605   73900 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0930 21:16:15.077651   73900 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0930 21:16:15.077710   73900 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0930 21:16:15.077718   73900 kubeadm.go:310] 
	I0930 21:16:15.077851   73900 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0930 21:16:15.077997   73900 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0930 21:16:15.078013   73900 kubeadm.go:310] 
	I0930 21:16:15.078143   73900 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0930 21:16:15.078229   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0930 21:16:15.078309   73900 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0930 21:16:15.078419   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0930 21:16:15.078431   73900 kubeadm.go:310] 
	I0930 21:16:15.079235   73900 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 21:16:15.079365   73900 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0930 21:16:15.079442   73900 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0930 21:16:15.079572   73900 kubeadm.go:394] duration metric: took 7m56.529269567s to StartCluster
	I0930 21:16:15.079639   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:16:15.079713   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:16:15.122057   73900 cri.go:89] found id: ""
	I0930 21:16:15.122086   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.122098   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:16:15.122105   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:16:15.122166   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:16:15.156244   73900 cri.go:89] found id: ""
	I0930 21:16:15.156278   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.156289   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:16:15.156297   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:16:15.156357   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:16:15.188952   73900 cri.go:89] found id: ""
	I0930 21:16:15.188977   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.188989   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:16:15.188996   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:16:15.189058   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:16:15.219400   73900 cri.go:89] found id: ""
	I0930 21:16:15.219427   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.219435   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:16:15.219441   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:16:15.219501   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:16:15.252049   73900 cri.go:89] found id: ""
	I0930 21:16:15.252078   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.252086   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:16:15.252093   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:16:15.252150   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:16:15.286560   73900 cri.go:89] found id: ""
	I0930 21:16:15.286594   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.286605   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:16:15.286614   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:16:15.286679   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:16:15.319140   73900 cri.go:89] found id: ""
	I0930 21:16:15.319178   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.319187   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:16:15.319192   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:16:15.319245   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:16:15.351299   73900 cri.go:89] found id: ""
	I0930 21:16:15.351322   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.351330   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:16:15.351339   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:16:15.351350   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:16:15.402837   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:16:15.402882   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:16:15.417111   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:16:15.417140   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:16:15.492593   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:16:15.492614   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:16:15.492627   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:16:15.621646   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:16:15.621681   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0930 21:16:15.660480   73900 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0930 21:16:15.660528   73900 out.go:270] * 
	W0930 21:16:15.660580   73900 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0930 21:16:15.660595   73900 out.go:270] * 
	W0930 21:16:15.661387   73900 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 21:16:15.665510   73900 out.go:201] 
	W0930 21:16:15.667332   73900 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0930 21:16:15.667373   73900 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0930 21:16:15.667390   73900 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0930 21:16:15.668812   73900 out.go:201] 
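	(For context: the suggestion printed above maps to a retry along the following lines. This is a sketch only, grounded in the flags and commands already shown in this log; the profile name placeholder, the kvm2 driver, and the crio runtime flag are assumptions drawn from this job's configuration rather than copied verbatim from the output.)

		# retry the start with the cgroup driver named in the suggestion (flag taken from the log above)
		minikube start -p <profile> --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd
		# then inspect the kubelet and any failed control-plane containers, per the kubeadm troubleshooting hint
		journalctl -xeu kubelet
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	(The crictl invocation is the one the kubeadm output itself recommends; only the minikube start line adds the suggested --extra-config setting.)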
	
	
	==> CRI-O <==
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.298315120Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731289298292403,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a9cce79-7245-4f46-a5b5-a50e9c31c20f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.298923063Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=455969a3-5756-4971-ad09-aa19f951a79d name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.298992912Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=455969a3-5756-4971-ad09-aa19f951a79d name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.299188199Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd,PodSandboxId:98b3fb072cb5d251782ad741ebbe39fd8cad18d6c7df8800b4a19bb003bdde07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727730514306129258,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32053345-1ff9-45b1-aa70-e746926b305d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ee5915b6ae16b96cd663ee230ec2be38c102dc2fa2dc69df5ab339dc8491be,PodSandboxId:222548d08e8ca6dedc5cefa4101645feb196c7513bf31036f3b2ad6fa8a480ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727730494782013233,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34406fdf-7b58-4457-ae9f-712885f7dd29,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49,PodSandboxId:e1c9eb6432e4d71ab5da7fbf52fbc0ae5e06c3c3e846e61d3afdf121e8dce90c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730491188667347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hdjjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5672cd58-4d3f-409e-b279-f4027fe09aea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8,PodSandboxId:42211a70b47f66293db0d93fab4943057f14074d5ef5295ac87fc17e7920c604,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727730483519285586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kwp22,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e5295f-3
aaa-4222-a61a-942354f79f9b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342,PodSandboxId:98b3fb072cb5d251782ad741ebbe39fd8cad18d6c7df8800b4a19bb003bdde07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727730483505090273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32053345-1ff9-45b1-aa70
-e746926b305d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711,PodSandboxId:f79dc667d99fdb19116453c544fd2237d1d54bbcaab691521d0e060e788947f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727730478833366464,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fece16652c16bcf190a3661de3d4efe0,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4,PodSandboxId:d49bb2fcbc5f1ed5d4230afdcfb01762dfbd7f34d75b5250e1fe6ef46d571e06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727730478747062794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5e89d6165ff01d08a4db0c2b1d86676,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140,PodSandboxId:e10f5499f6f3cc25491e1828871ddde819bb03b833cc49805b280430b8f24e8a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727730478773216547,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a4ca8c9198bea8670b6f35051fdd299,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8,PodSandboxId:e2b3fcdb417f9947d8b24abe8415a54815bbb4ec75b831eb72a302c1eef787b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727730478768576247,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180e5819899b337683f2e15f3bad06
9a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=455969a3-5756-4971-ad09-aa19f951a79d name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.336573683Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=227478ad-2162-40f7-bc7d-9e34e1331dc0 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.336829446Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=227478ad-2162-40f7-bc7d-9e34e1331dc0 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.337884678Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5695fe19-b843-498c-af97-c0a27d6023ef name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.338275070Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731289338251407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5695fe19-b843-498c-af97-c0a27d6023ef name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.339252121Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04f183d2-4ab4-4c46-a07f-d22b647fc830 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.339313472Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04f183d2-4ab4-4c46-a07f-d22b647fc830 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.339525166Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd,PodSandboxId:98b3fb072cb5d251782ad741ebbe39fd8cad18d6c7df8800b4a19bb003bdde07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727730514306129258,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32053345-1ff9-45b1-aa70-e746926b305d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ee5915b6ae16b96cd663ee230ec2be38c102dc2fa2dc69df5ab339dc8491be,PodSandboxId:222548d08e8ca6dedc5cefa4101645feb196c7513bf31036f3b2ad6fa8a480ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727730494782013233,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34406fdf-7b58-4457-ae9f-712885f7dd29,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49,PodSandboxId:e1c9eb6432e4d71ab5da7fbf52fbc0ae5e06c3c3e846e61d3afdf121e8dce90c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730491188667347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hdjjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5672cd58-4d3f-409e-b279-f4027fe09aea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8,PodSandboxId:42211a70b47f66293db0d93fab4943057f14074d5ef5295ac87fc17e7920c604,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727730483519285586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kwp22,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e5295f-3
aaa-4222-a61a-942354f79f9b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342,PodSandboxId:98b3fb072cb5d251782ad741ebbe39fd8cad18d6c7df8800b4a19bb003bdde07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727730483505090273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32053345-1ff9-45b1-aa70
-e746926b305d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711,PodSandboxId:f79dc667d99fdb19116453c544fd2237d1d54bbcaab691521d0e060e788947f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727730478833366464,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fece16652c16bcf190a3661de3d4efe0,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4,PodSandboxId:d49bb2fcbc5f1ed5d4230afdcfb01762dfbd7f34d75b5250e1fe6ef46d571e06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727730478747062794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5e89d6165ff01d08a4db0c2b1d86676,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140,PodSandboxId:e10f5499f6f3cc25491e1828871ddde819bb03b833cc49805b280430b8f24e8a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727730478773216547,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a4ca8c9198bea8670b6f35051fdd299,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8,PodSandboxId:e2b3fcdb417f9947d8b24abe8415a54815bbb4ec75b831eb72a302c1eef787b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727730478768576247,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180e5819899b337683f2e15f3bad06
9a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04f183d2-4ab4-4c46-a07f-d22b647fc830 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.381647147Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1266d1a4-47a4-446e-a69d-9912f1b79c69 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.381736069Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1266d1a4-47a4-446e-a69d-9912f1b79c69 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.382915041Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=982302b9-a452-4365-b02d-3b86104df002 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.383356736Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731289383330668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=982302b9-a452-4365-b02d-3b86104df002 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.384045671Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04c60364-f133-48f5-b838-4c0761e6e80f name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.384099551Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04c60364-f133-48f5-b838-4c0761e6e80f name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.384301956Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd,PodSandboxId:98b3fb072cb5d251782ad741ebbe39fd8cad18d6c7df8800b4a19bb003bdde07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727730514306129258,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32053345-1ff9-45b1-aa70-e746926b305d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ee5915b6ae16b96cd663ee230ec2be38c102dc2fa2dc69df5ab339dc8491be,PodSandboxId:222548d08e8ca6dedc5cefa4101645feb196c7513bf31036f3b2ad6fa8a480ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727730494782013233,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34406fdf-7b58-4457-ae9f-712885f7dd29,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49,PodSandboxId:e1c9eb6432e4d71ab5da7fbf52fbc0ae5e06c3c3e846e61d3afdf121e8dce90c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730491188667347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hdjjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5672cd58-4d3f-409e-b279-f4027fe09aea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8,PodSandboxId:42211a70b47f66293db0d93fab4943057f14074d5ef5295ac87fc17e7920c604,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727730483519285586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kwp22,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e5295f-3
aaa-4222-a61a-942354f79f9b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342,PodSandboxId:98b3fb072cb5d251782ad741ebbe39fd8cad18d6c7df8800b4a19bb003bdde07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727730483505090273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32053345-1ff9-45b1-aa70
-e746926b305d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711,PodSandboxId:f79dc667d99fdb19116453c544fd2237d1d54bbcaab691521d0e060e788947f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727730478833366464,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fece16652c16bcf190a3661de3d4efe0,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4,PodSandboxId:d49bb2fcbc5f1ed5d4230afdcfb01762dfbd7f34d75b5250e1fe6ef46d571e06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727730478747062794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5e89d6165ff01d08a4db0c2b1d86676,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140,PodSandboxId:e10f5499f6f3cc25491e1828871ddde819bb03b833cc49805b280430b8f24e8a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727730478773216547,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a4ca8c9198bea8670b6f35051fdd299,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8,PodSandboxId:e2b3fcdb417f9947d8b24abe8415a54815bbb4ec75b831eb72a302c1eef787b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727730478768576247,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180e5819899b337683f2e15f3bad06
9a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04c60364-f133-48f5-b838-4c0761e6e80f name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.417840248Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c9dd62e7-56cc-4058-a15a-1f821eeef122 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.417916071Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c9dd62e7-56cc-4058-a15a-1f821eeef122 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.419114799Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7d4aee31-7d29-453b-bed1-2ace9dd569aa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.419547087Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731289419524490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d4aee31-7d29-453b-bed1-2ace9dd569aa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.420170193Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb977e9b-6f08-42bc-96fe-29bc211e59b3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.420472001Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb977e9b-6f08-42bc-96fe-29bc211e59b3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:21:29 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:21:29.421924758Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd,PodSandboxId:98b3fb072cb5d251782ad741ebbe39fd8cad18d6c7df8800b4a19bb003bdde07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727730514306129258,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32053345-1ff9-45b1-aa70-e746926b305d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ee5915b6ae16b96cd663ee230ec2be38c102dc2fa2dc69df5ab339dc8491be,PodSandboxId:222548d08e8ca6dedc5cefa4101645feb196c7513bf31036f3b2ad6fa8a480ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727730494782013233,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34406fdf-7b58-4457-ae9f-712885f7dd29,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49,PodSandboxId:e1c9eb6432e4d71ab5da7fbf52fbc0ae5e06c3c3e846e61d3afdf121e8dce90c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730491188667347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hdjjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5672cd58-4d3f-409e-b279-f4027fe09aea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8,PodSandboxId:42211a70b47f66293db0d93fab4943057f14074d5ef5295ac87fc17e7920c604,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727730483519285586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kwp22,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e5295f-3
aaa-4222-a61a-942354f79f9b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342,PodSandboxId:98b3fb072cb5d251782ad741ebbe39fd8cad18d6c7df8800b4a19bb003bdde07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727730483505090273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32053345-1ff9-45b1-aa70
-e746926b305d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711,PodSandboxId:f79dc667d99fdb19116453c544fd2237d1d54bbcaab691521d0e060e788947f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727730478833366464,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fece16652c16bcf190a3661de3d4efe0,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4,PodSandboxId:d49bb2fcbc5f1ed5d4230afdcfb01762dfbd7f34d75b5250e1fe6ef46d571e06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727730478747062794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5e89d6165ff01d08a4db0c2b1d86676,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140,PodSandboxId:e10f5499f6f3cc25491e1828871ddde819bb03b833cc49805b280430b8f24e8a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727730478773216547,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a4ca8c9198bea8670b6f35051fdd299,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8,PodSandboxId:e2b3fcdb417f9947d8b24abe8415a54815bbb4ec75b831eb72a302c1eef787b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727730478768576247,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180e5819899b337683f2e15f3bad06
9a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb977e9b-6f08-42bc-96fe-29bc211e59b3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3f81706851d1c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   98b3fb072cb5d       storage-provisioner
	a1ee5915b6ae1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   222548d08e8ca       busybox
	ec71e052062dc       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   e1c9eb6432e4d       coredns-7c65d6cfc9-hdjjq
	5e4ebb7ceb7e6       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   42211a70b47f6       kube-proxy-kwp22
	1822eaafdd4d9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   98b3fb072cb5d       storage-provisioner
	7e53b1ee3c16b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   f79dc667d99fd       etcd-default-k8s-diff-port-291511
	f197afcf3b28b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   e10f5499f6f3c       kube-apiserver-default-k8s-diff-port-291511
	d1119782e608c       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   e2b3fcdb417f9       kube-controller-manager-default-k8s-diff-port-291511
	0a84556ba1073       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   d49bb2fcbc5f1       kube-scheduler-default-k8s-diff-port-291511
	
	
	==> coredns [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50516 - 34933 "HINFO IN 5976675863271297143.6033242033858797482. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013316297s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-291511
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-291511
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=default-k8s-diff-port-291511
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T21_00_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:59:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-291511
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 21:21:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 21:18:44 +0000   Mon, 30 Sep 2024 20:59:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 21:18:44 +0000   Mon, 30 Sep 2024 20:59:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 21:18:44 +0000   Mon, 30 Sep 2024 20:59:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 21:18:44 +0000   Mon, 30 Sep 2024 21:08:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.2
	  Hostname:    default-k8s-diff-port-291511
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d5c8e195f4341288565205b7d02a6d2
	  System UUID:                1d5c8e19-5f43-4128-8565-205b7d02a6d2
	  Boot ID:                    e07d2f31-3d59-4b81-bb95-03dc31c61a54
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7c65d6cfc9-hdjjq                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-291511                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-291511             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-291511    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-kwp22                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-291511             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-txb2j                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-291511 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-291511 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-291511 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-291511 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-291511 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-291511 status is now: NodeHasSufficientPID
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-291511 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-291511 event: Registered Node default-k8s-diff-port-291511 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-291511 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-291511 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-291511 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-291511 event: Registered Node default-k8s-diff-port-291511 in Controller
	
	
	==> dmesg <==
	[Sep30 21:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051482] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039217] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.841694] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.954501] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.579250] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.147945] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.064668] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068590] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.205113] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.150720] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.316837] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[  +4.363768] systemd-fstab-generator[801]: Ignoring "noauto" option for root device
	[  +0.056710] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.283674] systemd-fstab-generator[921]: Ignoring "noauto" option for root device
	[Sep30 21:08] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.891051] systemd-fstab-generator[1541]: Ignoring "noauto" option for root device
	[  +3.789527] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.736069] kauditd_printk_skb: 44 callbacks suppressed
	
	
	==> etcd [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711] <==
	{"level":"info","ts":"2024-09-30T21:08:00.802943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e41ba90b9a1b23a8 became candidate at term 3"}
	{"level":"info","ts":"2024-09-30T21:08:00.802951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e41ba90b9a1b23a8 received MsgVoteResp from e41ba90b9a1b23a8 at term 3"}
	{"level":"info","ts":"2024-09-30T21:08:00.802963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e41ba90b9a1b23a8 became leader at term 3"}
	{"level":"info","ts":"2024-09-30T21:08:00.802976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e41ba90b9a1b23a8 elected leader e41ba90b9a1b23a8 at term 3"}
	{"level":"info","ts":"2024-09-30T21:08:00.806585Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T21:08:00.806929Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T21:08:00.807319Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T21:08:00.807390Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-30T21:08:00.806566Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e41ba90b9a1b23a8","local-member-attributes":"{Name:default-k8s-diff-port-291511 ClientURLs:[https://192.168.50.2:2379]}","request-path":"/0/members/e41ba90b9a1b23a8/attributes","cluster-id":"7ea00afa4db9962c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T21:08:00.808461Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T21:08:00.808501Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T21:08:00.809961Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.2:2379"}
	{"level":"info","ts":"2024-09-30T21:08:00.810293Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-30T21:08:17.190462Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.474069ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2569464411440859047 > lease_revoke:<id:23a89244bb8c9d2f>","response":"size:28"}
	{"level":"warn","ts":"2024-09-30T21:08:17.339735Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.150784ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2569464411440859048 > lease_revoke:<id:23a89244bb8c9da6>","response":"size:28"}
	{"level":"info","ts":"2024-09-30T21:08:17.339821Z","caller":"traceutil/trace.go:171","msg":"trace[909013578] linearizableReadLoop","detail":"{readStateIndex:683; appliedIndex:681; }","duration":"448.031327ms","start":"2024-09-30T21:08:16.891766Z","end":"2024-09-30T21:08:17.339797Z","steps":["trace[909013578] 'read index received'  (duration: 33.372625ms)","trace[909013578] 'applied index is now lower than readState.Index'  (duration: 414.657779ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-30T21:08:17.339998Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"448.223216ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-291511\" ","response":"range_response_count:1 size:5535"}
	{"level":"info","ts":"2024-09-30T21:08:17.340106Z","caller":"traceutil/trace.go:171","msg":"trace[2012609444] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-291511; range_end:; response_count:1; response_revision:640; }","duration":"448.281864ms","start":"2024-09-30T21:08:16.891745Z","end":"2024-09-30T21:08:17.340027Z","steps":["trace[2012609444] 'agreement among raft nodes before linearized reading'  (duration: 448.148639ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T21:08:17.340147Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T21:08:16.891703Z","time spent":"448.43399ms","remote":"127.0.0.1:43798","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5558,"request content":"key:\"/registry/minions/default-k8s-diff-port-291511\" "}
	{"level":"warn","ts":"2024-09-30T21:08:17.340303Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.240405ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T21:08:17.340334Z","caller":"traceutil/trace.go:171","msg":"trace[835079800] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:640; }","duration":"279.273139ms","start":"2024-09-30T21:08:17.061056Z","end":"2024-09-30T21:08:17.340329Z","steps":["trace[835079800] 'agreement among raft nodes before linearized reading'  (duration: 279.228434ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T21:08:39.057544Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.491727ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2569464411440859234 > lease_revoke:<id:23a89244c2f0c7e0>","response":"size:28"}
	{"level":"info","ts":"2024-09-30T21:18:00.843699Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":883}
	{"level":"info","ts":"2024-09-30T21:18:00.855489Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":883,"took":"11.403889ms","hash":2973093467,"current-db-size-bytes":2879488,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2879488,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2024-09-30T21:18:00.855550Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2973093467,"revision":883,"compact-revision":-1}
	
	
	==> kernel <==
	 21:21:29 up 13 min,  0 users,  load average: 0.30, 0.10, 0.05
	Linux default-k8s-diff-port-291511 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140] <==
	W0930 21:18:03.294704       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:18:03.294823       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0930 21:18:03.295684       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0930 21:18:03.296789       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0930 21:19:03.296506       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:19:03.296557       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0930 21:19:03.297671       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0930 21:19:03.297704       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:19:03.297857       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0930 21:19:03.299084       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0930 21:21:03.299008       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:21:03.299125       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0930 21:21:03.300177       1 handler_proxy.go:99] no RequestInfo found in the context
	I0930 21:21:03.300229       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0930 21:21:03.300381       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0930 21:21:03.302449       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8] <==
	E0930 21:16:05.884707       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:16:06.331732       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:16:35.890828       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:16:36.338914       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:17:05.896952       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:17:06.346623       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:17:35.904161       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:17:36.355045       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:18:05.911057       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:18:06.362811       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:18:35.917972       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:18:36.370430       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0930 21:18:44.433099       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-291511"
	E0930 21:19:05.923800       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:19:06.378694       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0930 21:19:12.120054       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="217.348µs"
	I0930 21:19:24.121230       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="203.599µs"
	E0930 21:19:35.931760       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:19:36.385376       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:20:05.938417       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:20:06.393368       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:20:35.944299       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:20:36.401411       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:21:05.950898       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:21:06.408469       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 21:08:03.791140       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 21:08:03.808068       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.2"]
	E0930 21:08:03.808182       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 21:08:03.855007       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 21:08:03.855054       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 21:08:03.855080       1 server_linux.go:169] "Using iptables Proxier"
	I0930 21:08:03.864236       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 21:08:03.864483       1 server.go:483] "Version info" version="v1.31.1"
	I0930 21:08:03.864509       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 21:08:03.866358       1 config.go:199] "Starting service config controller"
	I0930 21:08:03.866385       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 21:08:03.866408       1 config.go:105] "Starting endpoint slice config controller"
	I0930 21:08:03.866412       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 21:08:03.866828       1 config.go:328] "Starting node config controller"
	I0930 21:08:03.866835       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 21:08:03.967048       1 shared_informer.go:320] Caches are synced for node config
	I0930 21:08:03.967185       1 shared_informer.go:320] Caches are synced for service config
	I0930 21:08:03.967196       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4] <==
	I0930 21:07:59.719109       1 serving.go:386] Generated self-signed cert in-memory
	W0930 21:08:02.203676       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0930 21:08:02.203821       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0930 21:08:02.204392       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0930 21:08:02.204493       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0930 21:08:02.245712       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0930 21:08:02.245852       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 21:08:02.247843       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0930 21:08:02.248122       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0930 21:08:02.248204       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0930 21:08:02.248303       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0930 21:08:02.349128       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 21:20:18 default-k8s-diff-port-291511 kubelet[928]: E0930 21:20:18.261248     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731218260463210,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:20:26 default-k8s-diff-port-291511 kubelet[928]: E0930 21:20:26.101771     928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-txb2j" podUID="6f0ec8d2-5528-4f70-807c-42cbabae23bb"
	Sep 30 21:20:28 default-k8s-diff-port-291511 kubelet[928]: E0930 21:20:28.266342     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731228265949367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:20:28 default-k8s-diff-port-291511 kubelet[928]: E0930 21:20:28.266383     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731228265949367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:20:37 default-k8s-diff-port-291511 kubelet[928]: E0930 21:20:37.101409     928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-txb2j" podUID="6f0ec8d2-5528-4f70-807c-42cbabae23bb"
	Sep 30 21:20:38 default-k8s-diff-port-291511 kubelet[928]: E0930 21:20:38.267788     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731238267497795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:20:38 default-k8s-diff-port-291511 kubelet[928]: E0930 21:20:38.267824     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731238267497795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:20:48 default-k8s-diff-port-291511 kubelet[928]: E0930 21:20:48.102903     928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-txb2j" podUID="6f0ec8d2-5528-4f70-807c-42cbabae23bb"
	Sep 30 21:20:48 default-k8s-diff-port-291511 kubelet[928]: E0930 21:20:48.269698     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731248269395852,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:20:48 default-k8s-diff-port-291511 kubelet[928]: E0930 21:20:48.269764     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731248269395852,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:20:58 default-k8s-diff-port-291511 kubelet[928]: E0930 21:20:58.116519     928 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 21:20:58 default-k8s-diff-port-291511 kubelet[928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 21:20:58 default-k8s-diff-port-291511 kubelet[928]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 21:20:58 default-k8s-diff-port-291511 kubelet[928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 21:20:58 default-k8s-diff-port-291511 kubelet[928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 21:20:58 default-k8s-diff-port-291511 kubelet[928]: E0930 21:20:58.271202     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731258270965031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:20:58 default-k8s-diff-port-291511 kubelet[928]: E0930 21:20:58.271230     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731258270965031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:21:03 default-k8s-diff-port-291511 kubelet[928]: E0930 21:21:03.102395     928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-txb2j" podUID="6f0ec8d2-5528-4f70-807c-42cbabae23bb"
	Sep 30 21:21:08 default-k8s-diff-port-291511 kubelet[928]: E0930 21:21:08.273242     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731268272956244,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:21:08 default-k8s-diff-port-291511 kubelet[928]: E0930 21:21:08.273502     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731268272956244,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:21:18 default-k8s-diff-port-291511 kubelet[928]: E0930 21:21:18.103576     928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-txb2j" podUID="6f0ec8d2-5528-4f70-807c-42cbabae23bb"
	Sep 30 21:21:18 default-k8s-diff-port-291511 kubelet[928]: E0930 21:21:18.278682     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731278277964862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:21:18 default-k8s-diff-port-291511 kubelet[928]: E0930 21:21:18.278717     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731278277964862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:21:28 default-k8s-diff-port-291511 kubelet[928]: E0930 21:21:28.284203     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731288283390849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:21:28 default-k8s-diff-port-291511 kubelet[928]: E0930 21:21:28.284296     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731288283390849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342] <==
	I0930 21:08:03.609448       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0930 21:08:33.612813       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd] <==
	I0930 21:08:34.404003       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0930 21:08:34.412916       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0930 21:08:34.412979       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0930 21:08:51.810522       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0930 21:08:51.810832       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-291511_7484d4d4-6fb4-4e7f-b333-81f608b5f818!
	I0930 21:08:51.811532       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"205929d7-019f-4a3b-b8c3-1a0ccd9e6e0d", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-291511_7484d4d4-6fb4-4e7f-b333-81f608b5f818 became leader
	I0930 21:08:51.911877       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-291511_7484d4d4-6fb4-4e7f-b333-81f608b5f818!
	

                                                
                                                
-- /stdout --
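The kubelet and storage-provisioner sections above show the two recurring symptoms for this profile: metrics-server-6867b74b74-txb2j sits in ImagePullBackOff because the addon was enabled with the registry override fake.domain (see the Audit table later in this report), so fake.domain/registry.k8s.io/echoserver:1.4 can never be pulled, and the first storage-provisioner container exited after timing out against 10.96.0.1:443 before its replacement acquired the lease. A minimal, illustrative sketch of how one could confirm this by hand against the same context; these commands were not part of the recorded run:

	# Inspect the pod named in the kubelet back-off messages; its Events should
	# show the failing pull of fake.domain/registry.k8s.io/echoserver:1.4.
	kubectl --context default-k8s-diff-port-291511 -n kube-system describe pod metrics-server-6867b74b74-txb2j

	# The storage-provisioner timeout was against the kubernetes Service VIP,
	# which for the ServiceCIDR 10.96.0.0/12 used by this profile is 10.96.0.1.
	kubectl --context default-k8s-diff-port-291511 get svc kubernetes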
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-291511 -n default-k8s-diff-port-291511
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-291511 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-txb2j
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-291511 describe pod metrics-server-6867b74b74-txb2j
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-291511 describe pod metrics-server-6867b74b74-txb2j: exit status 1 (65.10492ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-txb2j" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-291511 describe pod metrics-server-6867b74b74-txb2j: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.23s)
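This test waits for a kubernetes-dashboard pod (label k8s-app=kubernetes-dashboard) to appear after the stop/start cycle; the dashboard addon enable was issued for this profile (Audit table later in this report, with no recorded End Time), but no matching pod showed up before the wait expired. A hedged sketch of the obvious follow-up checks; the commands below are illustrative and do not appear in the run:

	# Was the dashboard workload ever created for this profile?
	kubectl --context default-k8s-diff-port-291511 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-291511 -n kubernetes-dashboard get deploy,events

	# Addon state as minikube sees it for the same profile.
	minikube addons list -p default-k8s-diff-port-291511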

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0930 21:14:31.998131   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/auto-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:15:52.286746   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kindnet-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:15:55.061418   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/auto-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:15:55.311144   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-256103 -n embed-certs-256103
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-30 21:22:43.261328136 +0000 UTC m=+6283.966084265
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
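Same failure mode as the default-k8s-diff-port group above: the dashboard addon enable for embed-certs-256103 also shows no End Time in the Audit table below, and no pod carrying k8s-app=kubernetes-dashboard appeared after the restart. A short, illustrative sketch of where to look next for this profile; none of these commands are part of the recorded run:

	# Check whether the dashboard namespace and workload exist at all, and look
	# for scheduling or image-pull events that would explain the missing pod.
	kubectl --context embed-certs-256103 get ns kubernetes-dashboard
	kubectl --context embed-certs-256103 -n kubernetes-dashboard get deploy,pods,events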
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-256103 -n embed-certs-256103
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-256103 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-256103 logs -n 25: (2.160992326s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-207733 sudo                                 | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo                                 | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo                                 | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo find                            | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo crio                            | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-207733                                      | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-741890 | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | disable-driver-mounts-741890                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 21:00 UTC |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-256103            | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-997816             | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-997816                                   | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-291511  | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-621406        | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-256103                 | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC | 30 Sep 24 21:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-997816                  | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-997816                                   | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC | 30 Sep 24 21:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-291511       | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:12 UTC |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-621406                              | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:03 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-621406             | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-621406                              | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
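	For readability, the embed-certs-256103 rows above reassemble into the following single invocations (reconstructed from the Audit columns only, nothing beyond what the table records; the report invokes the binary as out/minikube-linux-amd64):
	
	# Restart of the profile after the stop, issued at 21:02 UTC and finished at 21:13 UTC.
	out/minikube-linux-amd64 start -p embed-certs-256103 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.1
	
	# Dashboard addon enable issued at 21:02 UTC; the table records no End Time for this row.
	out/minikube-linux-amd64 addons enable dashboard -p embed-certs-256103 --images=MetricsScraper=registry.k8s.io/echoserver:1.4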
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 21:03:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 21:03:42.750102   73900 out.go:345] Setting OutFile to fd 1 ...
	I0930 21:03:42.750367   73900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:03:42.750377   73900 out.go:358] Setting ErrFile to fd 2...
	I0930 21:03:42.750383   73900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:03:42.750578   73900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 21:03:42.751109   73900 out.go:352] Setting JSON to false
	I0930 21:03:42.752040   73900 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6366,"bootTime":1727723857,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 21:03:42.752140   73900 start.go:139] virtualization: kvm guest
	I0930 21:03:42.754146   73900 out.go:177] * [old-k8s-version-621406] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 21:03:42.755446   73900 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 21:03:42.755456   73900 notify.go:220] Checking for updates...
	I0930 21:03:42.758261   73900 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 21:03:42.759566   73900 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:03:42.760907   73900 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 21:03:42.762342   73900 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 21:03:42.763561   73900 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 21:03:42.765356   73900 config.go:182] Loaded profile config "old-k8s-version-621406": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0930 21:03:42.765773   73900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:03:42.765822   73900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:03:42.780605   73900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45071
	I0930 21:03:42.781022   73900 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:03:42.781550   73900 main.go:141] libmachine: Using API Version  1
	I0930 21:03:42.781583   73900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:03:42.781912   73900 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:03:42.782160   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:03:42.784603   73900 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0930 21:03:42.785760   73900 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 21:03:42.786115   73900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:03:42.786156   73900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:03:42.800937   73900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37359
	I0930 21:03:42.801409   73900 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:03:42.801882   73900 main.go:141] libmachine: Using API Version  1
	I0930 21:03:42.801905   73900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:03:42.802216   73900 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:03:42.802397   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:03:42.838423   73900 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 21:03:42.839832   73900 start.go:297] selected driver: kvm2
	I0930 21:03:42.839847   73900 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:03:42.839953   73900 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 21:03:42.840605   73900 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 21:03:42.840667   73900 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 21:03:42.856119   73900 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 21:03:42.856550   73900 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:03:42.856580   73900 cni.go:84] Creating CNI manager for ""
	I0930 21:03:42.856630   73900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:03:42.856665   73900 start.go:340] cluster config:
	{Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:03:42.856778   73900 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 21:03:42.858732   73900 out.go:177] * Starting "old-k8s-version-621406" primary control-plane node in "old-k8s-version-621406" cluster
	I0930 21:03:42.859876   73900 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 21:03:42.859912   73900 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0930 21:03:42.859929   73900 cache.go:56] Caching tarball of preloaded images
	I0930 21:03:42.860020   73900 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 21:03:42.860031   73900 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0930 21:03:42.860153   73900 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/config.json ...
	I0930 21:03:42.860340   73900 start.go:360] acquireMachinesLock for old-k8s-version-621406: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 21:03:44.619810   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:03:47.691872   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:03:53.771838   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:03:56.843848   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:02.923822   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:05.995871   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:12.075814   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:15.147854   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:21.227790   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:24.299842   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:30.379801   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:33.451787   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:39.531808   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:42.603838   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:48.683904   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:51.755939   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:57.835834   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:00.907789   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:06.987875   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:10.059892   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:16.139832   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:19.211908   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:25.291812   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:28.363915   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:34.443827   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:37.515928   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:43.595824   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:46.667934   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:52.747851   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:55.819883   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:01.899789   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:04.971946   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:11.051812   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:14.123833   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:20.203805   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:23.275875   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:29.355806   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:32.427931   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:38.507837   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:41.579909   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:47.659786   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:50.731827   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:56.811833   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:59.883878   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:07:05.963833   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:07:09.035828   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:07:12.040058   73375 start.go:364] duration metric: took 4m26.951572628s to acquireMachinesLock for "no-preload-997816"
	I0930 21:07:12.040115   73375 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:07:12.040126   73375 fix.go:54] fixHost starting: 
	I0930 21:07:12.040448   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:12.040485   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:12.057054   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37473
	I0930 21:07:12.057624   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:12.058143   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:12.058173   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:12.058523   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:12.058739   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:12.058873   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:12.060479   73375 fix.go:112] recreateIfNeeded on no-preload-997816: state=Stopped err=<nil>
	I0930 21:07:12.060499   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	W0930 21:07:12.060640   73375 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:07:12.062653   73375 out.go:177] * Restarting existing kvm2 VM for "no-preload-997816" ...
	I0930 21:07:12.037683   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:07:12.037732   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:07:12.038031   73256 buildroot.go:166] provisioning hostname "embed-certs-256103"
	I0930 21:07:12.038055   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:07:12.038234   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:07:12.039910   73256 machine.go:96] duration metric: took 4m37.42208497s to provisionDockerMachine
	I0930 21:07:12.039954   73256 fix.go:56] duration metric: took 4m37.444804798s for fixHost
	I0930 21:07:12.039962   73256 start.go:83] releasing machines lock for "embed-certs-256103", held for 4m37.444833727s
	W0930 21:07:12.039989   73256 start.go:714] error starting host: provision: host is not running
	W0930 21:07:12.040104   73256 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0930 21:07:12.040116   73256 start.go:729] Will try again in 5 seconds ...
	I0930 21:07:12.063941   73375 main.go:141] libmachine: (no-preload-997816) Calling .Start
	I0930 21:07:12.064167   73375 main.go:141] libmachine: (no-preload-997816) Ensuring networks are active...
	I0930 21:07:12.065080   73375 main.go:141] libmachine: (no-preload-997816) Ensuring network default is active
	I0930 21:07:12.065489   73375 main.go:141] libmachine: (no-preload-997816) Ensuring network mk-no-preload-997816 is active
	I0930 21:07:12.065993   73375 main.go:141] libmachine: (no-preload-997816) Getting domain xml...
	I0930 21:07:12.066923   73375 main.go:141] libmachine: (no-preload-997816) Creating domain...
	I0930 21:07:13.297091   73375 main.go:141] libmachine: (no-preload-997816) Waiting to get IP...
	I0930 21:07:13.297965   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:13.298386   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:13.298473   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:13.298370   74631 retry.go:31] will retry after 312.032565ms: waiting for machine to come up
	I0930 21:07:13.612088   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:13.612583   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:13.612607   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:13.612519   74631 retry.go:31] will retry after 292.985742ms: waiting for machine to come up
	I0930 21:07:13.907355   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:13.907794   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:13.907817   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:13.907754   74631 retry.go:31] will retry after 451.618632ms: waiting for machine to come up
	I0930 21:07:14.361536   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:14.361990   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:14.362054   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:14.361947   74631 retry.go:31] will retry after 599.246635ms: waiting for machine to come up
	I0930 21:07:14.962861   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:14.963341   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:14.963369   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:14.963294   74631 retry.go:31] will retry after 748.726096ms: waiting for machine to come up
	I0930 21:07:17.040758   73256 start.go:360] acquireMachinesLock for embed-certs-256103: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 21:07:15.713258   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:15.713576   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:15.713601   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:15.713525   74631 retry.go:31] will retry after 907.199669ms: waiting for machine to come up
	I0930 21:07:16.622784   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:16.623275   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:16.623307   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:16.623211   74631 retry.go:31] will retry after 744.978665ms: waiting for machine to come up
	I0930 21:07:17.369735   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:17.370206   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:17.370231   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:17.370154   74631 retry.go:31] will retry after 1.238609703s: waiting for machine to come up
	I0930 21:07:18.610618   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:18.610967   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:18.610989   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:18.610928   74631 retry.go:31] will retry after 1.354775356s: waiting for machine to come up
	I0930 21:07:19.967473   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:19.967892   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:19.967916   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:19.967851   74631 retry.go:31] will retry after 2.26449082s: waiting for machine to come up
	I0930 21:07:22.234066   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:22.234514   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:22.234536   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:22.234474   74631 retry.go:31] will retry after 2.728158374s: waiting for machine to come up
	I0930 21:07:24.966375   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:24.966759   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:24.966782   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:24.966724   74631 retry.go:31] will retry after 3.119117729s: waiting for machine to come up
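The repeated "will retry after ..." lines above come from minikube's retry helper polling libvirt for the guest's DHCP lease, with the delay growing on each attempt (451ms, 599ms, 748ms, ...). The following is a minimal, self-contained Go sketch of that polling pattern; waitFor and the 3/2 growth factor are illustrative assumptions, not minikube's actual retry.go implementation.

    // A sketch of the "waiting for machine to come up" loop, assuming a
    // simple predicate-polling helper with a growing delay.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitFor polls check() with an increasing delay until it succeeds or
    // the deadline passes.
    func waitFor(check func() (bool, error), timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        delay := 450 * time.Millisecond // starting point, roughly as in the log above
        for time.Now().Before(deadline) {
            ok, err := check()
            if err == nil && ok {
                return nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            delay = delay * 3 / 2 // grow the backoff; the exact growth factor is an assumption
        }
        return errors.New("timed out waiting for machine to come up")
    }

    func main() {
        attempts := 0
        _ = waitFor(func() (bool, error) {
            attempts++
            return attempts > 3, nil // pretend the IP shows up on the 4th poll
        }, 2*time.Minute)
    }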
	I0930 21:07:29.336238   73707 start.go:364] duration metric: took 3m58.92874513s to acquireMachinesLock for "default-k8s-diff-port-291511"
	I0930 21:07:29.336327   73707 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:07:29.336347   73707 fix.go:54] fixHost starting: 
	I0930 21:07:29.336726   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:29.336779   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:29.354404   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45095
	I0930 21:07:29.354848   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:29.355331   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:07:29.355352   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:29.355882   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:29.356081   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:29.356249   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:07:29.358109   73707 fix.go:112] recreateIfNeeded on default-k8s-diff-port-291511: state=Stopped err=<nil>
	I0930 21:07:29.358155   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	W0930 21:07:29.358336   73707 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:07:29.361072   73707 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-291511" ...
	I0930 21:07:28.087153   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.087604   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has current primary IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.087636   73375 main.go:141] libmachine: (no-preload-997816) Found IP for machine: 192.168.61.93
	I0930 21:07:28.087644   73375 main.go:141] libmachine: (no-preload-997816) Reserving static IP address...
	I0930 21:07:28.088047   73375 main.go:141] libmachine: (no-preload-997816) Reserved static IP address: 192.168.61.93
	I0930 21:07:28.088068   73375 main.go:141] libmachine: (no-preload-997816) Waiting for SSH to be available...
	I0930 21:07:28.088090   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "no-preload-997816", mac: "52:54:00:cb:3d:73", ip: "192.168.61.93"} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.088158   73375 main.go:141] libmachine: (no-preload-997816) DBG | skip adding static IP to network mk-no-preload-997816 - found existing host DHCP lease matching {name: "no-preload-997816", mac: "52:54:00:cb:3d:73", ip: "192.168.61.93"}
	I0930 21:07:28.088181   73375 main.go:141] libmachine: (no-preload-997816) DBG | Getting to WaitForSSH function...
	I0930 21:07:28.090195   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.090522   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.090547   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.090722   73375 main.go:141] libmachine: (no-preload-997816) DBG | Using SSH client type: external
	I0930 21:07:28.090739   73375 main.go:141] libmachine: (no-preload-997816) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa (-rw-------)
	I0930 21:07:28.090767   73375 main.go:141] libmachine: (no-preload-997816) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.93 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:07:28.090787   73375 main.go:141] libmachine: (no-preload-997816) DBG | About to run SSH command:
	I0930 21:07:28.090801   73375 main.go:141] libmachine: (no-preload-997816) DBG | exit 0
	I0930 21:07:28.211669   73375 main.go:141] libmachine: (no-preload-997816) DBG | SSH cmd err, output: <nil>: 
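The "Using SSH client type: external" block above shows the exact argument list the KVM driver hands to /usr/bin/ssh while probing the guest with "exit 0". Below is a hedged Go sketch of assembling that invocation with os/exec; runExternalSSH and the placeholder key path are illustrative, but the option flags are copied from the logged command line.

    // A sketch of the external SSH probe, assuming a plain os/exec call.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func runExternalSSH(ip, user, keyPath, remoteCmd string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "ControlMaster=no",
            "-o", "ControlPath=none",
            "-o", "LogLevel=quiet",
            "-o", "PasswordAuthentication=no",
            "-o", "ServerAliveInterval=60",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            fmt.Sprintf("%s@%s", user, ip),
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            remoteCmd,
        }
        out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
        fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
        return err
    }

    func main() {
        // "exit 0" is the reachability probe used while waiting for SSH;
        // the key path below is a placeholder, not the real machine path.
        _ = runExternalSSH("192.168.61.93", "docker", "/path/to/id_rsa", "exit 0")
    }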
	I0930 21:07:28.212073   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetConfigRaw
	I0930 21:07:28.212714   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetIP
	I0930 21:07:28.215442   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.215934   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.215951   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.216186   73375 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/config.json ...
	I0930 21:07:28.216370   73375 machine.go:93] provisionDockerMachine start ...
	I0930 21:07:28.216386   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:28.216575   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.218963   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.219423   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.219455   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.219604   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.219770   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.219948   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.220057   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.220252   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:28.220441   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:28.220452   73375 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:07:28.315814   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:07:28.315853   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetMachineName
	I0930 21:07:28.316131   73375 buildroot.go:166] provisioning hostname "no-preload-997816"
	I0930 21:07:28.316161   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetMachineName
	I0930 21:07:28.316372   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.319253   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.319506   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.319548   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.319711   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.319903   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.320057   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.320182   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.320383   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:28.320592   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:28.320606   73375 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-997816 && echo "no-preload-997816" | sudo tee /etc/hostname
	I0930 21:07:28.433652   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-997816
	
	I0930 21:07:28.433686   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.436989   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.437350   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.437389   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.437611   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.437784   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.437957   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.438075   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.438267   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:28.438487   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:28.438512   73375 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-997816' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-997816/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-997816' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:07:28.544056   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
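The two SSH commands above first set the guest hostname and then patch /etc/hosts so that 127.0.1.1 resolves to it. Below is a small Go sketch that composes the same shell text; hostnameCmds is an invented helper name, and the transport that ships the strings over SSH is omitted.

    // A sketch of composing the hostname-provisioning commands shown above.
    package main

    import "fmt"

    // hostnameCmds returns the two remote commands: set the hostname, then
    // make sure /etc/hosts carries a 127.0.1.1 entry for it.
    func hostnameCmds(name string) (setHostname, patchHosts string) {
        setHostname = fmt.Sprintf(`sudo hostname %s && echo "%s" | sudo tee /etc/hostname`, name, name)
        patchHosts = fmt.Sprintf(`
        if ! grep -xq '.*\s%[1]s' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
            else
                echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
            fi
        fi`, name)
        return setHostname, patchHosts
    }

    func main() {
        a, b := hostnameCmds("no-preload-997816")
        fmt.Println(a)
        fmt.Println(b)
    }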
	I0930 21:07:28.544088   73375 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:07:28.544112   73375 buildroot.go:174] setting up certificates
	I0930 21:07:28.544122   73375 provision.go:84] configureAuth start
	I0930 21:07:28.544135   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetMachineName
	I0930 21:07:28.544418   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetIP
	I0930 21:07:28.546960   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.547363   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.547384   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.547570   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.549918   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.550325   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.550353   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.550535   73375 provision.go:143] copyHostCerts
	I0930 21:07:28.550612   73375 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:07:28.550627   73375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:07:28.550711   73375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:07:28.550804   73375 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:07:28.550812   73375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:07:28.550837   73375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:07:28.550893   73375 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:07:28.550900   73375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:07:28.550920   73375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
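copyHostCerts above refreshes ca.pem, cert.pem and key.pem under the .minikube directory by removing any stale copy and re-copying from the certs directory. A minimal sketch of that remove-then-copy step follows; refreshCopy and the example paths are assumptions, only the behaviour (found ... removing ..., then cp) mirrors the log.

    // A sketch of the "found ..., removing ... / cp ..." refresh step.
    package main

    import (
        "fmt"
        "io"
        "os"
    )

    func refreshCopy(src, dst string, perm os.FileMode) error {
        if _, err := os.Stat(dst); err == nil {
            fmt.Printf("found %s, removing ...\n", dst)
            if err := os.Remove(dst); err != nil {
                return err
            }
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, perm)
        if err != nil {
            return err
        }
        defer out.Close()
        n, err := io.Copy(out, in)
        fmt.Printf("cp: %s --> %s (%d bytes)\n", src, dst, n)
        return err
    }

    func main() {
        // Example paths only; the real sources live under .minikube/certs.
        _ = refreshCopy("certs/ca.pem", "ca.pem", 0o644)
    }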
	I0930 21:07:28.550967   73375 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.no-preload-997816 san=[127.0.0.1 192.168.61.93 localhost minikube no-preload-997816]
	I0930 21:07:28.744306   73375 provision.go:177] copyRemoteCerts
	I0930 21:07:28.744364   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:07:28.744386   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.747024   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.747368   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.747401   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.747615   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.747813   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.747973   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.748133   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:28.825616   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0930 21:07:28.849513   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 21:07:28.872666   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:07:28.895673   73375 provision.go:87] duration metric: took 351.536833ms to configureAuth
	I0930 21:07:28.895708   73375 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:07:28.895896   73375 config.go:182] Loaded profile config "no-preload-997816": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:07:28.895975   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.898667   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.899067   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.899098   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.899324   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.899567   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.899703   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.899829   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.899946   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:28.900120   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:28.900134   73375 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:07:29.113855   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:07:29.113877   73375 machine.go:96] duration metric: took 897.495238ms to provisionDockerMachine
	I0930 21:07:29.113887   73375 start.go:293] postStartSetup for "no-preload-997816" (driver="kvm2")
	I0930 21:07:29.113897   73375 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:07:29.113921   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.114220   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:07:29.114254   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:29.117274   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.117619   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.117663   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.117816   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:29.118010   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.118159   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:29.118289   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:29.197962   73375 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:07:29.202135   73375 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:07:29.202166   73375 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:07:29.202237   73375 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:07:29.202321   73375 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:07:29.202406   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:07:29.211693   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:07:29.234503   73375 start.go:296] duration metric: took 120.601484ms for postStartSetup
	I0930 21:07:29.234582   73375 fix.go:56] duration metric: took 17.194433455s for fixHost
	I0930 21:07:29.234610   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:29.237134   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.237544   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.237574   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.237728   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:29.237912   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.238085   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.238199   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:29.238348   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:29.238506   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:29.238515   73375 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:07:29.336092   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730449.310327649
	
	I0930 21:07:29.336114   73375 fix.go:216] guest clock: 1727730449.310327649
	I0930 21:07:29.336123   73375 fix.go:229] Guest: 2024-09-30 21:07:29.310327649 +0000 UTC Remote: 2024-09-30 21:07:29.234588814 +0000 UTC m=+284.288095935 (delta=75.738835ms)
	I0930 21:07:29.336147   73375 fix.go:200] guest clock delta is within tolerance: 75.738835ms
	I0930 21:07:29.336153   73375 start.go:83] releasing machines lock for "no-preload-997816", held for 17.296055752s
	I0930 21:07:29.336194   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.336478   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetIP
	I0930 21:07:29.339488   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.339864   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.339909   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.340070   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.340525   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.340697   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.340800   73375 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:07:29.340836   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:29.340930   73375 ssh_runner.go:195] Run: cat /version.json
	I0930 21:07:29.340955   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:29.343579   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.343941   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.343976   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.344010   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.344228   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:29.344405   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.344441   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.344471   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.344543   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:29.344616   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:29.344689   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:29.344784   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.344966   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:29.345105   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:29.420949   73375 ssh_runner.go:195] Run: systemctl --version
	I0930 21:07:29.465854   73375 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:07:29.616360   73375 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:07:29.624522   73375 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:07:29.624604   73375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:07:29.642176   73375 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 21:07:29.642202   73375 start.go:495] detecting cgroup driver to use...
	I0930 21:07:29.642279   73375 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:07:29.657878   73375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:07:29.674555   73375 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:07:29.674614   73375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:07:29.690953   73375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:07:29.705425   73375 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:07:29.814602   73375 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:07:29.957009   73375 docker.go:233] disabling docker service ...
	I0930 21:07:29.957091   73375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:07:29.971419   73375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:07:29.362775   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Start
	I0930 21:07:29.363023   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Ensuring networks are active...
	I0930 21:07:29.364071   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Ensuring network default is active
	I0930 21:07:29.364456   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Ensuring network mk-default-k8s-diff-port-291511 is active
	I0930 21:07:29.364940   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Getting domain xml...
	I0930 21:07:29.365759   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Creating domain...
	I0930 21:07:29.987509   73375 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:07:30.112952   73375 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:07:30.239945   73375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:07:30.253298   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:07:30.271687   73375 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 21:07:30.271768   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.282267   73375 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:07:30.282339   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.292776   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.303893   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.315002   73375 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:07:30.326410   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.336951   73375 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.356016   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
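The sed commands above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, force conmon into the pod cgroup, and allow pods to bind low ports by adding net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. As a rough illustration of the end state, the sketch below renders a drop-in with those settings; the section layout ([crio.image]/[crio.runtime]) and any other keys present in the real /etc/crio/crio.conf.d/02-crio.conf are assumptions.

    // An illustrative rendering of the CRI-O drop-in after the edits above.
    package main

    import "fmt"

    func renderCrioDropIn(pauseImage, cgroupManager string, unprivilegedPorts bool) string {
        conf := "[crio.image]\n"
        conf += fmt.Sprintf("pause_image = %q\n\n", pauseImage)
        conf += "[crio.runtime]\n"
        conf += fmt.Sprintf("cgroup_manager = %q\n", cgroupManager)
        conf += "conmon_cgroup = \"pod\"\n"
        if unprivilegedPorts {
            conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
        }
        return conf
    }

    func main() {
        fmt.Print(renderCrioDropIn("registry.k8s.io/pause:3.10", "cgroupfs", true))
    }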
	I0930 21:07:30.367847   73375 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:07:30.378650   73375 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:07:30.378703   73375 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:07:30.391768   73375 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
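The status-255 error above is tolerated: when /proc/sys/net/bridge/bridge-nf-call-iptables is missing, the provisioner falls back to loading br_netfilter and then enables IPv4 forwarding. A minimal Go sketch of that check-then-fallback logic follows; run and ensureNetfilter are illustrative names, and the commands are executed locally via sudo rather than over the SSH runner.

    // A sketch of the netfilter check with a modprobe fallback.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(cmd string) error {
        out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput()
        fmt.Printf("$ %s\n%s", cmd, out)
        return err
    }

    func ensureNetfilter() error {
        if err := run("sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
            // "couldn't verify netfilter ... which might be okay": load the module.
            if err := run("modprobe br_netfilter"); err != nil {
                return err
            }
        }
        return run("echo 1 > /proc/sys/net/ipv4/ip_forward")
    }

    func main() {
        if err := ensureNetfilter(); err != nil {
            fmt.Println("ensureNetfilter:", err)
        }
    }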
	I0930 21:07:30.401887   73375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:30.534771   73375 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 21:07:30.622017   73375 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:07:30.622087   73375 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:07:30.627221   73375 start.go:563] Will wait 60s for crictl version
	I0930 21:07:30.627294   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:30.633071   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:07:30.675743   73375 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 21:07:30.675830   73375 ssh_runner.go:195] Run: crio --version
	I0930 21:07:30.703470   73375 ssh_runner.go:195] Run: crio --version
	I0930 21:07:30.732424   73375 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 21:07:30.733714   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetIP
	I0930 21:07:30.737016   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:30.737380   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:30.737421   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:30.737690   73375 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0930 21:07:30.741714   73375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:07:30.754767   73375 kubeadm.go:883] updating cluster {Name:no-preload-997816 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-997816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:07:30.754892   73375 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 21:07:30.754941   73375 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:07:30.794489   73375 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 21:07:30.794516   73375 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0930 21:07:30.794605   73375 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:30.794624   73375 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:30.794653   73375 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:30.794694   73375 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:30.794733   73375 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:30.794691   73375 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:30.794822   73375 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:30.794836   73375 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0930 21:07:30.796508   73375 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:30.796521   73375 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:30.796538   73375 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:30.796543   73375 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:30.796610   73375 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:30.796616   73375 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:30.796611   73375 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0930 21:07:30.796665   73375 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.018683   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0930 21:07:31.028097   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.117252   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.131998   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.136871   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.140418   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.170883   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.171059   73375 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0930 21:07:31.171098   73375 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.171142   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.172908   73375 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0930 21:07:31.172951   73375 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.172994   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.242489   73375 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0930 21:07:31.242541   73375 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.242609   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.246685   73375 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0930 21:07:31.246731   73375 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.246758   73375 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0930 21:07:31.246778   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.246794   73375 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.246837   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.270923   73375 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0930 21:07:31.270971   73375 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.271024   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.271030   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.271100   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.271109   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.271207   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.271269   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.387993   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.388011   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.388044   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.388091   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.388150   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.388230   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.523098   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.523156   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.523300   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.523344   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.523467   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.623696   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0930 21:07:31.623759   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.623778   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0930 21:07:31.623794   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0930 21:07:31.623869   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0930 21:07:31.632927   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0930 21:07:31.633014   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0930 21:07:31.633117   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.633206   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0930 21:07:31.633269   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0930 21:07:31.648925   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0930 21:07:31.648945   73375 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0930 21:07:31.648983   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0930 21:07:31.676886   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0930 21:07:31.676925   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0930 21:07:31.709210   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0930 21:07:31.709287   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0930 21:07:31.709331   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0930 21:07:31.709394   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0930 21:07:31.709330   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0930 21:07:32.112418   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:33.634620   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.985614953s)
	I0930 21:07:33.634656   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0930 21:07:33.634702   73375 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (1.925342294s)
	I0930 21:07:33.634716   73375 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0930 21:07:33.634731   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0930 21:07:33.634771   73375 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.925359685s)
	I0930 21:07:33.634779   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0930 21:07:33.634782   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0930 21:07:33.634853   73375 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.522405881s)
	I0930 21:07:33.634891   73375 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0930 21:07:33.634913   73375 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:33.634961   73375 ssh_runner.go:195] Run: which crictl
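The cache_images lines above stat each tarball under /var/lib/minikube/images, skip the transfer when it already exists (the "copy: skipping ... (exists)" lines), and then stream it into the runtime with "sudo podman load -i". The sketch below captures that decision for a single image; loadCachedImage is an invented helper, the scp step for a missing tarball is deliberately left out, and the commands run locally for illustration rather than over the SSH runner.

    // A sketch of the skip-or-load decision for one cached image tarball.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func loadCachedImage(remoteTar string) error {
        if err := exec.Command("stat", "-c", "%s %y", remoteTar).Run(); err == nil {
            fmt.Printf("copy: skipping %s (exists)\n", remoteTar)
        } else {
            // In the real flow the tarball would be transferred from the
            // local cache before loading; that step is omitted here.
            return fmt.Errorf("tarball %s not present; transfer step omitted in this sketch", remoteTar)
        }
        out, err := exec.Command("sudo", "podman", "load", "-i", remoteTar).CombinedOutput()
        fmt.Printf("podman load: %s\n", out)
        return err
    }

    func main() {
        _ = loadCachedImage("/var/lib/minikube/images/kube-proxy_v1.31.1")
    }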
	I0930 21:07:30.643828   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting to get IP...
	I0930 21:07:30.644936   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:30.645382   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:30.645484   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:30.645381   74769 retry.go:31] will retry after 216.832119ms: waiting for machine to come up
	I0930 21:07:30.863953   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:30.864583   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:30.864614   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:30.864518   74769 retry.go:31] will retry after 280.448443ms: waiting for machine to come up
	I0930 21:07:31.147184   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.147792   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.147826   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:31.147728   74769 retry.go:31] will retry after 345.517763ms: waiting for machine to come up
	I0930 21:07:31.495391   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.495819   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.495841   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:31.495786   74769 retry.go:31] will retry after 457.679924ms: waiting for machine to come up
	I0930 21:07:31.955479   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.955943   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.955974   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:31.955897   74769 retry.go:31] will retry after 562.95605ms: waiting for machine to come up
	I0930 21:07:32.520890   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:32.521339   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:32.521368   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:32.521285   74769 retry.go:31] will retry after 743.560182ms: waiting for machine to come up
	I0930 21:07:33.266407   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:33.266914   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:33.266941   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:33.266853   74769 retry.go:31] will retry after 947.444427ms: waiting for machine to come up
	I0930 21:07:34.216195   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:34.216705   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:34.216731   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:34.216659   74769 retry.go:31] will retry after 1.186059526s: waiting for machine to come up
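	(The retry.go:31 lines above show libmachine polling for the VM's DHCP lease with a growing, jittered delay between attempts. Below is a minimal, self-contained Go sketch of that retry-with-backoff pattern; it is not minikube's actual retry.go, and getIP plus the backoff constants are invented for illustration.)

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// getIP is a stand-in for the real lookup (e.g. parsing the host's DHCP
	// leases); here it simply fails a few times before "finding" an address.
	var attempts int

	func getIP() (string, error) {
		attempts++
		if attempts < 5 {
			return "", errors.New("unable to find current IP address of domain")
		}
		return "192.168.50.2", nil
	}

	func main() {
		delay := 200 * time.Millisecond // assumed starting delay
		for i := 0; i < 10; i++ {
			ip, err := getIP()
			if err == nil {
				fmt.Println("machine is up at", ip)
				return
			}
			// Grow the delay and add jitter, mirroring the increasing
			// "will retry after ..." intervals in the log above.
			wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("retry %d: %v, waiting %v\n", i+1, err, wait)
			time.Sleep(wait)
			delay = delay * 3 / 2
		}
		fmt.Println("gave up waiting for machine to come up")
	}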
	I0930 21:07:35.714633   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.079826486s)
	I0930 21:07:35.714667   73375 ssh_runner.go:235] Completed: which crictl: (2.079690884s)
	I0930 21:07:35.714721   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:35.714670   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0930 21:07:35.714786   73375 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0930 21:07:35.714821   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0930 21:07:35.753242   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:39.088354   73375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.335055656s)
	I0930 21:07:39.088395   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.373547177s)
	I0930 21:07:39.088422   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0930 21:07:39.088458   73375 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0930 21:07:39.088536   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0930 21:07:39.088459   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:35.404773   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:35.405334   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:35.405359   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:35.405225   74769 retry.go:31] will retry after 1.575803783s: waiting for machine to come up
	I0930 21:07:36.983196   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:36.983730   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:36.983759   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:36.983677   74769 retry.go:31] will retry after 2.020561586s: waiting for machine to come up
	I0930 21:07:39.006915   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:39.007304   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:39.007334   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:39.007269   74769 retry.go:31] will retry after 2.801421878s: waiting for machine to come up
	I0930 21:07:41.074012   73375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.985398095s)
	I0930 21:07:41.074061   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0930 21:07:41.074154   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.985588774s)
	I0930 21:07:41.074183   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0930 21:07:41.074202   73375 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0930 21:07:41.074244   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0930 21:07:41.074166   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0930 21:07:42.972016   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.897745882s)
	I0930 21:07:42.972055   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0930 21:07:42.972083   73375 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.8977868s)
	I0930 21:07:42.972110   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0930 21:07:42.972086   73375 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0930 21:07:42.972155   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0930 21:07:44.835190   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.863005436s)
	I0930 21:07:44.835237   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0930 21:07:44.835263   73375 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0930 21:07:44.835334   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0930 21:07:41.810719   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:41.811099   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:41.811117   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:41.811050   74769 retry.go:31] will retry after 2.703489988s: waiting for machine to come up
	I0930 21:07:44.515949   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:44.516329   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:44.516356   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:44.516276   74769 retry.go:31] will retry after 4.001267434s: waiting for machine to come up
	I0930 21:07:49.889033   73900 start.go:364] duration metric: took 4m7.028659379s to acquireMachinesLock for "old-k8s-version-621406"
	I0930 21:07:49.889104   73900 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:07:49.889111   73900 fix.go:54] fixHost starting: 
	I0930 21:07:49.889542   73900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:49.889600   73900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:49.906767   73900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43385
	I0930 21:07:49.907283   73900 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:49.907856   73900 main.go:141] libmachine: Using API Version  1
	I0930 21:07:49.907889   73900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:49.908203   73900 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:49.908397   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:07:49.908542   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetState
	I0930 21:07:49.910270   73900 fix.go:112] recreateIfNeeded on old-k8s-version-621406: state=Stopped err=<nil>
	I0930 21:07:49.910306   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	W0930 21:07:49.910441   73900 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:07:49.912646   73900 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-621406" ...
	I0930 21:07:45.483728   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0930 21:07:45.483778   73375 cache_images.go:123] Successfully loaded all cached images
	I0930 21:07:45.483785   73375 cache_images.go:92] duration metric: took 14.689240439s to LoadCachedImages
	I0930 21:07:45.483799   73375 kubeadm.go:934] updating node { 192.168.61.93 8443 v1.31.1 crio true true} ...
	I0930 21:07:45.483898   73375 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-997816 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.93
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-997816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 21:07:45.483977   73375 ssh_runner.go:195] Run: crio config
	I0930 21:07:45.529537   73375 cni.go:84] Creating CNI manager for ""
	I0930 21:07:45.529558   73375 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:07:45.529567   73375 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:07:45.529591   73375 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.93 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-997816 NodeName:no-preload-997816 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.93"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.93 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 21:07:45.529713   73375 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.93
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-997816"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.93
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.93"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
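	(The kubeadm config printed above is a single multi-document YAML file: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by "---". The following standard-library-only Go sketch, which is not part of minikube, lists the kind of each document in such a file; the path matches the kubeadm.yaml location used later in the log.)

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Path taken from the log above; any multi-document kubeadm YAML works.
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		doc := 1
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := sc.Text()
			if strings.TrimSpace(line) == "---" {
				doc++
				continue
			}
			// Each kubeadm document declares its type in a top-level "kind:" field.
			if strings.HasPrefix(line, "kind:") {
				fmt.Printf("document %d: %s\n", doc, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
			}
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}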
	I0930 21:07:45.529775   73375 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 21:07:45.540251   73375 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:07:45.540323   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:07:45.549622   73375 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0930 21:07:45.565425   73375 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:07:45.580646   73375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0930 21:07:45.596216   73375 ssh_runner.go:195] Run: grep 192.168.61.93	control-plane.minikube.internal$ /etc/hosts
	I0930 21:07:45.604940   73375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.93	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:07:45.620809   73375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:45.751327   73375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:07:45.768664   73375 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816 for IP: 192.168.61.93
	I0930 21:07:45.768687   73375 certs.go:194] generating shared ca certs ...
	I0930 21:07:45.768702   73375 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:07:45.768896   73375 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:07:45.768953   73375 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:07:45.768967   73375 certs.go:256] generating profile certs ...
	I0930 21:07:45.769081   73375 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/client.key
	I0930 21:07:45.769188   73375 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/apiserver.key.c7192a03
	I0930 21:07:45.769251   73375 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/proxy-client.key
	I0930 21:07:45.769422   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:07:45.769468   73375 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:07:45.769483   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:07:45.769527   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:07:45.769569   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:07:45.769603   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:07:45.769672   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:07:45.770679   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:07:45.809391   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:07:45.837624   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:07:45.878472   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:07:45.909163   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0930 21:07:45.950655   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 21:07:45.974391   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:07:45.997258   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 21:07:46.019976   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:07:46.042828   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:07:46.066625   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:07:46.089639   73375 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:07:46.106202   73375 ssh_runner.go:195] Run: openssl version
	I0930 21:07:46.111810   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:07:46.122379   73375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:07:46.126659   73375 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:07:46.126699   73375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:07:46.132363   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 21:07:46.143074   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:07:46.154060   73375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:07:46.158542   73375 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:07:46.158602   73375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:07:46.164210   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:07:46.175160   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:07:46.186326   73375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:46.190782   73375 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:46.190856   73375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:46.196356   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 21:07:46.206957   73375 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:07:46.211650   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:07:46.217398   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:07:46.223566   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:07:46.230204   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:07:46.236404   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:07:46.242282   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
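	(The openssl x509 -noout -checkend 86400 runs above succeed only if the certificate remains valid for at least the next 24 hours. The same check can be done in Go; the stand-alone sketch below uses only the standard library, and the path simply mirrors one of the files checked in the log.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// One of the certificates the log checks; any PEM-encoded cert works.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Equivalent of `openssl x509 -checkend 86400`: does the cert
		// outlive the next 24 hours?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h:", cert.NotAfter)
			os.Exit(1)
		}
		fmt.Println("certificate is valid beyond 24h, expires", cert.NotAfter)
	}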
	I0930 21:07:46.248591   73375 kubeadm.go:392] StartCluster: {Name:no-preload-997816 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-997816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:07:46.248686   73375 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:07:46.248731   73375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:07:46.292355   73375 cri.go:89] found id: ""
	I0930 21:07:46.292435   73375 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:07:46.303578   73375 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:07:46.303598   73375 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:07:46.303668   73375 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:07:46.314544   73375 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:07:46.315643   73375 kubeconfig.go:125] found "no-preload-997816" server: "https://192.168.61.93:8443"
	I0930 21:07:46.318243   73375 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:07:46.329751   73375 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.93
	I0930 21:07:46.329781   73375 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:07:46.329791   73375 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:07:46.329837   73375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:07:46.364302   73375 cri.go:89] found id: ""
	I0930 21:07:46.364392   73375 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:07:46.384616   73375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:07:46.395855   73375 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:07:46.395875   73375 kubeadm.go:157] found existing configuration files:
	
	I0930 21:07:46.395915   73375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:07:46.405860   73375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:07:46.405918   73375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:07:46.416618   73375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:07:46.426654   73375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:07:46.426712   73375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:07:46.435880   73375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:07:46.446273   73375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:07:46.446346   73375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:07:46.457099   73375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:07:46.467322   73375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:07:46.467386   73375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:07:46.477809   73375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:07:46.489024   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:46.605127   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:47.509287   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:47.708716   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:47.780830   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:47.883843   73375 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:07:47.883940   73375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:48.384688   73375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:48.884008   73375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:48.925804   73375 api_server.go:72] duration metric: took 1.041960261s to wait for apiserver process to appear ...
	I0930 21:07:48.925833   73375 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:07:48.925857   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:48.521282   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.521838   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Found IP for machine: 192.168.50.2
	I0930 21:07:48.521864   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Reserving static IP address...
	I0930 21:07:48.521876   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has current primary IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.522306   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Reserved static IP address: 192.168.50.2
	I0930 21:07:48.522349   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-291511", mac: "52:54:00:27:46:45", ip: "192.168.50.2"} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.522361   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for SSH to be available...
	I0930 21:07:48.522401   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | skip adding static IP to network mk-default-k8s-diff-port-291511 - found existing host DHCP lease matching {name: "default-k8s-diff-port-291511", mac: "52:54:00:27:46:45", ip: "192.168.50.2"}
	I0930 21:07:48.522427   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Getting to WaitForSSH function...
	I0930 21:07:48.525211   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.525641   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.525667   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.525827   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Using SSH client type: external
	I0930 21:07:48.525854   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa (-rw-------)
	I0930 21:07:48.525883   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:07:48.525900   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | About to run SSH command:
	I0930 21:07:48.525913   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | exit 0
	I0930 21:07:48.655656   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | SSH cmd err, output: <nil>: 
	I0930 21:07:48.656045   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetConfigRaw
	I0930 21:07:48.656789   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetIP
	I0930 21:07:48.659902   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.660358   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.660395   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.660586   73707 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/config.json ...
	I0930 21:07:48.660842   73707 machine.go:93] provisionDockerMachine start ...
	I0930 21:07:48.660866   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:48.661063   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:48.663782   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.664138   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.664165   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.664318   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:48.664567   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.664733   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.664868   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:48.665036   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:48.665283   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:48.665315   73707 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:07:48.776382   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:07:48.776414   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetMachineName
	I0930 21:07:48.776676   73707 buildroot.go:166] provisioning hostname "default-k8s-diff-port-291511"
	I0930 21:07:48.776711   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetMachineName
	I0930 21:07:48.776913   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:48.779952   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.780470   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.780516   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.780594   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:48.780773   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.780925   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.781080   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:48.781253   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:48.781457   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:48.781473   73707 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-291511 && echo "default-k8s-diff-port-291511" | sudo tee /etc/hostname
	I0930 21:07:48.913633   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-291511
	
	I0930 21:07:48.913724   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:48.916869   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.917280   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.917319   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.917501   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:48.917715   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.917882   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.918117   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:48.918296   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:48.918533   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:48.918562   73707 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-291511' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-291511/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-291511' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:07:49.048106   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:07:49.048141   73707 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:07:49.048182   73707 buildroot.go:174] setting up certificates
	I0930 21:07:49.048198   73707 provision.go:84] configureAuth start
	I0930 21:07:49.048212   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetMachineName
	I0930 21:07:49.048498   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetIP
	I0930 21:07:49.051299   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.051665   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.051702   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.051837   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.054211   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.054512   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.054540   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.054691   73707 provision.go:143] copyHostCerts
	I0930 21:07:49.054774   73707 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:07:49.054789   73707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:07:49.054866   73707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:07:49.054982   73707 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:07:49.054994   73707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:07:49.055021   73707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:07:49.055097   73707 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:07:49.055106   73707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:07:49.055130   73707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:07:49.055189   73707 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-291511 san=[127.0.0.1 192.168.50.2 default-k8s-diff-port-291511 localhost minikube]
	I0930 21:07:49.239713   73707 provision.go:177] copyRemoteCerts
	I0930 21:07:49.239771   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:07:49.239796   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.242146   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.242468   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.242500   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.242663   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.242834   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.242982   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.243200   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:07:49.329405   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:07:49.358036   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0930 21:07:49.385742   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 21:07:49.409436   73707 provision.go:87] duration metric: took 361.22398ms to configureAuth
	I0930 21:07:49.409493   73707 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:07:49.409696   73707 config.go:182] Loaded profile config "default-k8s-diff-port-291511": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:07:49.409798   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.412572   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.412935   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.412975   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.413266   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.413476   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.413680   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.413821   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.414009   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:49.414199   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:49.414223   73707 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:07:49.635490   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:07:49.635553   73707 machine.go:96] duration metric: took 974.696002ms to provisionDockerMachine
	I0930 21:07:49.635567   73707 start.go:293] postStartSetup for "default-k8s-diff-port-291511" (driver="kvm2")
	I0930 21:07:49.635580   73707 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:07:49.635603   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.635954   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:07:49.635989   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.638867   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.639304   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.639340   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.639413   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.639631   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.639837   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.639995   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:07:49.728224   73707 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:07:49.732558   73707 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:07:49.732590   73707 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:07:49.732679   73707 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:07:49.732769   73707 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:07:49.732869   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:07:49.742783   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:07:49.766585   73707 start.go:296] duration metric: took 131.002562ms for postStartSetup
	I0930 21:07:49.766629   73707 fix.go:56] duration metric: took 20.430290493s for fixHost
	I0930 21:07:49.766652   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.769724   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.770143   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.770172   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.770461   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.770708   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.770872   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.771099   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.771240   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:49.771616   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:49.771636   73707 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:07:49.888863   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730469.865719956
	
	I0930 21:07:49.888889   73707 fix.go:216] guest clock: 1727730469.865719956
	I0930 21:07:49.888900   73707 fix.go:229] Guest: 2024-09-30 21:07:49.865719956 +0000 UTC Remote: 2024-09-30 21:07:49.76663417 +0000 UTC m=+259.507652750 (delta=99.085786ms)
	I0930 21:07:49.888943   73707 fix.go:200] guest clock delta is within tolerance: 99.085786ms
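For context, the guest-clock lines above record the provisioner running date +%s.%N over SSH and comparing the result against the local wall clock. A minimal Go sketch of that comparison follows; the 2-second tolerance and the function name are illustrative assumptions, not minikube's actual fix.go values (the log only shows that a 99ms delta was accepted).

	package sketch

	import (
		"strconv"
		"strings"
		"time"
	)

	// clockDeltaWithinTolerance parses the `date +%s.%N` output from the guest
	// and reports whether the absolute skew against the local clock is within
	// the given tolerance. float64 parsing loses nanosecond precision, which
	// is fine for a millisecond-level check like the one logged above.
	func clockDeltaWithinTolerance(guestOut string, tolerance time.Duration) (time.Duration, bool, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return 0, false, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance, nil
	}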
	I0930 21:07:49.888950   73707 start.go:83] releasing machines lock for "default-k8s-diff-port-291511", held for 20.552679126s
	I0930 21:07:49.888982   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.889242   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetIP
	I0930 21:07:49.892424   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.892817   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.892854   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.893030   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.893601   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.893780   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.893852   73707 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:07:49.893932   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.893934   73707 ssh_runner.go:195] Run: cat /version.json
	I0930 21:07:49.893985   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.896733   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.896843   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.897130   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.897179   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.897216   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.897233   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.897471   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.897478   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.897679   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.897686   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.897825   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.897834   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.897954   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:07:49.898097   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:07:50.022951   73707 ssh_runner.go:195] Run: systemctl --version
	I0930 21:07:50.029177   73707 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:07:50.186430   73707 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:07:50.193205   73707 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:07:50.193277   73707 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:07:50.211330   73707 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 21:07:50.211365   73707 start.go:495] detecting cgroup driver to use...
	I0930 21:07:50.211430   73707 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:07:50.227255   73707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:07:50.241404   73707 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:07:50.241468   73707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:07:50.257879   73707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:07:50.274595   73707 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:07:50.394354   73707 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:07:50.567503   73707 docker.go:233] disabling docker service ...
	I0930 21:07:50.567582   73707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:07:50.584390   73707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:07:50.600920   73707 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:07:50.742682   73707 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:07:50.882835   73707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:07:50.898340   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:07:50.919395   73707 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 21:07:50.919464   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.930773   73707 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:07:50.930846   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.941870   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.952633   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.964281   73707 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:07:50.977410   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.988423   73707 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:51.016091   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
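Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following fragment. This is reconstructed from the commands in this log; the section headers are assumed from CRI-O's standard drop-in layout, and any other keys already in the file are untouched and not shown.

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]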
	I0930 21:07:51.027473   73707 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:07:51.037470   73707 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:07:51.037537   73707 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:07:51.056841   73707 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
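The three commands above are a fallback sequence: if the bridge netfilter sysctl cannot be read because br_netfilter is not loaded, load the module, then make sure IPv4 forwarding is on. A short Go sketch of the same sequence is below; ensureBridgeNetfilter is an illustrative helper name, not a minikube function.

	package sketch

	import (
		"fmt"
		"os/exec"
	)

	// ensureBridgeNetfilter mirrors the fallback logged above: probe the
	// bridge-nf-call-iptables key, load br_netfilter if the probe fails
	// ("cannot stat /proc/sys/net/bridge/..."), then enable IPv4 forwarding.
	func ensureBridgeNetfilter() error {
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			if err := exec.Command("sudo", "modprobe", "br_netfilter"); err != nil && err.Run() != nil {
				return fmt.Errorf("modprobe br_netfilter: %w", err.Run())
			}
		}
		return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
	}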
	I0930 21:07:51.068163   73707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:51.205357   73707 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 21:07:51.305327   73707 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:07:51.305410   73707 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:07:51.311384   73707 start.go:563] Will wait 60s for crictl version
	I0930 21:07:51.311448   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:07:51.315965   73707 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:07:51.369329   73707 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 21:07:51.369417   73707 ssh_runner.go:195] Run: crio --version
	I0930 21:07:51.399897   73707 ssh_runner.go:195] Run: crio --version
	I0930 21:07:51.431075   73707 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 21:07:49.914747   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .Start
	I0930 21:07:49.914948   73900 main.go:141] libmachine: (old-k8s-version-621406) Ensuring networks are active...
	I0930 21:07:49.915796   73900 main.go:141] libmachine: (old-k8s-version-621406) Ensuring network default is active
	I0930 21:07:49.916225   73900 main.go:141] libmachine: (old-k8s-version-621406) Ensuring network mk-old-k8s-version-621406 is active
	I0930 21:07:49.916890   73900 main.go:141] libmachine: (old-k8s-version-621406) Getting domain xml...
	I0930 21:07:49.917688   73900 main.go:141] libmachine: (old-k8s-version-621406) Creating domain...
	I0930 21:07:51.277867   73900 main.go:141] libmachine: (old-k8s-version-621406) Waiting to get IP...
	I0930 21:07:51.279001   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:51.279451   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:51.279552   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:51.279437   74917 retry.go:31] will retry after 307.582619ms: waiting for machine to come up
	I0930 21:07:51.589030   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:51.589414   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:51.589445   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:51.589368   74917 retry.go:31] will retry after 370.683214ms: waiting for machine to come up
	I0930 21:07:51.961914   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:51.962474   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:51.962511   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:51.962415   74917 retry.go:31] will retry after 428.703419ms: waiting for machine to come up
	I0930 21:07:52.393154   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:52.393682   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:52.393750   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:52.393673   74917 retry.go:31] will retry after 514.254023ms: waiting for machine to come up
	I0930 21:07:52.334804   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:07:52.334846   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:07:52.334863   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:52.377601   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:07:52.377632   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:07:52.426784   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:52.473771   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:07:52.473811   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:07:52.926391   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:52.945122   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:07:52.945154   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:07:53.426295   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:53.434429   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:07:53.434464   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:07:53.926642   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:53.931501   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 200:
	ok
	I0930 21:07:53.940069   73375 api_server.go:141] control plane version: v1.31.1
	I0930 21:07:53.940104   73375 api_server.go:131] duration metric: took 5.014262318s to wait for apiserver health ...
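The healthz exchanges above show the pattern: anonymous GETs against https://<apiserver>:8443/healthz return 403 while RBAC is not yet bootstrapped, then 500 while individual poststarthooks are still failing, and finally 200 "ok". A minimal Go sketch of such a polling loop is below; the timeout, poll interval, and the insecure TLS client are illustrative assumptions, not minikube's exact api_server.go implementation.

	package sketch

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForAPIServerHealthz polls the /healthz endpoint until it returns
	// HTTP 200 or the timeout expires. 403 and 500 responses, like the ones
	// logged above, are treated as "not ready yet" and retried.
	func waitForAPIServerHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy within %v", url, timeout)
	}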
	I0930 21:07:53.940115   73375 cni.go:84] Creating CNI manager for ""
	I0930 21:07:53.940123   73375 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:07:53.941879   73375 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 21:07:53.943335   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:07:53.959585   73375 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 21:07:53.996310   73375 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:07:54.010070   73375 system_pods.go:59] 8 kube-system pods found
	I0930 21:07:54.010129   73375 system_pods.go:61] "coredns-7c65d6cfc9-jg8ph" [46ba2867-485a-4b67-af4b-4de2c607d172] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:07:54.010142   73375 system_pods.go:61] "etcd-no-preload-997816" [1def50bb-1f1b-4d25-b797-38d5b782a674] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0930 21:07:54.010157   73375 system_pods.go:61] "kube-apiserver-no-preload-997816" [67313588-adcb-4d3f-ba8a-4e7a1ea5127b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0930 21:07:54.010174   73375 system_pods.go:61] "kube-controller-manager-no-preload-997816" [b471888b-d4e6-4768-a246-f234ffcbf1c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0930 21:07:54.010186   73375 system_pods.go:61] "kube-proxy-klcv8" [133bcd7f-667d-4969-b063-d33e2c8eed0f] Running
	I0930 21:07:54.010200   73375 system_pods.go:61] "kube-scheduler-no-preload-997816" [130a7a05-0889-4562-afc6-bee3ba4970a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0930 21:07:54.010212   73375 system_pods.go:61] "metrics-server-6867b74b74-c2wpn" [2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:07:54.010223   73375 system_pods.go:61] "storage-provisioner" [01617edf-b831-48d3-9002-279b64f6389c] Running
	I0930 21:07:54.010232   73375 system_pods.go:74] duration metric: took 13.897885ms to wait for pod list to return data ...
	I0930 21:07:54.010244   73375 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:07:54.019651   73375 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:07:54.019683   73375 node_conditions.go:123] node cpu capacity is 2
	I0930 21:07:54.019697   73375 node_conditions.go:105] duration metric: took 9.446744ms to run NodePressure ...
	I0930 21:07:54.019719   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:54.314348   73375 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0930 21:07:54.319583   73375 kubeadm.go:739] kubelet initialised
	I0930 21:07:54.319613   73375 kubeadm.go:740] duration metric: took 5.232567ms waiting for restarted kubelet to initialise ...
	I0930 21:07:54.319625   73375 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:07:54.326866   73375 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.333592   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.333628   73375 pod_ready.go:82] duration metric: took 6.72431ms for pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.333640   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.333651   73375 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.340155   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "etcd-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.340194   73375 pod_ready.go:82] duration metric: took 6.533127ms for pod "etcd-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.340208   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "etcd-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.340216   73375 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.346494   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "kube-apiserver-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.346530   73375 pod_ready.go:82] duration metric: took 6.304143ms for pod "kube-apiserver-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.346542   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "kube-apiserver-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.346551   73375 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.403699   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.403731   73375 pod_ready.go:82] duration metric: took 57.168471ms for pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.403743   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.403752   73375 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-klcv8" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.800372   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "kube-proxy-klcv8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.800410   73375 pod_ready.go:82] duration metric: took 396.646883ms for pod "kube-proxy-klcv8" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.800423   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "kube-proxy-klcv8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.800432   73375 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:51.432761   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetIP
	I0930 21:07:51.436278   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:51.436659   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:51.436700   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:51.436931   73707 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0930 21:07:51.441356   73707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:07:51.454358   73707 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-291511 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-291511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:07:51.454484   73707 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 21:07:51.454547   73707 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:07:51.502072   73707 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 21:07:51.502143   73707 ssh_runner.go:195] Run: which lz4
	I0930 21:07:51.506458   73707 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 21:07:51.510723   73707 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 21:07:51.510756   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 21:07:52.792488   73707 crio.go:462] duration metric: took 1.286075452s to copy over tarball
	I0930 21:07:52.792580   73707 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 21:07:55.207282   73707 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.414661305s)
	I0930 21:07:55.207314   73707 crio.go:469] duration metric: took 2.414793514s to extract the tarball
	I0930 21:07:55.207321   73707 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 21:07:55.244001   73707 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:07:55.287097   73707 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 21:07:55.287124   73707 cache_images.go:84] Images are preloaded, skipping loading
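The preload flow above lists images with `sudo crictl images --output json`, and only copies and extracts the preload tarball when the expected tags are missing. A small Go sketch of that decision is below; preloadNeeded and the wantTag parameter are illustrative names, and only the top-level "images"/"repoTags" fields of crictl's JSON are assumed.

	package sketch

	import (
		"encoding/json"
		"os/exec"
		"strings"
	)

	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// preloadNeeded reports whether the expected image tag (for example
	// "registry.k8s.io/kube-apiserver:v1.31.1") is absent from the runtime,
	// in which case the preload tarball would be copied over and extracted.
	func preloadNeeded(wantTag string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var imgs crictlImages
		if err := json.Unmarshal(out, &imgs); err != nil {
			return false, err
		}
		for _, img := range imgs.Images {
			for _, tag := range img.RepoTags {
				if strings.Contains(tag, wantTag) {
					return false, nil // already preloaded
				}
			}
		}
		return true, nil
	}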
	I0930 21:07:55.287133   73707 kubeadm.go:934] updating node { 192.168.50.2 8444 v1.31.1 crio true true} ...
	I0930 21:07:55.287277   73707 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-291511 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-291511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 21:07:55.287384   73707 ssh_runner.go:195] Run: crio config
	I0930 21:07:55.200512   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "kube-scheduler-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:55.200559   73375 pod_ready.go:82] duration metric: took 400.11341ms for pod "kube-scheduler-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:55.200569   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "kube-scheduler-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:55.200577   73375 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:55.601008   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:55.601042   73375 pod_ready.go:82] duration metric: took 400.453601ms for pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:55.601055   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:55.601065   73375 pod_ready.go:39] duration metric: took 1.281429189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
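The repeated "(skipping!)" messages above come from a gate: a pod is not waited on as "Ready" while its node still reports Ready=False. A client-go sketch of that check follows; podReadyOnReadyNode is an illustrative helper, not minikube's pod_ready.go code, and error handling is trimmed.

	package sketch

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podReadyOnReadyNode returns false without waiting if the pod's node is
	// not Ready (the "skipping!" case above); otherwise it reports the pod's
	// own Ready condition.
	func podReadyOnReadyNode(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
				return false, nil // node not Ready: skip, as logged above
			}
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}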
	I0930 21:07:55.601086   73375 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 21:07:55.617767   73375 ops.go:34] apiserver oom_adj: -16
	I0930 21:07:55.617791   73375 kubeadm.go:597] duration metric: took 9.314187459s to restartPrimaryControlPlane
	I0930 21:07:55.617803   73375 kubeadm.go:394] duration metric: took 9.369220314s to StartCluster
	I0930 21:07:55.617824   73375 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:07:55.617913   73375 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:07:55.619455   73375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:07:55.619760   73375 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 21:07:55.619842   73375 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 21:07:55.619959   73375 addons.go:69] Setting storage-provisioner=true in profile "no-preload-997816"
	I0930 21:07:55.619984   73375 addons.go:234] Setting addon storage-provisioner=true in "no-preload-997816"
	I0930 21:07:55.619974   73375 addons.go:69] Setting default-storageclass=true in profile "no-preload-997816"
	I0930 21:07:55.620003   73375 addons.go:69] Setting metrics-server=true in profile "no-preload-997816"
	I0930 21:07:55.620009   73375 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-997816"
	I0930 21:07:55.620020   73375 addons.go:234] Setting addon metrics-server=true in "no-preload-997816"
	W0930 21:07:55.620031   73375 addons.go:243] addon metrics-server should already be in state true
	I0930 21:07:55.620050   73375 config.go:182] Loaded profile config "no-preload-997816": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:07:55.620061   73375 host.go:66] Checking if "no-preload-997816" exists ...
	W0930 21:07:55.619994   73375 addons.go:243] addon storage-provisioner should already be in state true
	I0930 21:07:55.620124   73375 host.go:66] Checking if "no-preload-997816" exists ...
	I0930 21:07:55.620420   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.620459   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.620494   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.620535   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.620593   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.620634   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.621682   73375 out.go:177] * Verifying Kubernetes components...
	I0930 21:07:55.623102   73375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:55.643690   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35581
	I0930 21:07:55.643895   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35545
	I0930 21:07:55.644411   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.644553   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.644968   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.644981   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.645072   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.645078   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.645314   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.645502   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.645732   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:55.645777   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.645812   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.649244   73375 addons.go:234] Setting addon default-storageclass=true in "no-preload-997816"
	W0930 21:07:55.649262   73375 addons.go:243] addon default-storageclass should already be in state true
	I0930 21:07:55.649283   73375 host.go:66] Checking if "no-preload-997816" exists ...
	I0930 21:07:55.649524   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.649548   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.671077   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42635
	I0930 21:07:55.671558   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.672193   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.672212   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.672505   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45163
	I0930 21:07:55.672736   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.672808   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44481
	I0930 21:07:55.673354   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.673396   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.673920   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.673926   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.674528   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.674545   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.674974   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.675624   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.675658   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.676078   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.676095   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.676547   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.676724   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:55.679115   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:55.681410   73375 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:55.688953   73375 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:07:55.688981   73375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 21:07:55.689015   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:55.693338   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.693996   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:55.694023   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.694212   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:55.694344   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:55.694444   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:55.694545   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:55.696037   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46075
	I0930 21:07:55.696535   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.697185   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.697207   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.697567   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.697772   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:55.699797   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:55.700998   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I0930 21:07:55.701429   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.702094   73375 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0930 21:07:52.909622   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:52.910169   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:52.910202   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:52.910132   74917 retry.go:31] will retry after 605.019848ms: waiting for machine to come up
	I0930 21:07:53.517276   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:53.517911   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:53.517943   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:53.517858   74917 retry.go:31] will retry after 856.018614ms: waiting for machine to come up
	I0930 21:07:54.376343   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:54.376838   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:54.376862   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:54.376794   74917 retry.go:31] will retry after 740.749778ms: waiting for machine to come up
	I0930 21:07:55.119090   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:55.119631   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:55.119660   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:55.119583   74917 retry.go:31] will retry after 1.444139076s: waiting for machine to come up
	I0930 21:07:56.566261   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:56.566744   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:56.566771   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:56.566695   74917 retry.go:31] will retry after 1.681362023s: waiting for machine to come up
	I0930 21:07:55.703687   73375 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 21:07:55.703709   73375 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 21:07:55.703736   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:55.703788   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.703816   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.704295   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.704553   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:55.707029   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:55.707365   73375 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 21:07:55.707385   73375 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 21:07:55.707408   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:55.708091   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.708606   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:55.708629   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.709024   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:55.709237   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:55.709388   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:55.709573   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:55.711123   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.711607   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:55.711631   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.711987   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:55.712178   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:55.712318   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:55.712469   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:55.888447   73375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:07:55.912060   73375 node_ready.go:35] waiting up to 6m0s for node "no-preload-997816" to be "Ready" ...
	I0930 21:07:56.010903   73375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 21:07:56.012576   73375 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 21:07:56.012601   73375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0930 21:07:56.038592   73375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:07:56.055481   73375 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 21:07:56.055513   73375 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 21:07:56.131820   73375 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:07:56.131844   73375 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 21:07:56.213605   73375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:07:57.078385   73375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.067447636s)
	I0930 21:07:57.078439   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:57.078451   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:57.078770   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:57.078823   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:57.078836   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:57.078845   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:57.078793   73375 main.go:141] libmachine: (no-preload-997816) DBG | Closing plugin on server side
	I0930 21:07:57.079118   73375 main.go:141] libmachine: (no-preload-997816) DBG | Closing plugin on server side
	I0930 21:07:57.079149   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:57.079157   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:57.672706   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:57.672737   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:57.673053   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:57.673072   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:58.301165   73375 node_ready.go:53] node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:59.072488   73375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.858837368s)
	I0930 21:07:59.072565   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:59.072582   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:59.072921   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:59.072986   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:59.073029   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:59.073038   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:59.073221   73375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.034599023s)
	I0930 21:07:59.073271   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:59.073344   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:59.073383   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:59.073397   73375 addons.go:475] Verifying addon metrics-server=true in "no-preload-997816"
	I0930 21:07:59.073347   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:59.073754   73375 main.go:141] libmachine: (no-preload-997816) DBG | Closing plugin on server side
	I0930 21:07:59.073804   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:59.073819   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:59.073834   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:59.073846   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:59.075323   73375 main.go:141] libmachine: (no-preload-997816) DBG | Closing plugin on server side
	I0930 21:07:59.075329   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:59.075353   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:59.077687   73375 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0930 21:07:59.079278   73375 addons.go:510] duration metric: took 3.459453938s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
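The addon flow above stages each manifest on the node and applies it with the bundled kubectl under sudo, with KUBECONFIG pointing at the node's admin kubeconfig. Below is a stripped-down Go sketch of that apply step; the kubectl binary and manifest paths are taken from the log, while running the command locally (rather than over SSH via ssh_runner) is an illustrative simplification, not minikube's actual code.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Manifests staged on the node, as shown in the log above.
	manifests := []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.31.1/kubectl", args...)
	// The log runs this with KUBECONFIG set to the node's kubeconfig;
	// reproduce that through the environment.
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "kubectl apply failed:", err)
		os.Exit(1)
	}
}

Note how the log batches the four metrics-server manifests into a single kubectl invocation with several -f flags, which is why that apply takes the longest (2.86s) of the three.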
	I0930 21:07:55.346656   73707 cni.go:84] Creating CNI manager for ""
	I0930 21:07:55.346679   73707 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:07:55.346688   73707 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:07:55.346718   73707 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.2 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-291511 NodeName:default-k8s-diff-port-291511 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 21:07:55.346847   73707 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-291511"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 21:07:55.346903   73707 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 21:07:55.356645   73707 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:07:55.356708   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:07:55.366457   73707 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0930 21:07:55.384639   73707 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:07:55.403208   73707 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0930 21:07:55.421878   73707 ssh_runner.go:195] Run: grep 192.168.50.2	control-plane.minikube.internal$ /etc/hosts
	I0930 21:07:55.425803   73707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:07:55.439370   73707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:55.553575   73707 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:07:55.570754   73707 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511 for IP: 192.168.50.2
	I0930 21:07:55.570787   73707 certs.go:194] generating shared ca certs ...
	I0930 21:07:55.570808   73707 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:07:55.571011   73707 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:07:55.571067   73707 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:07:55.571083   73707 certs.go:256] generating profile certs ...
	I0930 21:07:55.571178   73707 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/client.key
	I0930 21:07:55.571270   73707 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/apiserver.key.2e3224d9
	I0930 21:07:55.571326   73707 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/proxy-client.key
	I0930 21:07:55.571464   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:07:55.571510   73707 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:07:55.571522   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:07:55.571587   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:07:55.571627   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:07:55.571655   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:07:55.571719   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:07:55.572367   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:07:55.606278   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:07:55.645629   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:07:55.690514   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:07:55.737445   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0930 21:07:55.773656   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 21:07:55.804015   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:07:55.830210   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 21:07:55.857601   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:07:55.887765   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:07:55.922053   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:07:55.951040   73707 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:07:55.969579   73707 ssh_runner.go:195] Run: openssl version
	I0930 21:07:55.975576   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:07:55.987255   73707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:07:55.993657   73707 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:07:55.993723   73707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:07:56.001878   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:07:56.017528   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:07:56.030398   73707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:56.035552   73707 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:56.035625   73707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:56.043878   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 21:07:56.055384   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:07:56.066808   73707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:07:56.073099   73707 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:07:56.073164   73707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:07:56.081343   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 21:07:56.096669   73707 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:07:56.102635   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:07:56.110805   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:07:56.118533   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:07:56.125800   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:07:56.133985   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:07:56.142109   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
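Each of the openssl x509 -checkend 86400 runs above asks whether the given control-plane certificate will still be valid 86400 seconds (24 hours) from now; a failing check would trigger regeneration instead of reuse. A rough Go equivalent of a single check is sketched below, assuming the certificate has been copied off the node to a local file (the path is hypothetical); it mirrors the openssl semantics rather than minikube's own cert handling.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical local copy of a node certificate.
	data, err := os.ReadFile("apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of `openssl x509 -checkend 86400`: fail if the
	// certificate expires within the next 24 hours.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}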
	I0930 21:07:56.150433   73707 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-291511 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-291511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:07:56.150538   73707 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:07:56.150608   73707 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:07:56.197936   73707 cri.go:89] found id: ""
	I0930 21:07:56.198016   73707 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:07:56.208133   73707 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:07:56.208155   73707 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:07:56.208204   73707 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:07:56.218880   73707 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:07:56.220322   73707 kubeconfig.go:125] found "default-k8s-diff-port-291511" server: "https://192.168.50.2:8444"
	I0930 21:07:56.223557   73707 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:07:56.233844   73707 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.2
	I0930 21:07:56.233876   73707 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:07:56.233889   73707 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:07:56.233970   73707 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:07:56.280042   73707 cri.go:89] found id: ""
	I0930 21:07:56.280129   73707 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:07:56.304291   73707 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:07:56.317987   73707 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:07:56.318012   73707 kubeadm.go:157] found existing configuration files:
	
	I0930 21:07:56.318076   73707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0930 21:07:56.331377   73707 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:07:56.331448   73707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:07:56.342380   73707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0930 21:07:56.354949   73707 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:07:56.355030   73707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:07:56.368385   73707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0930 21:07:56.378798   73707 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:07:56.378883   73707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:07:56.390167   73707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0930 21:07:56.400338   73707 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:07:56.400413   73707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:07:56.410735   73707 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:07:56.426910   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:56.557126   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:57.682738   73707 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.125574645s)
	I0930 21:07:57.682777   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:57.908684   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:57.983925   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:58.088822   73707 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:07:58.088930   73707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:58.589565   73707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:59.089483   73707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:59.110240   73707 api_server.go:72] duration metric: took 1.021416929s to wait for apiserver process to appear ...
	I0930 21:07:59.110279   73707 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:07:59.110328   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:07:59.110843   73707 api_server.go:269] stopped: https://192.168.50.2:8444/healthz: Get "https://192.168.50.2:8444/healthz": dial tcp 192.168.50.2:8444: connect: connection refused
	I0930 21:07:59.611045   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:07:58.250468   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:58.251041   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:58.251062   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:58.250979   74917 retry.go:31] will retry after 2.260492343s: waiting for machine to come up
	I0930 21:08:00.513613   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:00.514129   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:08:00.514194   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:08:00.514117   74917 retry.go:31] will retry after 2.449694064s: waiting for machine to come up
	I0930 21:08:02.200888   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:08:02.200918   73707 api_server.go:103] status: https://192.168.50.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:08:02.200930   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:08:02.240477   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:08:02.240513   73707 api_server.go:103] status: https://192.168.50.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:08:02.611111   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:08:02.615548   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:02.615578   73707 api_server.go:103] status: https://192.168.50.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:03.111216   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:08:03.118078   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:03.118102   73707 api_server.go:103] status: https://192.168.50.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:03.610614   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:08:03.615203   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 200:
	ok
	I0930 21:08:03.621652   73707 api_server.go:141] control plane version: v1.31.1
	I0930 21:08:03.621680   73707 api_server.go:131] duration metric: took 4.511393989s to wait for apiserver health ...
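The healthz wait above tolerates the early 403 (anonymous user) and 500 (post-start hooks such as rbac/bootstrap-roles still pending, marked [-] in the output) responses and simply keeps polling until /healthz returns 200. A minimal sketch of such a poll loop is shown below, assuming the same endpoint and skipping TLS verification since only reachability and the status code matter for this probe; it is illustrative, not the api_server.go implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns
// HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Anonymous probe: we only care about the status code,
			// not the serving certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	// Endpoint taken from the log above.
	if err := waitForHealthz("https://192.168.50.2:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}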
	I0930 21:08:03.621689   73707 cni.go:84] Creating CNI manager for ""
	I0930 21:08:03.621694   73707 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:03.624026   73707 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 21:08:00.416356   73375 node_ready.go:53] node "no-preload-997816" has status "Ready":"False"
	I0930 21:08:02.416469   73375 node_ready.go:53] node "no-preload-997816" has status "Ready":"False"
	I0930 21:08:02.916643   73375 node_ready.go:49] node "no-preload-997816" has status "Ready":"True"
	I0930 21:08:02.916668   73375 node_ready.go:38] duration metric: took 7.004576501s for node "no-preload-997816" to be "Ready" ...
	I0930 21:08:02.916679   73375 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:02.922833   73375 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:02.928873   73375 pod_ready.go:93] pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:02.928895   73375 pod_ready.go:82] duration metric: took 6.034388ms for pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:02.928904   73375 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.934668   73375 pod_ready.go:103] pod "etcd-no-preload-997816" in "kube-system" namespace has status "Ready":"False"
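The node_ready.go and pod_ready.go waits above repeatedly fetch the node and pod objects and check their Ready conditions until they report True. A rough client-go sketch of the node half of that wait follows; it assumes client-go is available as a dependency and uses a hypothetical kubeconfig path, and is not minikube's own helper.

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; substitute your own.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Poll the node object until Ready or the deadline passes,
	// mirroring the 6m0s node_ready wait in the log above.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "no-preload-997816", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for node to become Ready")
	os.Exit(1)
}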
	I0930 21:08:03.625416   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:08:03.640241   73707 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 21:08:03.664231   73707 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:08:03.679372   73707 system_pods.go:59] 8 kube-system pods found
	I0930 21:08:03.679409   73707 system_pods.go:61] "coredns-7c65d6cfc9-hdjjq" [5672cd58-4d3f-409e-b279-f4027fe09aea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:08:03.679425   73707 system_pods.go:61] "etcd-default-k8s-diff-port-291511" [228b61a2-a110-4029-96e5-950e44f5290f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0930 21:08:03.679435   73707 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-291511" [a6991ee1-6c61-49b5-adb5-fb6175386bfe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0930 21:08:03.679447   73707 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-291511" [4ba3f2a2-ac38-4483-bbd0-f21d934d97d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0930 21:08:03.679456   73707 system_pods.go:61] "kube-proxy-kwp22" [87e5295f-3aaa-4222-a61a-942354f79f9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0930 21:08:03.679466   73707 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-291511" [b03fc09c-ddee-4593-9be5-8117892932f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0930 21:08:03.679472   73707 system_pods.go:61] "metrics-server-6867b74b74-txb2j" [6f0ec8d2-5528-4f70-807c-42cbabae23bb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:08:03.679482   73707 system_pods.go:61] "storage-provisioner" [32053345-1ff9-45b1-aa70-e746926b305d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0930 21:08:03.679490   73707 system_pods.go:74] duration metric: took 15.234407ms to wait for pod list to return data ...
	I0930 21:08:03.679509   73707 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:08:03.698332   73707 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:08:03.698363   73707 node_conditions.go:123] node cpu capacity is 2
	I0930 21:08:03.698374   73707 node_conditions.go:105] duration metric: took 18.857709ms to run NodePressure ...
	I0930 21:08:03.698394   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:03.968643   73707 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0930 21:08:03.974075   73707 kubeadm.go:739] kubelet initialised
	I0930 21:08:03.974098   73707 kubeadm.go:740] duration metric: took 5.424573ms waiting for restarted kubelet to initialise ...
	I0930 21:08:03.974105   73707 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:03.982157   73707 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:03.989298   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:03.989329   73707 pod_ready.go:82] duration metric: took 7.140381ms for pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:03.989338   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:03.989345   73707 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:03.995739   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:03.995773   73707 pod_ready.go:82] duration metric: took 6.418854ms for pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:03.995787   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:03.995797   73707 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.002071   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.002093   73707 pod_ready.go:82] duration metric: took 6.287919ms for pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:04.002104   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.002110   73707 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.071732   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.071760   73707 pod_ready.go:82] duration metric: took 69.643681ms for pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:04.071771   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.071777   73707 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kwp22" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.468580   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "kube-proxy-kwp22" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.468605   73707 pod_ready.go:82] duration metric: took 396.820558ms for pod "kube-proxy-kwp22" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:04.468614   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "kube-proxy-kwp22" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.468620   73707 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.868042   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.868067   73707 pod_ready.go:82] duration metric: took 399.438278ms for pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:04.868078   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.868085   73707 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:05.267893   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:05.267925   73707 pod_ready.go:82] duration metric: took 399.831615ms for pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:05.267937   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:05.267945   73707 pod_ready.go:39] duration metric: took 1.293832472s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
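	The per-pod waits above all return quickly because pod_ready.go skips them while the node itself still reports "Ready":"False". Once the node turns Ready, the same conditions can be checked by hand with kubectl; a minimal sketch, assuming the kubeconfig context is named after the profile and the control-plane pods carry the component= labels listed in the log line above:

	    # context name and component= labels assumed from the log; adjust if they differ
	    kubectl --context default-k8s-diff-port-291511 get nodes
	    kubectl --context default-k8s-diff-port-291511 -n kube-system \
	      wait --for=condition=Ready pod -l component=kube-apiserver --timeout=4m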
	I0930 21:08:05.267960   73707 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 21:08:05.282162   73707 ops.go:34] apiserver oom_adj: -16
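	The oom_adj probe above reads the older /proc/<pid>/oom_adj interface for the apiserver; a strongly negative value such as -16 tells the kernel's OOM killer to avoid that process. The same check can be repeated from the host, assuming `minikube ssh -p <profile>` reaches this guest:

	    # profile name taken from the log above
	    minikube ssh -p default-k8s-diff-port-291511 'cat /proc/$(pgrep kube-apiserver)/oom_adj'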
	I0930 21:08:05.282188   73707 kubeadm.go:597] duration metric: took 9.074027172s to restartPrimaryControlPlane
	I0930 21:08:05.282199   73707 kubeadm.go:394] duration metric: took 9.131777336s to StartCluster
	I0930 21:08:05.282216   73707 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:05.282338   73707 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:08:05.283862   73707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:05.284135   73707 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 21:08:05.284201   73707 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 21:08:05.284287   73707 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-291511"
	I0930 21:08:05.284305   73707 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-291511"
	W0930 21:08:05.284313   73707 addons.go:243] addon storage-provisioner should already be in state true
	I0930 21:08:05.284340   73707 host.go:66] Checking if "default-k8s-diff-port-291511" exists ...
	I0930 21:08:05.284339   73707 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-291511"
	I0930 21:08:05.284385   73707 config.go:182] Loaded profile config "default-k8s-diff-port-291511": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:08:05.284399   73707 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-291511"
	I0930 21:08:05.284359   73707 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-291511"
	I0930 21:08:05.284432   73707 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-291511"
	W0930 21:08:05.284448   73707 addons.go:243] addon metrics-server should already be in state true
	I0930 21:08:05.284486   73707 host.go:66] Checking if "default-k8s-diff-port-291511" exists ...
	I0930 21:08:05.284739   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.284760   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.284784   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.284794   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.284890   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.284931   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.286020   73707 out.go:177] * Verifying Kubernetes components...
	I0930 21:08:05.287268   73707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:05.302045   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39289
	I0930 21:08:05.302587   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.303190   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.303219   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.303631   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.304213   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.304258   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.304484   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41129
	I0930 21:08:05.304676   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39211
	I0930 21:08:05.304884   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.305175   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.305353   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.305377   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.305642   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.305660   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.305724   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.305933   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:08:05.306016   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.306580   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.306623   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.309757   73707 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-291511"
	W0930 21:08:05.309778   73707 addons.go:243] addon default-storageclass should already be in state true
	I0930 21:08:05.309805   73707 host.go:66] Checking if "default-k8s-diff-port-291511" exists ...
	I0930 21:08:05.310163   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.310208   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.320335   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43189
	I0930 21:08:05.320928   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.321496   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.321520   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.321922   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.322082   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:08:05.324111   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:08:05.325867   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42389
	I0930 21:08:05.325879   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37397
	I0930 21:08:05.326252   73707 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0930 21:08:05.326337   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.326280   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.326847   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.326862   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.326982   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.326999   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.327239   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.327313   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.327467   73707 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 21:08:05.327485   73707 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 21:08:05.327507   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:08:05.327597   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:08:05.327778   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.327806   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.329862   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:08:05.331454   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.331654   73707 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:05.331959   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:08:05.331996   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.332184   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:08:05.332355   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:08:05.332577   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:08:05.332699   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:08:05.332956   73707 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:08:05.332972   73707 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 21:08:05.332990   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:08:05.336234   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.336634   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:08:05.336661   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.336885   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:08:05.337134   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:08:05.337271   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:08:05.337447   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:08:05.345334   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34613
	I0930 21:08:05.345908   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.346393   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.346424   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.346749   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.346887   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:08:05.348836   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:08:05.349033   73707 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 21:08:05.349048   73707 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 21:08:05.349067   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:08:05.351835   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.352222   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:08:05.352277   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.352401   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:08:05.352644   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:08:05.352786   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:08:05.352886   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:08:05.475274   73707 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:08:05.496035   73707 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-291511" to be "Ready" ...
	I0930 21:08:05.564715   73707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:08:05.574981   73707 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 21:08:05.575006   73707 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0930 21:08:05.613799   73707 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 21:08:05.613822   73707 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 21:08:05.618503   73707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 21:08:05.689563   73707 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:08:05.689588   73707 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 21:08:05.769327   73707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:08:06.831657   73707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.266911261s)
	I0930 21:08:06.831717   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.831727   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.831735   73707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.213199657s)
	I0930 21:08:06.831780   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.831797   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.832054   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.832071   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.832079   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.832086   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.832146   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.832164   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.832182   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.832195   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.832203   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.832291   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.832305   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.832316   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.832477   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.832483   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.832512   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.838509   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.838534   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.838786   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.838801   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.838806   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.956747   73707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.187373699s)
	I0930 21:08:06.956803   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.956819   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.957097   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.958516   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.958531   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.958542   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.958548   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.958842   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.958863   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.958873   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.958875   73707 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-291511"
	I0930 21:08:06.961299   73707 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
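	With the three addons reported as enabled, their state can be cross-checked against the live cluster. A quick verification sketch, assuming the kubeconfig context matches the profile name and that the addon uses the standard metrics-server objects (a `metrics-server` Deployment registering the `v1beta1.metrics.k8s.io` APIService):

	    # object names below are the usual metrics-server defaults, not taken from this log
	    minikube addons list -p default-k8s-diff-port-291511
	    kubectl --context default-k8s-diff-port-291511 -n kube-system get deploy metrics-server
	    kubectl --context default-k8s-diff-port-291511 get apiservice v1beta1.metrics.k8s.io
	    # once metrics start flowing, node usage should be reported
	    kubectl --context default-k8s-diff-port-291511 top nodes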
	I0930 21:08:02.965767   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:02.966135   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:08:02.966157   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:08:02.966086   74917 retry.go:31] will retry after 2.951226221s: waiting for machine to come up
	I0930 21:08:05.919389   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:05.919894   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:08:05.919937   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:08:05.919827   74917 retry.go:31] will retry after 2.747969391s: waiting for machine to come up
	I0930 21:08:09.916514   73256 start.go:364] duration metric: took 52.875691449s to acquireMachinesLock for "embed-certs-256103"
	I0930 21:08:09.916583   73256 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:08:09.916592   73256 fix.go:54] fixHost starting: 
	I0930 21:08:09.916972   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:09.917000   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:09.935009   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42043
	I0930 21:08:09.935493   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:09.936052   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:08:09.936073   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:09.936443   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:09.936617   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:09.936762   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:08:09.938608   73256 fix.go:112] recreateIfNeeded on embed-certs-256103: state=Stopped err=<nil>
	I0930 21:08:09.938639   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	W0930 21:08:09.938811   73256 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:08:09.940789   73256 out.go:177] * Restarting existing kvm2 VM for "embed-certs-256103" ...
	I0930 21:08:05.936626   73375 pod_ready.go:93] pod "etcd-no-preload-997816" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:05.936660   73375 pod_ready.go:82] duration metric: took 3.007747597s for pod "etcd-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:05.936674   73375 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:05.942154   73375 pod_ready.go:93] pod "kube-apiserver-no-preload-997816" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:05.942196   73375 pod_ready.go:82] duration metric: took 5.502965ms for pod "kube-apiserver-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:05.942209   73375 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.949366   73375 pod_ready.go:93] pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:06.949402   73375 pod_ready.go:82] duration metric: took 1.007183809s for pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.949413   73375 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-klcv8" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.955060   73375 pod_ready.go:93] pod "kube-proxy-klcv8" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:06.955088   73375 pod_ready.go:82] duration metric: took 5.667172ms for pod "kube-proxy-klcv8" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.955100   73375 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.961684   73375 pod_ready.go:93] pod "kube-scheduler-no-preload-997816" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:06.961706   73375 pod_ready.go:82] duration metric: took 6.597856ms for pod "kube-scheduler-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.961718   73375 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:08.967525   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:06.962594   73707 addons.go:510] duration metric: took 1.678396512s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0930 21:08:07.499805   73707 node_ready.go:53] node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:09.500771   73707 node_ready.go:53] node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:08.671179   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.671686   73900 main.go:141] libmachine: (old-k8s-version-621406) Found IP for machine: 192.168.72.159
	I0930 21:08:08.671711   73900 main.go:141] libmachine: (old-k8s-version-621406) Reserving static IP address...
	I0930 21:08:08.671729   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has current primary IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.672178   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "old-k8s-version-621406", mac: "52:54:00:9b:e3:ab", ip: "192.168.72.159"} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.672220   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | skip adding static IP to network mk-old-k8s-version-621406 - found existing host DHCP lease matching {name: "old-k8s-version-621406", mac: "52:54:00:9b:e3:ab", ip: "192.168.72.159"}
	I0930 21:08:08.672231   73900 main.go:141] libmachine: (old-k8s-version-621406) Reserved static IP address: 192.168.72.159
	I0930 21:08:08.672246   73900 main.go:141] libmachine: (old-k8s-version-621406) Waiting for SSH to be available...
	I0930 21:08:08.672254   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | Getting to WaitForSSH function...
	I0930 21:08:08.674566   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.674931   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.674969   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.675128   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | Using SSH client type: external
	I0930 21:08:08.675170   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa (-rw-------)
	I0930 21:08:08.675212   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:08:08.675229   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | About to run SSH command:
	I0930 21:08:08.675244   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | exit 0
	I0930 21:08:08.799368   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | SSH cmd err, output: <nil>: 
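	WaitForSSH succeeded, so the guest is reachable with exactly the options logged above. For manual debugging the same connection can be opened directly with the key path and IP printed in the log, or through minikube's own wrapper:

	    # raw connection, mirroring the logged ssh invocation
	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	      -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa \
	      docker@192.168.72.159
	    # equivalent via the minikube CLI
	    minikube ssh -p old-k8s-version-621406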
	I0930 21:08:08.799751   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetConfigRaw
	I0930 21:08:08.800421   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:08.803151   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.803596   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.803620   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.803922   73900 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/config.json ...
	I0930 21:08:08.804195   73900 machine.go:93] provisionDockerMachine start ...
	I0930 21:08:08.804246   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:08.804502   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:08.806822   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.807240   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.807284   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.807521   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:08.807735   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.807890   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.808077   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:08.808239   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:08.808480   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:08.808493   73900 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:08:08.912058   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:08:08.912135   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 21:08:08.912407   73900 buildroot.go:166] provisioning hostname "old-k8s-version-621406"
	I0930 21:08:08.912432   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 21:08:08.912662   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:08.915366   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.915722   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.915750   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.915892   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:08.916107   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.916330   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.916492   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:08.916673   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:08.916932   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:08.916957   73900 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-621406 && echo "old-k8s-version-621406" | sudo tee /etc/hostname
	I0930 21:08:09.034260   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-621406
	
	I0930 21:08:09.034296   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.037149   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.037509   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.037538   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.037799   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.037986   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.038163   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.038327   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.038473   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:09.038695   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:09.038714   73900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-621406' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-621406/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-621406' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:08:09.152190   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
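	The hostname command above exited cleanly (err <nil>), so the new hostname and the 127.0.1.1 mapping in /etc/hosts should now be in place. A quick check, assuming `minikube ssh -p <profile>` targets this guest:

	    minikube ssh -p old-k8s-version-621406 'hostname; grep old-k8s-version-621406 /etc/hosts'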
	I0930 21:08:09.152228   73900 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:08:09.152255   73900 buildroot.go:174] setting up certificates
	I0930 21:08:09.152275   73900 provision.go:84] configureAuth start
	I0930 21:08:09.152288   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 21:08:09.152577   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:09.155203   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.155589   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.155620   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.155783   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.157964   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.158362   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.158392   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.158520   73900 provision.go:143] copyHostCerts
	I0930 21:08:09.158592   73900 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:08:09.158605   73900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:08:09.158704   73900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:08:09.158851   73900 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:08:09.158864   73900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:08:09.158895   73900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:08:09.158970   73900 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:08:09.158977   73900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:08:09.158996   73900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:08:09.159054   73900 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-621406 san=[127.0.0.1 192.168.72.159 localhost minikube old-k8s-version-621406]
	I0930 21:08:09.301267   73900 provision.go:177] copyRemoteCerts
	I0930 21:08:09.301322   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:08:09.301349   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.304344   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.304766   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.304796   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.304998   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.305187   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.305321   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.305439   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:09.390851   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0930 21:08:09.415712   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 21:08:09.439567   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:08:09.463427   73900 provision.go:87] duration metric: took 311.139024ms to configureAuth
	I0930 21:08:09.463459   73900 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:08:09.463713   73900 config.go:182] Loaded profile config "old-k8s-version-621406": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0930 21:08:09.463809   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.466757   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.467129   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.467160   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.467326   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.467513   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.467694   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.467843   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.468004   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:09.468175   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:09.468190   73900 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:08:09.684657   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:08:09.684684   73900 machine.go:96] duration metric: took 880.473418ms to provisionDockerMachine
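	provisionDockerMachine finished after writing CRIO_MINIKUBE_OPTIONS with the --insecure-registry flag and restarting cri-o. To confirm the file landed and the runtime came back up, something like the following can be run against the guest (profile name assumed from the log):

	    minikube ssh -p old-k8s-version-621406 'cat /etc/sysconfig/crio.minikube; systemctl is-active crio'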
	I0930 21:08:09.684698   73900 start.go:293] postStartSetup for "old-k8s-version-621406" (driver="kvm2")
	I0930 21:08:09.684709   73900 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:08:09.684730   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.685075   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:08:09.685114   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.688051   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.688517   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.688542   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.688725   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.688928   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.689070   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.689265   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:09.770572   73900 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:08:09.775149   73900 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:08:09.775181   73900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:08:09.775268   73900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:08:09.775364   73900 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:08:09.775453   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:08:09.784753   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:09.807989   73900 start.go:296] duration metric: took 123.276522ms for postStartSetup
	I0930 21:08:09.808033   73900 fix.go:56] duration metric: took 19.918922935s for fixHost
	I0930 21:08:09.808053   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.811242   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.811656   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.811692   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.811852   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.812064   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.812239   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.812380   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.812522   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:09.812704   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:09.812719   73900 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:08:09.916349   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730489.889323893
	
	I0930 21:08:09.916376   73900 fix.go:216] guest clock: 1727730489.889323893
	I0930 21:08:09.916384   73900 fix.go:229] Guest: 2024-09-30 21:08:09.889323893 +0000 UTC Remote: 2024-09-30 21:08:09.808037625 +0000 UTC m=+267.093327666 (delta=81.286268ms)
	I0930 21:08:09.916403   73900 fix.go:200] guest clock delta is within tolerance: 81.286268ms
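	The clock check above compares the guest's `date +%s.%N` output against the host's wall clock; here the delta is roughly 81 ms, which fix.go accepts as within tolerance. The comparison can be reproduced by hand, assuming `minikube ssh -p <profile>` reaches this VM:

	    # guest time first, then host time
	    minikube ssh -p old-k8s-version-621406 'date +%s.%N'; date +%s.%N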
	I0930 21:08:09.916408   73900 start.go:83] releasing machines lock for "old-k8s-version-621406", held for 20.027328296s
	I0930 21:08:09.916440   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.916766   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:09.919729   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.920070   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.920105   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.920238   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.920831   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.921050   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.921182   73900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:08:09.921235   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.921328   73900 ssh_runner.go:195] Run: cat /version.json
	I0930 21:08:09.921351   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.924258   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.924650   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.924695   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.924722   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.924805   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.924986   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.925170   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.925176   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.925206   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.925341   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:09.925405   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.925534   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.925698   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.925829   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:10.043500   73900 ssh_runner.go:195] Run: systemctl --version
	I0930 21:08:10.051029   73900 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:08:10.199844   73900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:08:10.206433   73900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:08:10.206519   73900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:08:10.223346   73900 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 21:08:10.223375   73900 start.go:495] detecting cgroup driver to use...
	I0930 21:08:10.223449   73900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:08:10.241056   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:08:10.257197   73900 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:08:10.257261   73900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:08:10.271847   73900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:08:10.287465   73900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:08:10.419248   73900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:08:10.583440   73900 docker.go:233] disabling docker service ...
	I0930 21:08:10.583518   73900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:08:10.599561   73900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:08:10.613321   73900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:08:10.763071   73900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:08:10.891222   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:08:10.906985   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:08:10.927838   73900 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0930 21:08:10.927911   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.940002   73900 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:08:10.940084   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.953143   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.965922   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.985782   73900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:08:11.001825   73900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:08:11.015777   73900 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:08:11.015835   73900 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:08:11.034821   73900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 21:08:11.049855   73900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:11.203755   73900 ssh_runner.go:195] Run: sudo systemctl restart crio
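The preceding run lines reconfigure the guest's container runtime before restarting CRI-O: crictl is pointed at the CRI-O socket, the pause image and cgroup driver are pinned in 02-crio.conf, br_netfilter is loaded, and IPv4 forwarding is enabled. A minimal Go sketch of the same sequence, assuming root on the guest and the exact paths shown in the log (illustrative only, not minikube's implementation):

// Hypothetical local rendition of the CRI-O preparation steps logged above.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", name, args, err)
	}
}

func main() {
	// Point crictl at the CRI-O socket.
	if err := os.WriteFile("/etc/crictl.yaml",
		[]byte("runtime-endpoint: unix:///var/run/crio/crio.sock\n"), 0644); err != nil {
		log.Fatal(err)
	}
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	// Pin the pause image and switch CRI-O to the cgroupfs cgroup driver.
	run("sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|`, conf)
	run("sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf)
	// Load br_netfilter and enable IPv4 forwarding for bridged pod traffic.
	run("modprobe", "br_netfilter")
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		log.Fatal(err)
	}
	run("systemctl", "restart", "crio")
	fmt.Println("CRI-O reconfigured")
}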
	I0930 21:08:11.312949   73900 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:08:11.313060   73900 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:08:11.319280   73900 start.go:563] Will wait 60s for crictl version
	I0930 21:08:11.319355   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:11.323826   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:08:11.374934   73900 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 21:08:11.375023   73900 ssh_runner.go:195] Run: crio --version
	I0930 21:08:11.415466   73900 ssh_runner.go:195] Run: crio --version
	I0930 21:08:11.449622   73900 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0930 21:08:11.450773   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:11.454019   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:11.454504   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:11.454534   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:11.454807   73900 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0930 21:08:11.459034   73900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
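The command above updates /etc/hosts idempotently: it filters out any existing host.minikube.internal line, appends the gateway mapping, and copies the result back into place. A small Go sketch of the same idea, using the 192.168.72.1 address from the log and writing the file back directly rather than staging through /tmp (illustrative only):

// Hypothetical idempotent /etc/hosts update mirroring the logged shell one-liner.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.72.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	// Keep every line except an existing host.minikube.internal mapping.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	// The log stages the result in /tmp and copies it with sudo; writing
	// directly keeps the sketch short.
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}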
	I0930 21:08:11.473162   73900 kubeadm.go:883] updating cluster {Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:08:11.473294   73900 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 21:08:11.473367   73900 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:11.518200   73900 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0930 21:08:11.518275   73900 ssh_runner.go:195] Run: which lz4
	I0930 21:08:11.522442   73900 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 21:08:11.526704   73900 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 21:08:11.526752   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0930 21:08:09.942356   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Start
	I0930 21:08:09.942591   73256 main.go:141] libmachine: (embed-certs-256103) Ensuring networks are active...
	I0930 21:08:09.943619   73256 main.go:141] libmachine: (embed-certs-256103) Ensuring network default is active
	I0930 21:08:09.944145   73256 main.go:141] libmachine: (embed-certs-256103) Ensuring network mk-embed-certs-256103 is active
	I0930 21:08:09.944659   73256 main.go:141] libmachine: (embed-certs-256103) Getting domain xml...
	I0930 21:08:09.945567   73256 main.go:141] libmachine: (embed-certs-256103) Creating domain...
	I0930 21:08:11.376075   73256 main.go:141] libmachine: (embed-certs-256103) Waiting to get IP...
	I0930 21:08:11.377049   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:11.377588   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:11.377687   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:11.377579   75193 retry.go:31] will retry after 219.057799ms: waiting for machine to come up
	I0930 21:08:11.598062   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:11.598531   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:11.598568   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:11.598491   75193 retry.go:31] will retry after 288.150233ms: waiting for machine to come up
	I0930 21:08:11.887894   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:11.888719   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:11.888749   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:11.888678   75193 retry.go:31] will retry after 422.70153ms: waiting for machine to come up
	I0930 21:08:12.313280   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:12.313761   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:12.313790   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:12.313728   75193 retry.go:31] will retry after 403.507934ms: waiting for machine to come up
	I0930 21:08:12.719305   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:12.719705   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:12.719740   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:12.719683   75193 retry.go:31] will retry after 616.261723ms: waiting for machine to come up
	I0930 21:08:13.337223   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:13.337759   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:13.337809   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:13.337727   75193 retry.go:31] will retry after 715.496762ms: waiting for machine to come up
	I0930 21:08:14.054455   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:14.055118   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:14.055155   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:14.055041   75193 retry.go:31] will retry after 1.12512788s: waiting for machine to come up
	I0930 21:08:10.970621   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:13.468795   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:11.501276   73707 node_ready.go:53] node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:12.501748   73707 node_ready.go:49] node "default-k8s-diff-port-291511" has status "Ready":"True"
	I0930 21:08:12.501784   73707 node_ready.go:38] duration metric: took 7.005705696s for node "default-k8s-diff-port-291511" to be "Ready" ...
	I0930 21:08:12.501797   73707 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:12.510080   73707 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:12.518496   73707 pod_ready.go:93] pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:12.518522   73707 pod_ready.go:82] duration metric: took 8.414761ms for pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:12.518535   73707 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:14.526615   73707 pod_ready.go:93] pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:14.526653   73707 pod_ready.go:82] duration metric: took 2.00810944s for pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:14.526666   73707 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:14.533536   73707 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:14.533574   73707 pod_ready.go:82] duration metric: took 6.898769ms for pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:14.533596   73707 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.043003   73707 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:15.043034   73707 pod_ready.go:82] duration metric: took 509.429109ms for pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.043048   73707 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kwp22" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.049645   73707 pod_ready.go:93] pod "kube-proxy-kwp22" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:15.049676   73707 pod_ready.go:82] duration metric: took 6.618441ms for pod "kube-proxy-kwp22" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.049688   73707 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
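The pod_ready lines above poll each system-critical pod until its Ready condition reports True, with a per-pod timeout of 6m0s. A hypothetical Go sketch of one such wait, shelling out to kubectl with a jsonpath query instead of using minikube's own pod_ready helpers; the context and pod names are copied from the log:

// Hypothetical poll for a pod's Ready condition via kubectl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podReady(context, namespace, pod string) bool {
	out, err := exec.Command("kubectl", "--context", context,
		"-n", namespace, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if podReady("default-k8s-diff-port-291511", "kube-system",
			"kube-scheduler-default-k8s-diff-port-291511") {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}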
	I0930 21:08:13.134916   73900 crio.go:462] duration metric: took 1.612498859s to copy over tarball
	I0930 21:08:13.135038   73900 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 21:08:16.170053   73900 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.034985922s)
	I0930 21:08:16.170080   73900 crio.go:469] duration metric: took 3.035125251s to extract the tarball
	I0930 21:08:16.170088   73900 ssh_runner.go:146] rm: /preloaded.tar.lz4
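The lines above show the preload path: the tarball is absent on the guest, so it is copied over and unpacked into /var with security.capability xattrs preserved, then removed to free space. A hypothetical Go sketch of that flow, with the scp step replaced by a local copy and the tarball name taken from the log:

// Hypothetical preload check, copy, and extraction.
package main

import (
	"io"
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); os.IsNotExist(err) {
		src, err := os.Open("preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4")
		if err != nil {
			log.Fatal(err)
		}
		dst, err := os.Create(tarball)
		if err != nil {
			log.Fatal(err)
		}
		if _, err := io.Copy(dst, src); err != nil {
			log.Fatal(err)
		}
		src.Close()
		dst.Close()
	}
	// Preserve security.capability xattrs so binaries keep their file capabilities.
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
	os.Remove(tarball) // free disk space once extracted
}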
	I0930 21:08:16.213559   73900 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:16.249853   73900 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0930 21:08:16.249876   73900 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0930 21:08:16.249943   73900 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:16.249970   73900 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.249987   73900 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.250030   73900 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0930 21:08:16.250031   73900 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.250047   73900 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.250049   73900 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.250083   73900 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.251750   73900 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0930 21:08:16.251771   73900 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.251768   73900 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:16.251750   73900 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.251832   73900 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.251854   73900 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.251891   73900 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.252031   73900 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.456847   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.468006   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0930 21:08:16.516253   73900 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0930 21:08:16.516294   73900 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.516336   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.524699   73900 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0930 21:08:16.524743   73900 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0930 21:08:16.524787   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.525738   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.529669   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 21:08:16.561946   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.569090   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.570589   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.571007   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.581971   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.587609   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.630323   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 21:08:16.711058   73900 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0930 21:08:16.711124   73900 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.711190   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.749473   73900 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0930 21:08:16.749521   73900 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.749585   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.769974   73900 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0930 21:08:16.770016   73900 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.770050   73900 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0930 21:08:16.770075   73900 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0930 21:08:16.770087   73900 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.770104   73900 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.770142   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.770160   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.770064   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.770144   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.788241   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.788292   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 21:08:16.788294   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.788339   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.847727   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0930 21:08:16.847798   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.847894   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.938964   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.939000   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.939053   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0930 21:08:16.939090   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.965556   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.965620   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 21:08:17.020497   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:17.074893   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:17.074950   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:17.090437   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 21:08:17.090489   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0930 21:08:17.090437   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:17.174117   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0930 21:08:17.174183   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0930 21:08:17.185553   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0930 21:08:17.185619   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0930 21:08:17.506064   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:17.650598   73900 cache_images.go:92] duration metric: took 1.400704992s to LoadCachedImages
	W0930 21:08:17.650695   73900 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
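The cache_images lines above follow a simple pattern per required image: a podman inspect decides whether the image already exists in the runtime, and if not, the stale tag is removed with crictl and the image is queued for transfer from the local cache. A rough Go sketch of that decision loop; the cache directory and file-naming scheme here are placeholders, not minikube's actual layout:

// Hypothetical "which cached images need loading" check.
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

func imagePresent(image string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	return err == nil && strings.TrimSpace(string(out)) != ""
}

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"registry.k8s.io/kube-proxy:v1.20.0",
		"registry.k8s.io/pause:3.2",
		"registry.k8s.io/etcd:3.4.13-0",
		"registry.k8s.io/coredns:1.7.0",
	}
	cacheDir := "/home/jenkins/.minikube/cache/images/amd64" // placeholder path
	for _, img := range required {
		if imagePresent(img) {
			continue
		}
		// Not in the runtime: drop any stale tag, then load from the cache file.
		exec.Command("sudo", "crictl", "rmi", img).Run()
		file := filepath.Join(cacheDir,
			strings.ReplaceAll(strings.ReplaceAll(img, "/", "_"), ":", "_")) // placeholder naming
		fmt.Printf("%s needs transfer from %s\n", img, file)
	}
}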
	I0930 21:08:17.650710   73900 kubeadm.go:934] updating node { 192.168.72.159 8443 v1.20.0 crio true true} ...
	I0930 21:08:17.650834   73900 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-621406 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 21:08:17.650922   73900 ssh_runner.go:195] Run: crio config
	I0930 21:08:17.710096   73900 cni.go:84] Creating CNI manager for ""
	I0930 21:08:17.710124   73900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:17.710139   73900 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:08:17.710164   73900 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.159 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-621406 NodeName:old-k8s-version-621406 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0930 21:08:17.710349   73900 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-621406"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 21:08:17.710425   73900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0930 21:08:17.721028   73900 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:08:17.721111   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:08:17.731462   73900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0930 21:08:17.749715   73900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
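The two scp-from-memory lines above install the rendered kubelet systemd unit and its kubeadm drop-in; the unit text itself is printed a few lines earlier. A Go sketch of rendering that drop-in with text/template, reproducing only the flags visible in the log (the field names and template layout are simplified stand-ins, not minikube's templates):

// Hypothetical rendering of the kubelet unit drop-in shown in the log.
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	params := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.20.0", "old-k8s-version-621406", "192.168.72.159"}
	// Render to stdout; the rendered bytes would be copied to
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the guest.
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}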
	I0930 21:08:15.182186   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:15.182722   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:15.182751   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:15.182673   75193 retry.go:31] will retry after 1.385891549s: waiting for machine to come up
	I0930 21:08:16.569882   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:16.570365   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:16.570386   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:16.570309   75193 retry.go:31] will retry after 1.417579481s: waiting for machine to come up
	I0930 21:08:17.989161   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:17.989876   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:17.989905   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:17.989818   75193 retry.go:31] will retry after 1.981651916s: waiting for machine to come up
	I0930 21:08:15.471221   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:17.969140   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:19.969688   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:15.300639   73707 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:15.300666   73707 pod_ready.go:82] duration metric: took 250.968899ms for pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.300679   73707 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:17.349449   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:19.809813   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:17.767565   73900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0930 21:08:17.786411   73900 ssh_runner.go:195] Run: grep 192.168.72.159	control-plane.minikube.internal$ /etc/hosts
	I0930 21:08:17.790338   73900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:08:17.803957   73900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:17.948898   73900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:08:17.969102   73900 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406 for IP: 192.168.72.159
	I0930 21:08:17.969133   73900 certs.go:194] generating shared ca certs ...
	I0930 21:08:17.969150   73900 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:17.969338   73900 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:08:17.969387   73900 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:08:17.969400   73900 certs.go:256] generating profile certs ...
	I0930 21:08:17.969543   73900 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/client.key
	I0930 21:08:17.969621   73900 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.key.f3dc5056
	I0930 21:08:17.969674   73900 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.key
	I0930 21:08:17.969833   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:08:17.969875   73900 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:08:17.969886   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:08:17.969926   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:08:17.969961   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:08:17.969999   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:08:17.970055   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:17.970794   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:08:18.007954   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:08:18.041538   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:08:18.077886   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:08:18.118644   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0930 21:08:18.151418   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 21:08:18.199572   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:08:18.235795   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 21:08:18.272729   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:08:18.298727   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:08:18.324074   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:08:18.351209   73900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:08:18.372245   73900 ssh_runner.go:195] Run: openssl version
	I0930 21:08:18.380047   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:08:18.395332   73900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:08:18.401407   73900 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:08:18.401479   73900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:08:18.407744   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 21:08:18.422801   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:08:18.437946   73900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:08:18.443864   73900 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:08:18.443938   73900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:08:18.451554   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:08:18.466856   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:08:18.479324   73900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:18.484321   73900 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:18.484383   73900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:18.490341   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 21:08:18.503117   73900 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:08:18.507986   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:08:18.514974   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:08:18.522140   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:08:18.529366   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:08:18.536056   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:08:18.542787   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
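The openssl -checkend 86400 runs above ask whether each control-plane certificate expires within the next 24 hours. The same check can be done in Go with crypto/x509; a minimal sketch using one of the paths from the log:

// Hypothetical 24-hour expiry check for a PEM certificate.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h; regeneration needed")
	} else {
		fmt.Println("certificate valid beyond 24h")
	}
}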
	I0930 21:08:18.550311   73900 kubeadm.go:392] StartCluster: {Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:08:18.550431   73900 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:08:18.550498   73900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:18.593041   73900 cri.go:89] found id: ""
	I0930 21:08:18.593116   73900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:08:18.603410   73900 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:08:18.603432   73900 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:08:18.603479   73900 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:08:18.614635   73900 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:08:18.615758   73900 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-621406" does not appear in /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:08:18.616488   73900 kubeconfig.go:62] /home/jenkins/minikube-integration/19736-7672/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-621406" cluster setting kubeconfig missing "old-k8s-version-621406" context setting]
	I0930 21:08:18.617394   73900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:18.644144   73900 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:08:18.655764   73900 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.159
	I0930 21:08:18.655806   73900 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:08:18.655819   73900 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:08:18.655877   73900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:18.699283   73900 cri.go:89] found id: ""
	I0930 21:08:18.699376   73900 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:08:18.715248   73900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:08:18.724905   73900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:08:18.724945   73900 kubeadm.go:157] found existing configuration files:
	
	I0930 21:08:18.724990   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:08:18.735611   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:08:18.735682   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:08:18.745604   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:08:18.755199   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:08:18.755261   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:08:18.765450   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:08:18.775187   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:08:18.775268   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:08:18.788080   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:08:18.800668   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:08:18.800727   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:08:18.814084   73900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:08:18.823785   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:18.961698   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.495418   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.713653   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.812667   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.921314   73900 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:08:19.921414   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:20.422349   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:20.922222   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:21.422364   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:21.921493   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:22.421640   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
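The repeated pgrep runs above are the apiserver wait loop: roughly every 500ms a kube-apiserver process started for this cluster is looked for, up to a timeout. A small Go sketch of that poll; the five-minute deadline here is an assumption, only the pgrep pattern and interval come from the log:

// Hypothetical wait-for-apiserver-process poll.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(5 * time.Minute) // assumed deadline for the sketch
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && strings.TrimSpace(string(out)) != "" {
			fmt.Println("kube-apiserver pid:", strings.TrimSpace(string(out)))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the apiserver process")
}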
	I0930 21:08:19.973478   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:19.973916   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:19.973946   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:19.973868   75193 retry.go:31] will retry after 2.33355272s: waiting for machine to come up
	I0930 21:08:22.308828   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:22.309471   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:22.309498   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:22.309367   75193 retry.go:31] will retry after 3.484225075s: waiting for machine to come up
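The "will retry after …" lines come from the kvm2 driver waiting for the new VM to obtain a DHCP lease, sleeping a jittered, slowly growing interval between lookups. A rough sketch of that shape, where lookupIP is a placeholder for the real libvirt lease query:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a placeholder; the real driver asks libvirt for the domain's
    // current DHCP lease.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    func main() {
        base := 2 * time.Second
        for attempt := 1; attempt <= 10; attempt++ {
            if ip, err := lookupIP(); err == nil {
                fmt.Println("machine is up at", ip)
                return
            }
            // Jitter around a growing base, which is why the logged intervals
            // (2.3s, 3.5s, 3.0s, ...) are irregular.
            sleep := base/2 + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            base += 500 * time.Millisecond
        }
        fmt.Println("gave up waiting for machine to come up")
    }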
	I0930 21:08:21.970954   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:24.467778   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:22.310464   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:24.806425   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:22.922418   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:23.421851   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:23.921502   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:24.422346   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:24.922000   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:25.422290   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:25.922213   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:26.422100   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:26.922239   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:27.421729   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:25.795265   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:25.795755   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:25.795781   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:25.795707   75193 retry.go:31] will retry after 2.983975719s: waiting for machine to come up
	I0930 21:08:28.780767   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.781201   73256 main.go:141] libmachine: (embed-certs-256103) Found IP for machine: 192.168.39.90
	I0930 21:08:28.781223   73256 main.go:141] libmachine: (embed-certs-256103) Reserving static IP address...
	I0930 21:08:28.781237   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has current primary IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.781655   73256 main.go:141] libmachine: (embed-certs-256103) Reserved static IP address: 192.168.39.90
	I0930 21:08:28.781679   73256 main.go:141] libmachine: (embed-certs-256103) Waiting for SSH to be available...
	I0930 21:08:28.781697   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "embed-certs-256103", mac: "52:54:00:7a:01:01", ip: "192.168.39.90"} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:28.781724   73256 main.go:141] libmachine: (embed-certs-256103) DBG | skip adding static IP to network mk-embed-certs-256103 - found existing host DHCP lease matching {name: "embed-certs-256103", mac: "52:54:00:7a:01:01", ip: "192.168.39.90"}
	I0930 21:08:28.781735   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Getting to WaitForSSH function...
	I0930 21:08:28.784310   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.784703   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:28.784737   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.784861   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Using SSH client type: external
	I0930 21:08:28.784899   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa (-rw-------)
	I0930 21:08:28.784933   73256 main.go:141] libmachine: (embed-certs-256103) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:08:28.784953   73256 main.go:141] libmachine: (embed-certs-256103) DBG | About to run SSH command:
	I0930 21:08:28.784970   73256 main.go:141] libmachine: (embed-certs-256103) DBG | exit 0
	I0930 21:08:28.911300   73256 main.go:141] libmachine: (embed-certs-256103) DBG | SSH cmd err, output: <nil>: 
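Before provisioning can start, the driver proves SSH is reachable by running a bare "exit 0" through an external ssh client with host-key checking disabled, using the option list logged above. A compact sketch of that probe; the key path and address are placeholders:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        key := "/path/to/machines/embed-certs-256103/id_rsa" // placeholder
        target := "docker@192.168.39.90"                     // placeholder

        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", key,
            "-p", "22",
            target,
            "exit 0", // no-op command whose exit status proves SSH works
        }
        if err := exec.Command("ssh", args...).Run(); err != nil {
            fmt.Println("SSH not ready yet:", err)
            return
        }
        fmt.Println("SSH is available")
    }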
	I0930 21:08:28.911716   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetConfigRaw
	I0930 21:08:28.912335   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetIP
	I0930 21:08:28.914861   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.915283   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:28.915304   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.915620   73256 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/config.json ...
	I0930 21:08:28.915874   73256 machine.go:93] provisionDockerMachine start ...
	I0930 21:08:28.915902   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:28.916117   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:28.918357   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.918661   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:28.918696   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.918813   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:28.918992   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:28.919143   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:28.919296   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:28.919472   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:28.919680   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:28.919691   73256 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:08:29.032537   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:08:29.032579   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:08:29.032830   73256 buildroot.go:166] provisioning hostname "embed-certs-256103"
	I0930 21:08:29.032857   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:08:29.033039   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.035951   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.036403   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.036435   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.036598   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:29.036795   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.037002   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.037175   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:29.037339   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:29.037538   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:29.037556   73256 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-256103 && echo "embed-certs-256103" | sudo tee /etc/hostname
	I0930 21:08:29.163250   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-256103
	
	I0930 21:08:29.163278   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.165937   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.166260   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.166296   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.166529   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:29.166722   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.166913   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.167055   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:29.167223   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:29.167454   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:29.167477   73256 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-256103' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-256103/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-256103' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:08:29.288197   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:08:29.288236   73256 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:08:29.288292   73256 buildroot.go:174] setting up certificates
	I0930 21:08:29.288307   73256 provision.go:84] configureAuth start
	I0930 21:08:29.288322   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:08:29.288589   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetIP
	I0930 21:08:29.291598   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.292026   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.292059   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.292247   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.294760   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.295144   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.295169   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.295421   73256 provision.go:143] copyHostCerts
	I0930 21:08:29.295497   73256 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:08:29.295510   73256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:08:29.295614   73256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:08:29.295743   73256 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:08:29.295754   73256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:08:29.295782   73256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:08:29.295855   73256 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:08:29.295864   73256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:08:29.295886   73256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:08:29.295948   73256 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.embed-certs-256103 san=[127.0.0.1 192.168.39.90 embed-certs-256103 localhost minikube]
	I0930 21:08:26.468058   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:28.468510   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:26.808360   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:29.307500   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:29.742069   73256 provision.go:177] copyRemoteCerts
	I0930 21:08:29.742134   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:08:29.742156   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.745411   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.745805   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.745835   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.746023   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:29.746215   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.746351   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:29.746557   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:08:29.833888   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:08:29.857756   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0930 21:08:29.883087   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 21:08:29.905795   73256 provision.go:87] duration metric: took 617.470984ms to configureAuth
	I0930 21:08:29.905831   73256 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:08:29.906028   73256 config.go:182] Loaded profile config "embed-certs-256103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:08:29.906098   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.908911   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.909307   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.909335   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.909524   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:29.909711   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.909876   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.909996   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:29.910157   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:29.910429   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:29.910454   73256 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:08:30.140191   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:08:30.140217   73256 machine.go:96] duration metric: took 1.224326296s to provisionDockerMachine
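provisionDockerMachine ends by writing a CRIO_MINIKUBE_OPTIONS drop-in to /etc/sysconfig/crio.minikube (declaring the 10.96.0.0/12 service CIDR as an insecure registry) and restarting CRI-O. A sketch of that step done directly from Go; the path and option value simply mirror the logged command and are not an API:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        content := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"

        if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
            fmt.Println("mkdir:", err)
            return
        }
        if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0o644); err != nil {
            fmt.Println("write:", err)
            return
        }
        // Restart CRI-O so the new options take effect.
        if out, err := exec.Command("sudo", "systemctl", "restart", "crio").CombinedOutput(); err != nil {
            fmt.Printf("restart crio: %v\n%s", err, out)
        }
    }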
	I0930 21:08:30.140227   73256 start.go:293] postStartSetup for "embed-certs-256103" (driver="kvm2")
	I0930 21:08:30.140237   73256 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:08:30.140252   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.140624   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:08:30.140648   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:30.143906   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.144300   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.144339   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.144498   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:30.144695   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.144846   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:30.145052   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:08:30.230069   73256 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:08:30.233845   73256 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:08:30.233868   73256 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:08:30.233948   73256 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:08:30.234050   73256 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:08:30.234168   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:08:30.243066   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:30.266197   73256 start.go:296] duration metric: took 125.955153ms for postStartSetup
	I0930 21:08:30.266234   73256 fix.go:56] duration metric: took 20.349643145s for fixHost
	I0930 21:08:30.266252   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:30.269025   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.269405   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.269433   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.269576   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:30.269784   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.269910   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.270042   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:30.270176   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:30.270380   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:30.270392   73256 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:08:30.380023   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730510.354607586
	
	I0930 21:08:30.380057   73256 fix.go:216] guest clock: 1727730510.354607586
	I0930 21:08:30.380067   73256 fix.go:229] Guest: 2024-09-30 21:08:30.354607586 +0000 UTC Remote: 2024-09-30 21:08:30.266237543 +0000 UTC m=+355.815232104 (delta=88.370043ms)
	I0930 21:08:30.380085   73256 fix.go:200] guest clock delta is within tolerance: 88.370043ms
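fix.go reads `date +%s.%N` on the guest, compares it with the host-side timestamp, and only resyncs the clock when the delta exceeds a tolerance (the 88ms delta above is accepted). A sketch of that comparison, assuming a one-second tolerance:

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func main() {
        guestRaw := "1727730510.354607586" // `date +%s.%N` output captured above

        secs, err := strconv.ParseFloat(guestRaw, 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))

        delta := guest.Sub(time.Now())
        if delta < 0 {
            delta = -delta
        }

        const tolerance = time.Second // assumed threshold
        if delta <= tolerance {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v too large, would resync the clock\n", delta)
        }
    }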
	I0930 21:08:30.380091   73256 start.go:83] releasing machines lock for "embed-certs-256103", held for 20.463544222s
	I0930 21:08:30.380113   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.380429   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetIP
	I0930 21:08:30.382992   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.383349   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.383369   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.383518   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.384071   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.384245   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.384310   73256 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:08:30.384374   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:30.384442   73256 ssh_runner.go:195] Run: cat /version.json
	I0930 21:08:30.384464   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:30.387098   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.387342   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.387413   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.387435   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.387633   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:30.387762   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.387783   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.387828   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.387931   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:30.388003   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:30.388058   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.388159   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:30.388208   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:08:30.388347   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:08:30.510981   73256 ssh_runner.go:195] Run: systemctl --version
	I0930 21:08:30.517215   73256 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:08:30.663491   73256 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:08:30.669568   73256 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:08:30.669652   73256 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:08:30.686640   73256 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
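The find/-exec mv run renames every bridge or podman CNI config under /etc/cni/net.d to *.mk_disabled so it cannot conflict with the CNI configuration minikube writes next. The same walk expressed in Go; the directory and name patterns match the logged command, the rest is illustrative:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        dir := "/etc/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("read dir:", err)
            return
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            // Same file classes the find expression targets: *bridge* and *podman*.
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    fmt.Println("rename:", err)
                    continue
                }
                fmt.Println("disabled", src)
            }
        }
    }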
	I0930 21:08:30.686663   73256 start.go:495] detecting cgroup driver to use...
	I0930 21:08:30.686737   73256 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:08:30.703718   73256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:08:30.718743   73256 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:08:30.718807   73256 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:08:30.733695   73256 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:08:30.748690   73256 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:08:30.878084   73256 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:08:31.040955   73256 docker.go:233] disabling docker service ...
	I0930 21:08:31.041030   73256 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:08:31.055212   73256 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:08:31.067968   73256 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:08:31.185043   73256 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:08:31.300909   73256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:08:31.315167   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:08:31.333483   73256 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 21:08:31.333537   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.343599   73256 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:08:31.343694   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.353739   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.363993   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.375183   73256 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:08:31.385478   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.395632   73256 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.412995   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
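The run of sed commands rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, force cgroup_manager to "cgroupfs", set conmon_cgroup to "pod", and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A Go approximation of the first two substitutions on an in-memory sample of the file (the sample contents and regexes are illustrative, not the exact sed expressions):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := "[crio.image]\n" +
            "pause_image = \"registry.k8s.io/pause:3.9\"\n" +
            "[crio.runtime]\n" +
            "cgroup_manager = \"systemd\"\n"

        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

        fmt.Print(conf)
    }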
	I0930 21:08:31.423277   73256 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:08:31.433183   73256 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:08:31.433253   73256 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:08:31.446796   73256 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
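When `sysctl net.bridge.bridge-nf-call-iptables` fails because the bridge netfilter module is not loaded yet, the fallback is to modprobe br_netfilter and then enable IPv4 forwarding, as the two commands above do. A small sketch of that fallback using plain exec calls:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) error {
        return exec.Command(name, args...).Run()
    }

    func main() {
        // The sysctl node only exists once br_netfilter is loaded.
        if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
            fmt.Println("bridge-nf sysctl missing, loading br_netfilter")
            if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
                fmt.Println("modprobe failed:", err)
            }
        }
        // Enable IPv4 forwarding the same way the logged command does.
        if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
            fmt.Println("enabling ip_forward failed:", err)
        }
    }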
	I0930 21:08:31.456912   73256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:31.571729   73256 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 21:08:31.663944   73256 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:08:31.664019   73256 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:08:31.669128   73256 start.go:563] Will wait 60s for crictl version
	I0930 21:08:31.669191   73256 ssh_runner.go:195] Run: which crictl
	I0930 21:08:31.672922   73256 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:08:31.709488   73256 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 21:08:31.709596   73256 ssh_runner.go:195] Run: crio --version
	I0930 21:08:31.738743   73256 ssh_runner.go:195] Run: crio --version
	I0930 21:08:31.771638   73256 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 21:08:27.922374   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:28.421993   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:28.921870   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:29.421786   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:29.921804   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:30.421482   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:30.921969   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:31.422241   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:31.922148   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:32.421504   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:31.773186   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetIP
	I0930 21:08:31.776392   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:31.776770   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:31.776810   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:31.777016   73256 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 21:08:31.781212   73256 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:08:31.793839   73256 kubeadm.go:883] updating cluster {Name:embed-certs-256103 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-256103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:08:31.793957   73256 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 21:08:31.794015   73256 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:31.834036   73256 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 21:08:31.834094   73256 ssh_runner.go:195] Run: which lz4
	I0930 21:08:31.837877   73256 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 21:08:31.842038   73256 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 21:08:31.842073   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 21:08:33.150975   73256 crio.go:462] duration metric: took 1.313131374s to copy over tarball
	I0930 21:08:33.151080   73256 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
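This stat/scp/tar sequence is the image preload path: when /preloaded.tar.lz4 is not already on the guest it is copied over (about 388 MB here) and unpacked into /var with lz4, preserving security.capability xattrs on the extracted files. A condensed sketch of the check-then-extract step (the copy itself is omitted):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4"

        if _, err := os.Stat(tarball); err != nil {
            // This is where minikube scp's the cached tarball onto the guest.
            fmt.Println("preload tarball missing, would copy it first:", err)
            return
        }
        // Same flags as the logged command: keep security xattrs, decompress
        // with lz4, unpack into /var.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("extract failed: %v\n%s", err, out)
            return
        }
        fmt.Println("preload extracted into /var")
    }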
	I0930 21:08:30.469523   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:32.469562   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:34.969818   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:31.307560   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:33.308130   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:32.921516   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:33.421576   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:33.922082   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:34.421599   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:34.922178   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:35.422199   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:35.922061   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:36.421860   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:36.921513   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:37.422162   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:35.294750   73256 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.143629494s)
	I0930 21:08:35.294785   73256 crio.go:469] duration metric: took 2.143777794s to extract the tarball
	I0930 21:08:35.294794   73256 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 21:08:35.340151   73256 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:35.385329   73256 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 21:08:35.385359   73256 cache_images.go:84] Images are preloaded, skipping loading
	I0930 21:08:35.385366   73256 kubeadm.go:934] updating node { 192.168.39.90 8443 v1.31.1 crio true true} ...
	I0930 21:08:35.385463   73256 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-256103 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-256103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 21:08:35.385536   73256 ssh_runner.go:195] Run: crio config
	I0930 21:08:35.433043   73256 cni.go:84] Creating CNI manager for ""
	I0930 21:08:35.433072   73256 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:35.433084   73256 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:08:35.433113   73256 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-256103 NodeName:embed-certs-256103 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 21:08:35.433277   73256 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-256103"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 21:08:35.433348   73256 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 21:08:35.443627   73256 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:08:35.443713   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:08:35.453095   73256 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0930 21:08:35.469517   73256 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:08:35.486869   73256 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0930 21:08:35.504871   73256 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I0930 21:08:35.508507   73256 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:08:35.521994   73256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:35.641971   73256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:08:35.657660   73256 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103 for IP: 192.168.39.90
	I0930 21:08:35.657686   73256 certs.go:194] generating shared ca certs ...
	I0930 21:08:35.657705   73256 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:35.657878   73256 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:08:35.657941   73256 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:08:35.657954   73256 certs.go:256] generating profile certs ...
	I0930 21:08:35.658095   73256 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/client.key
	I0930 21:08:35.658177   73256 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/apiserver.key.52e83f0c
	I0930 21:08:35.658230   73256 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/proxy-client.key
	I0930 21:08:35.658391   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:08:35.658431   73256 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:08:35.658443   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:08:35.658476   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:08:35.658509   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:08:35.658539   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:08:35.658586   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:35.659279   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:08:35.695254   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:08:35.718948   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:08:35.742442   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:08:35.765859   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0930 21:08:35.792019   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 21:08:35.822081   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:08:35.845840   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 21:08:35.871635   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:08:35.896069   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:08:35.921595   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:08:35.946620   73256 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:08:35.963340   73256 ssh_runner.go:195] Run: openssl version
	I0930 21:08:35.970540   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:08:35.982269   73256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:08:35.987494   73256 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:08:35.987646   73256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:08:35.994312   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:08:36.006173   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:08:36.017605   73256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:36.022126   73256 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:36.022190   73256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:36.027806   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 21:08:36.038388   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:08:36.048818   73256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:08:36.053230   73256 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:08:36.053296   73256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:08:36.058713   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
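The symlink sequence above is the standard OpenSSL trust-store layout: each CA certificate is hashed with openssl x509 -hash and exposed under /etc/ssl/certs/<hash>.0 so OpenSSL can find it by subject hash. A minimal sketch of the same technique, using commands taken from the log (the certificate path is one of those shown above):

    # Install a CA certificate into the OpenSSL trust store via a hash-named symlink.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints the subject hash, e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # OpenSSL looks CAs up as <hash>.0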
	I0930 21:08:36.070806   73256 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:08:36.075521   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:08:36.081310   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:08:36.086935   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:08:36.092990   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:08:36.098783   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:08:36.104354   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
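Each of the -checkend probes above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit status is what triggers regeneration. A stand-alone version of the same check, on one of the paths from the log:

    # Exit status 0: still valid 24h from now. Exit status 1: will have expired by then.
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "certificate valid for at least another 24h"
    else
      echo "certificate expires within 24h - would be regenerated"
    fi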
	I0930 21:08:36.110289   73256 kubeadm.go:392] StartCluster: {Name:embed-certs-256103 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-256103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:08:36.110411   73256 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:08:36.110495   73256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:36.153770   73256 cri.go:89] found id: ""
	I0930 21:08:36.153852   73256 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:08:36.164301   73256 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:08:36.164320   73256 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:08:36.164363   73256 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:08:36.173860   73256 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:08:36.174950   73256 kubeconfig.go:125] found "embed-certs-256103" server: "https://192.168.39.90:8443"
	I0930 21:08:36.177584   73256 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:08:36.186946   73256 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.90
	I0930 21:08:36.186984   73256 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:08:36.186998   73256 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:08:36.187045   73256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:36.223259   73256 cri.go:89] found id: ""
	I0930 21:08:36.223328   73256 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:08:36.239321   73256 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:08:36.248508   73256 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:08:36.248528   73256 kubeadm.go:157] found existing configuration files:
	
	I0930 21:08:36.248571   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:08:36.257483   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:08:36.257537   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:08:36.266792   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:08:36.275626   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:08:36.275697   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:08:36.285000   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:08:36.293923   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:08:36.293977   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:08:36.303990   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:08:36.313104   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:08:36.313158   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:08:36.322423   73256 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:08:36.332005   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:36.457666   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:37.309316   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:37.533114   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:37.602999   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
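Rather than a full kubeadm init, the restart path re-runs individual kubeadm init phases against the generated config. The sequence in the log corresponds roughly to the sketch below (binary and config paths taken from the log; the exact wrapper minikube uses differs slightly):

    # Re-run only the init phases needed after a control-plane restart.
    export PATH=/var/lib/minikube/binaries/v1.31.1:$PATH
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase certs all         --config "$CFG"
    sudo kubeadm init phase kubeconfig all    --config "$CFG"
    sudo kubeadm init phase kubelet-start     --config "$CFG"
    sudo kubeadm init phase control-plane all --config "$CFG"
    sudo kubeadm init phase etcd local        --config "$CFG"
    # "kubeadm init phase addon all" is run later in the log, once the apiserver is healthy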
	I0930 21:08:37.692027   73256 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:08:37.692117   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.192813   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.692777   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.192862   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:37.469941   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:39.506753   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:35.311295   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:37.806923   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:39.808338   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:37.921497   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.422360   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.922305   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.422480   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.922279   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.422089   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.922021   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:41.421727   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:41.921519   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:42.422193   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.692193   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.192178   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.209649   73256 api_server.go:72] duration metric: took 2.517618424s to wait for apiserver process to appear ...
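The repeated pgrep runs above are a simple poll loop: wait until a kube-apiserver process whose command line matches the minikube pattern exists, checking roughly every 500ms as the timestamps suggest. A hedged equivalent by hand, reusing the pattern from the log:

    # Poll until the kube-apiserver process appears.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done
    echo "kube-apiserver process is up"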
	I0930 21:08:40.209676   73256 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:08:40.209699   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:43.034828   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:08:43.034857   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:08:43.034871   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:43.080073   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:08:43.080107   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:08:43.210448   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:43.217768   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:43.217799   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:43.710066   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:43.722379   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:43.722428   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:44.209939   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:44.219468   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:44.219500   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:44.709767   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:44.714130   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 200:
	ok
	I0930 21:08:44.720194   73256 api_server.go:141] control plane version: v1.31.1
	I0930 21:08:44.720221   73256 api_server.go:131] duration metric: took 4.510539442s to wait for apiserver health ...
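The healthz probes above show the expected progression: 403 while the anonymous request is not yet authorized, 500 while individual post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still completing, then 200. The same endpoint can be watched by hand; a hedged sketch using the address from the log (-k skips TLS verification, and with no client certificate the request is anonymous, so early 403s are normal):

    # Poll the apiserver health endpoint until the body is exactly "ok".
    until curl -ks https://192.168.39.90:8443/healthz | grep -qx ok; do
      sleep 1
    done
    echo "apiserver healthz returned ok"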
	I0930 21:08:44.720230   73256 cni.go:84] Creating CNI manager for ""
	I0930 21:08:44.720236   73256 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:44.721740   73256 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 21:08:41.968377   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:44.469477   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:41.808473   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:43.808575   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:42.922495   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:43.422250   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:43.922413   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:44.421962   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:44.921682   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:45.422144   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:45.922206   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:46.422020   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:46.921960   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:47.422296   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:44.722947   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:08:44.733426   73256 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
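The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration chosen above. Its exact contents are not reproduced in this log; purely as an illustration, a typical bridge + host-local + portmap chain looks roughly like the following (all field values are assumptions, not necessarily what minikube writes):

    # Illustrative bridge CNI conflist; values are assumed, not taken from the log.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF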
	I0930 21:08:44.750426   73256 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:08:44.761259   73256 system_pods.go:59] 8 kube-system pods found
	I0930 21:08:44.761303   73256 system_pods.go:61] "coredns-7c65d6cfc9-h6cl2" [548e3751-edc9-4232-87c2-2e64769ba332] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:08:44.761314   73256 system_pods.go:61] "etcd-embed-certs-256103" [6eef2e96-d4bf-4dd6-bd5c-bfb05c306182] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0930 21:08:44.761326   73256 system_pods.go:61] "kube-apiserver-embed-certs-256103" [81c02a52-aca7-4b9c-b7b1-680d27f48d40] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0930 21:08:44.761335   73256 system_pods.go:61] "kube-controller-manager-embed-certs-256103" [752f0966-7718-4523-8ba6-affd41bc956e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0930 21:08:44.761346   73256 system_pods.go:61] "kube-proxy-fqvg2" [284a63a1-d624-4bf3-8509-14ff0845f3a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0930 21:08:44.761354   73256 system_pods.go:61] "kube-scheduler-embed-certs-256103" [6158a51d-82ae-490a-96d3-c0e61a3485f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0930 21:08:44.761363   73256 system_pods.go:61] "metrics-server-6867b74b74-hkp9m" [8774a772-bb72-4419-96fd-50ca5f48a5b6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:08:44.761374   73256 system_pods.go:61] "storage-provisioner" [9649e71d-cd21-4846-bf66-1c5b469500ba] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0930 21:08:44.761385   73256 system_pods.go:74] duration metric: took 10.935916ms to wait for pod list to return data ...
	I0930 21:08:44.761397   73256 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:08:44.771745   73256 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:08:44.771777   73256 node_conditions.go:123] node cpu capacity is 2
	I0930 21:08:44.771789   73256 node_conditions.go:105] duration metric: took 10.386814ms to run NodePressure ...
	I0930 21:08:44.771810   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:45.064019   73256 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0930 21:08:45.070479   73256 kubeadm.go:739] kubelet initialised
	I0930 21:08:45.070508   73256 kubeadm.go:740] duration metric: took 6.461143ms waiting for restarted kubelet to initialise ...
	I0930 21:08:45.070517   73256 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:45.074627   73256 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-h6cl2" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.080873   73256 pod_ready.go:98] node "embed-certs-256103" hosting pod "coredns-7c65d6cfc9-h6cl2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.080897   73256 pod_ready.go:82] duration metric: took 6.244301ms for pod "coredns-7c65d6cfc9-h6cl2" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:45.080906   73256 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-256103" hosting pod "coredns-7c65d6cfc9-h6cl2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.080912   73256 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.086787   73256 pod_ready.go:98] node "embed-certs-256103" hosting pod "etcd-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.086818   73256 pod_ready.go:82] duration metric: took 5.898265ms for pod "etcd-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:45.086829   73256 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-256103" hosting pod "etcd-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.086837   73256 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.092860   73256 pod_ready.go:98] node "embed-certs-256103" hosting pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.092892   73256 pod_ready.go:82] duration metric: took 6.044766ms for pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:45.092904   73256 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-256103" hosting pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.092912   73256 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.154246   73256 pod_ready.go:98] node "embed-certs-256103" hosting pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.154271   73256 pod_ready.go:82] duration metric: took 61.348653ms for pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:45.154281   73256 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-256103" hosting pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.154287   73256 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fqvg2" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.554606   73256 pod_ready.go:93] pod "kube-proxy-fqvg2" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:45.554630   73256 pod_ready.go:82] duration metric: took 400.335084ms for pod "kube-proxy-fqvg2" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.554639   73256 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:47.559998   73256 pod_ready.go:103] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
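The pod_ready polling above repeatedly inspects each system pod's Ready condition (and skips pods whose node is itself NotReady). The same condition can be checked directly with kubectl; a hedged equivalent for one pod named in the log, assuming the kubeconfig context is named after the profile:

    # Block until the scheduler pod reports Ready=True, or give up after 4 minutes.
    kubectl --context embed-certs-256103 -n kube-system \
      wait --for=condition=Ready pod/kube-scheduler-embed-certs-256103 --timeout=4m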
	I0930 21:08:46.968101   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:48.968649   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:46.307946   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:48.806624   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:47.921903   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:48.422535   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:48.921484   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:49.421909   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:49.922117   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:50.421606   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:50.921728   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:51.421600   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:51.921716   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:52.421873   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:49.561176   73256 pod_ready.go:103] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:51.562227   73256 pod_ready.go:103] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:54.060692   73256 pod_ready.go:103] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:51.467375   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:53.473247   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:50.807821   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:53.307163   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:52.922106   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:53.421968   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:53.921496   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:54.421866   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:54.921995   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:55.421476   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:55.922106   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:56.421660   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:56.922489   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:57.422291   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:54.562740   73256 pod_ready.go:93] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:54.562765   73256 pod_ready.go:82] duration metric: took 9.008120147s for pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:54.562775   73256 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:56.570517   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:59.070065   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:55.969724   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:58.467585   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:55.807669   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:58.305837   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:57.921737   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:58.421968   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:58.922007   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:59.422173   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:59.921803   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:00.421596   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:00.922123   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:01.422186   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:01.921898   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:02.421894   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:01.070940   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:03.569053   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:00.469160   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:02.968692   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:00.308195   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:02.807474   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:04.808710   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:02.922329   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:03.421922   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:03.922360   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:04.421875   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:04.922544   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:05.421939   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:05.921693   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:06.422056   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:06.921627   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:07.422125   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:06.070166   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:08.568945   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:05.467300   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:07.469409   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:09.968053   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:07.306237   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:09.306644   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:07.921687   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:08.421694   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:08.922234   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:09.421817   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:09.921704   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:10.422030   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:10.921597   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:11.421700   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:11.922301   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:12.421567   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:10.569444   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:13.069582   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:11.970180   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:14.469440   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:11.307287   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:13.307376   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:12.922171   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:13.422423   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:13.921941   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:14.422494   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:14.922454   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:15.421776   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:15.922567   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:16.421713   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:16.922449   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:17.421644   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:15.569398   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:18.069177   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:16.968663   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:19.468171   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:15.808689   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:18.307774   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:17.922098   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:18.421993   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:18.922084   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:19.421717   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:19.922095   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:19.922178   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:19.962975   73900 cri.go:89] found id: ""
	I0930 21:09:19.963002   73900 logs.go:276] 0 containers: []
	W0930 21:09:19.963014   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:19.963020   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:19.963073   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:19.999741   73900 cri.go:89] found id: ""
	I0930 21:09:19.999769   73900 logs.go:276] 0 containers: []
	W0930 21:09:19.999777   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:19.999782   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:19.999840   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:20.035818   73900 cri.go:89] found id: ""
	I0930 21:09:20.035844   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.035856   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:20.035863   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:20.035924   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:20.072005   73900 cri.go:89] found id: ""
	I0930 21:09:20.072032   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.072042   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:20.072048   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:20.072110   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:20.108229   73900 cri.go:89] found id: ""
	I0930 21:09:20.108258   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.108314   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:20.108325   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:20.108383   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:20.141331   73900 cri.go:89] found id: ""
	I0930 21:09:20.141388   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.141398   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:20.141406   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:20.141466   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:20.175133   73900 cri.go:89] found id: ""
	I0930 21:09:20.175161   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.175169   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:20.175175   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:20.175223   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:20.210529   73900 cri.go:89] found id: ""
	I0930 21:09:20.210566   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.210578   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:20.210594   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:20.210608   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:20.261055   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:20.261095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:20.274212   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:20.274239   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:20.406215   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:20.406246   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:20.406282   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:20.481758   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:20.481794   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
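With no control-plane containers found, the tool falls back to collecting host-level diagnostics. Run by hand, the gathering step above amounts to roughly the following (commands taken from the log, quoting lightly normalized):

    sudo journalctl -u kubelet -n 400                                         # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings/errors
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig                               # fails while the apiserver is down
    sudo journalctl -u crio -n 400                                            # CRI-O logs
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a          # container status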
	I0930 21:09:20.069672   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:22.569421   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:21.468616   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:23.468820   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:20.309317   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:22.807149   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:24.807293   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:23.019687   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:23.033394   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:23.033450   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:23.078558   73900 cri.go:89] found id: ""
	I0930 21:09:23.078592   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.078604   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:23.078611   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:23.078673   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:23.117833   73900 cri.go:89] found id: ""
	I0930 21:09:23.117860   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.117868   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:23.117875   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:23.117931   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:23.157299   73900 cri.go:89] found id: ""
	I0930 21:09:23.157337   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.157359   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:23.157367   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:23.157438   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:23.196545   73900 cri.go:89] found id: ""
	I0930 21:09:23.196570   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.196579   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:23.196586   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:23.196644   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:23.229359   73900 cri.go:89] found id: ""
	I0930 21:09:23.229390   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.229401   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:23.229409   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:23.229471   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:23.264847   73900 cri.go:89] found id: ""
	I0930 21:09:23.264881   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.264893   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:23.264900   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:23.264962   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:23.298657   73900 cri.go:89] found id: ""
	I0930 21:09:23.298687   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.298695   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:23.298701   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:23.298750   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:23.333787   73900 cri.go:89] found id: ""
	I0930 21:09:23.333816   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.333826   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:23.333836   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:23.333851   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:23.386311   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:23.386347   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:23.400096   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:23.400129   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:23.481724   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:23.481748   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:23.481780   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:23.561080   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:23.561119   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:26.122460   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:26.136409   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:26.136495   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:26.170785   73900 cri.go:89] found id: ""
	I0930 21:09:26.170818   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.170832   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:26.170866   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:26.170945   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:26.205211   73900 cri.go:89] found id: ""
	I0930 21:09:26.205265   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.205275   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:26.205281   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:26.205335   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:26.239242   73900 cri.go:89] found id: ""
	I0930 21:09:26.239276   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.239285   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:26.239291   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:26.239337   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:26.272908   73900 cri.go:89] found id: ""
	I0930 21:09:26.272932   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.272940   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:26.272946   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:26.272993   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:26.311599   73900 cri.go:89] found id: ""
	I0930 21:09:26.311625   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.311632   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:26.311639   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:26.311684   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:26.345719   73900 cri.go:89] found id: ""
	I0930 21:09:26.345746   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.345754   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:26.345760   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:26.345816   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:26.383513   73900 cri.go:89] found id: ""
	I0930 21:09:26.383562   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.383572   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:26.383578   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:26.383637   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:26.418533   73900 cri.go:89] found id: ""
	I0930 21:09:26.418565   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.418574   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:26.418584   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:26.418594   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:26.456635   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:26.456660   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:26.507639   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:26.507686   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:26.521069   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:26.521095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:26.594745   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:26.594768   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:26.594781   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:24.569626   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:26.570133   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:29.069071   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:25.968851   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:27.974091   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:26.808336   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:29.308328   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:29.180142   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:29.194730   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:29.194785   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:29.234054   73900 cri.go:89] found id: ""
	I0930 21:09:29.234094   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.234103   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:29.234109   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:29.234156   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:29.280869   73900 cri.go:89] found id: ""
	I0930 21:09:29.280896   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.280907   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:29.280914   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:29.280988   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:29.348376   73900 cri.go:89] found id: ""
	I0930 21:09:29.348406   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.348417   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:29.348424   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:29.348491   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:29.404218   73900 cri.go:89] found id: ""
	I0930 21:09:29.404251   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.404261   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:29.404268   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:29.404344   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:29.449029   73900 cri.go:89] found id: ""
	I0930 21:09:29.449053   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.449061   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:29.449066   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:29.449127   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:29.484917   73900 cri.go:89] found id: ""
	I0930 21:09:29.484939   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.484948   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:29.484954   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:29.485002   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:29.517150   73900 cri.go:89] found id: ""
	I0930 21:09:29.517177   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.517185   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:29.517191   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:29.517259   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:29.550410   73900 cri.go:89] found id: ""
	I0930 21:09:29.550443   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.550452   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:29.550461   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:29.550472   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:29.601757   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:29.601803   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:29.616266   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:29.616299   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:29.686206   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:29.686228   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:29.686240   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:29.761765   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:29.761810   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:32.299199   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:32.315047   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:32.315125   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:32.349784   73900 cri.go:89] found id: ""
	I0930 21:09:32.349810   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.349819   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:32.349824   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:32.349871   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:32.385887   73900 cri.go:89] found id: ""
	I0930 21:09:32.385916   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.385927   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:32.385935   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:32.385994   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:32.421746   73900 cri.go:89] found id: ""
	I0930 21:09:32.421776   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.421789   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:32.421796   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:32.421856   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:32.459361   73900 cri.go:89] found id: ""
	I0930 21:09:32.459391   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.459404   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:32.459411   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:32.459470   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:32.495919   73900 cri.go:89] found id: ""
	I0930 21:09:32.495947   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.495960   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:32.495966   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:32.496025   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:32.533626   73900 cri.go:89] found id: ""
	I0930 21:09:32.533652   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.533663   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:32.533670   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:32.533729   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:32.567577   73900 cri.go:89] found id: ""
	I0930 21:09:32.567610   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.567623   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:32.567630   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:32.567687   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:32.604949   73900 cri.go:89] found id: ""
	I0930 21:09:32.604981   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.604991   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:32.605001   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:32.605014   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:32.656781   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:32.656822   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:32.670116   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:32.670144   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:32.736712   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:32.736736   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:32.736751   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:31.070228   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:33.569488   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:30.469162   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:32.469874   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:34.967596   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:31.807682   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:33.807723   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:32.813502   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:32.813556   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:35.354372   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:35.369226   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:35.369303   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:35.408374   73900 cri.go:89] found id: ""
	I0930 21:09:35.408402   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.408414   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:35.408421   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:35.408481   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:35.442390   73900 cri.go:89] found id: ""
	I0930 21:09:35.442432   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.442440   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:35.442445   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:35.442524   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:35.479624   73900 cri.go:89] found id: ""
	I0930 21:09:35.479651   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.479659   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:35.479664   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:35.479711   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:35.518580   73900 cri.go:89] found id: ""
	I0930 21:09:35.518609   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.518617   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:35.518623   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:35.518675   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:35.553547   73900 cri.go:89] found id: ""
	I0930 21:09:35.553582   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.553590   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:35.553604   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:35.553669   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:35.596444   73900 cri.go:89] found id: ""
	I0930 21:09:35.596476   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.596487   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:35.596495   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:35.596583   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:35.634232   73900 cri.go:89] found id: ""
	I0930 21:09:35.634259   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.634268   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:35.634274   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:35.634322   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:35.669637   73900 cri.go:89] found id: ""
	I0930 21:09:35.669672   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.669683   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:35.669694   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:35.669706   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:35.719433   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:35.719469   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:35.733383   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:35.733415   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:35.811860   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:35.811887   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:35.811913   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:35.896206   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:35.896272   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:35.569694   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:37.570548   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:36.968789   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:38.968959   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:35.814006   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:38.306676   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:38.435999   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:38.450091   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:38.450152   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:38.489127   73900 cri.go:89] found id: ""
	I0930 21:09:38.489153   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.489161   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:38.489166   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:38.489221   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:38.520760   73900 cri.go:89] found id: ""
	I0930 21:09:38.520783   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.520792   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:38.520798   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:38.520847   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:38.556279   73900 cri.go:89] found id: ""
	I0930 21:09:38.556306   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.556315   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:38.556319   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:38.556379   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:38.590804   73900 cri.go:89] found id: ""
	I0930 21:09:38.590827   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.590834   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:38.590840   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:38.590906   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:38.624765   73900 cri.go:89] found id: ""
	I0930 21:09:38.624792   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.624800   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:38.624805   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:38.624857   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:38.660587   73900 cri.go:89] found id: ""
	I0930 21:09:38.660614   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.660625   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:38.660635   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:38.660702   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:38.693314   73900 cri.go:89] found id: ""
	I0930 21:09:38.693352   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.693362   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:38.693371   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:38.693441   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:38.729163   73900 cri.go:89] found id: ""
	I0930 21:09:38.729197   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.729212   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:38.729223   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:38.729235   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:38.780787   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:38.780828   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:38.794983   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:38.795009   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:38.861886   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:38.861911   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:38.861926   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:38.936958   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:38.936994   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:41.479891   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:41.493041   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:41.493106   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:41.528855   73900 cri.go:89] found id: ""
	I0930 21:09:41.528889   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.528900   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:41.528906   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:41.528967   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:41.565193   73900 cri.go:89] found id: ""
	I0930 21:09:41.565216   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.565224   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:41.565230   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:41.565289   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:41.599503   73900 cri.go:89] found id: ""
	I0930 21:09:41.599538   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.599547   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:41.599553   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:41.599611   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:41.636623   73900 cri.go:89] found id: ""
	I0930 21:09:41.636651   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.636663   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:41.636671   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:41.636728   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:41.671727   73900 cri.go:89] found id: ""
	I0930 21:09:41.671753   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.671760   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:41.671765   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:41.671819   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:41.705499   73900 cri.go:89] found id: ""
	I0930 21:09:41.705533   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.705543   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:41.705549   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:41.705602   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:41.738262   73900 cri.go:89] found id: ""
	I0930 21:09:41.738285   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.738292   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:41.738297   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:41.738351   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:41.774232   73900 cri.go:89] found id: ""
	I0930 21:09:41.774261   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.774269   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:41.774277   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:41.774288   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:41.826060   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:41.826093   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:41.839308   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:41.839335   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:41.908599   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:41.908626   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:41.908640   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:41.986337   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:41.986375   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:40.069900   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:42.070035   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:41.469908   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:43.968111   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:40.307200   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:42.308356   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:44.807663   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:44.527015   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:44.539973   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:44.540036   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:44.575985   73900 cri.go:89] found id: ""
	I0930 21:09:44.576012   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.576021   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:44.576027   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:44.576076   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:44.612693   73900 cri.go:89] found id: ""
	I0930 21:09:44.612724   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.612736   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:44.612743   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:44.612809   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:44.646515   73900 cri.go:89] found id: ""
	I0930 21:09:44.646544   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.646555   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:44.646562   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:44.646623   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:44.679980   73900 cri.go:89] found id: ""
	I0930 21:09:44.680011   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.680022   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:44.680030   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:44.680089   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:44.714078   73900 cri.go:89] found id: ""
	I0930 21:09:44.714117   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.714128   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:44.714135   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:44.714193   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:44.748491   73900 cri.go:89] found id: ""
	I0930 21:09:44.748521   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.748531   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:44.748539   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:44.748618   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:44.780902   73900 cri.go:89] found id: ""
	I0930 21:09:44.780936   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.780947   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:44.780955   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:44.781013   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:44.817944   73900 cri.go:89] found id: ""
	I0930 21:09:44.817999   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.818011   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:44.818022   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:44.818038   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:44.873896   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:44.873926   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:44.887829   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:44.887858   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:44.957562   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:44.957584   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:44.957598   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:45.037892   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:45.037934   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:47.583013   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:47.595799   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:47.595870   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:47.630348   73900 cri.go:89] found id: ""
	I0930 21:09:47.630377   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.630385   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:47.630391   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:47.630444   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:47.663416   73900 cri.go:89] found id: ""
	I0930 21:09:47.663440   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.663448   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:47.663454   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:47.663500   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:47.700145   73900 cri.go:89] found id: ""
	I0930 21:09:47.700174   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.700184   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:47.700192   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:47.700253   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:47.732539   73900 cri.go:89] found id: ""
	I0930 21:09:47.732567   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.732577   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:47.732583   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:47.732637   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:44.569951   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:46.570501   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:48.574018   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:45.971063   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:48.468661   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:47.307709   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:49.806843   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:47.764470   73900 cri.go:89] found id: ""
	I0930 21:09:47.764493   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.764501   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:47.764507   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:47.764553   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:47.802365   73900 cri.go:89] found id: ""
	I0930 21:09:47.802393   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.802403   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:47.802411   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:47.802468   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:47.836504   73900 cri.go:89] found id: ""
	I0930 21:09:47.836531   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.836542   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:47.836549   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:47.836611   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:47.870315   73900 cri.go:89] found id: ""
	I0930 21:09:47.870338   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.870351   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:47.870359   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:47.870370   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:47.919974   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:47.920011   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:47.934157   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:47.934190   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:48.003046   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:48.003072   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:48.003085   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:48.084947   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:48.084985   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:50.624791   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:50.638118   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:50.638196   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:50.672448   73900 cri.go:89] found id: ""
	I0930 21:09:50.672479   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.672488   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:50.672503   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:50.672557   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:50.706057   73900 cri.go:89] found id: ""
	I0930 21:09:50.706080   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.706088   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:50.706093   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:50.706142   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:50.738101   73900 cri.go:89] found id: ""
	I0930 21:09:50.738126   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.738134   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:50.738140   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:50.738207   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:50.772483   73900 cri.go:89] found id: ""
	I0930 21:09:50.772508   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.772516   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:50.772522   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:50.772581   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:50.805169   73900 cri.go:89] found id: ""
	I0930 21:09:50.805200   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.805211   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:50.805220   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:50.805276   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:50.842144   73900 cri.go:89] found id: ""
	I0930 21:09:50.842168   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.842176   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:50.842182   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:50.842236   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:50.875512   73900 cri.go:89] found id: ""
	I0930 21:09:50.875563   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.875575   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:50.875582   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:50.875643   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:50.909549   73900 cri.go:89] found id: ""
	I0930 21:09:50.909580   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.909591   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:50.909599   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:50.909610   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:50.962064   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:50.962098   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:50.976979   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:50.977012   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:51.053784   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:51.053815   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:51.053833   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:51.130939   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:51.130975   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:51.069919   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:53.568708   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:50.468737   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:52.968935   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:52.306733   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:54.306875   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:53.667675   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:53.680381   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:53.680449   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:53.712759   73900 cri.go:89] found id: ""
	I0930 21:09:53.712791   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.712800   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:53.712807   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:53.712871   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:53.748958   73900 cri.go:89] found id: ""
	I0930 21:09:53.748990   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.749002   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:53.749009   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:53.749078   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:53.783243   73900 cri.go:89] found id: ""
	I0930 21:09:53.783272   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.783282   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:53.783289   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:53.783382   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:53.823848   73900 cri.go:89] found id: ""
	I0930 21:09:53.823875   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.823883   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:53.823890   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:53.823941   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:53.865607   73900 cri.go:89] found id: ""
	I0930 21:09:53.865635   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.865643   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:53.865648   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:53.865693   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:53.900888   73900 cri.go:89] found id: ""
	I0930 21:09:53.900912   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.900920   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:53.900926   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:53.900985   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:53.933688   73900 cri.go:89] found id: ""
	I0930 21:09:53.933717   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.933728   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:53.933736   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:53.933798   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:53.968702   73900 cri.go:89] found id: ""
	I0930 21:09:53.968731   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.968740   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:53.968749   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:53.968760   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:54.021588   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:54.021626   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:54.036681   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:54.036719   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:54.112189   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:54.112209   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:54.112223   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:54.185028   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:54.185085   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:56.725146   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:56.739358   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:56.739421   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:56.779278   73900 cri.go:89] found id: ""
	I0930 21:09:56.779313   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.779322   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:56.779329   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:56.779377   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:56.815972   73900 cri.go:89] found id: ""
	I0930 21:09:56.816000   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.816011   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:56.816018   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:56.816084   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:56.849425   73900 cri.go:89] found id: ""
	I0930 21:09:56.849458   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.849471   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:56.849478   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:56.849542   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:56.885483   73900 cri.go:89] found id: ""
	I0930 21:09:56.885510   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.885520   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:56.885527   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:56.885586   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:56.917832   73900 cri.go:89] found id: ""
	I0930 21:09:56.917862   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.917872   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:56.917879   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:56.917932   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:56.951613   73900 cri.go:89] found id: ""
	I0930 21:09:56.951643   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.951654   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:56.951664   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:56.951726   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:56.987577   73900 cri.go:89] found id: ""
	I0930 21:09:56.987608   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.987620   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:56.987628   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:56.987691   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:57.024871   73900 cri.go:89] found id: ""
	I0930 21:09:57.024903   73900 logs.go:276] 0 containers: []
	W0930 21:09:57.024912   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:57.024920   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:57.024935   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:57.038279   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:57.038309   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:57.111955   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:57.111985   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:57.111998   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:57.193719   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:57.193755   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:57.230058   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:57.230085   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:55.568928   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:58.069462   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:55.467583   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:57.968380   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:59.969131   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:56.807753   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:58.808055   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:59.780762   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:59.794210   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:59.794277   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:59.828258   73900 cri.go:89] found id: ""
	I0930 21:09:59.828287   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.828298   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:59.828306   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:59.828369   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:59.868295   73900 cri.go:89] found id: ""
	I0930 21:09:59.868331   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.868353   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:59.868363   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:59.868437   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:59.900298   73900 cri.go:89] found id: ""
	I0930 21:09:59.900326   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.900337   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:59.900343   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:59.900403   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:59.934081   73900 cri.go:89] found id: ""
	I0930 21:09:59.934108   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.934120   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:59.934127   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:59.934183   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:59.970564   73900 cri.go:89] found id: ""
	I0930 21:09:59.970592   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.970600   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:59.970605   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:59.970652   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:00.006215   73900 cri.go:89] found id: ""
	I0930 21:10:00.006249   73900 logs.go:276] 0 containers: []
	W0930 21:10:00.006259   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:00.006270   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:00.006348   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:00.040106   73900 cri.go:89] found id: ""
	I0930 21:10:00.040135   73900 logs.go:276] 0 containers: []
	W0930 21:10:00.040144   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:00.040150   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:00.040202   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:00.079310   73900 cri.go:89] found id: ""
	I0930 21:10:00.079345   73900 logs.go:276] 0 containers: []
	W0930 21:10:00.079354   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:00.079365   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:00.079378   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:00.161243   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:00.161284   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:00.198911   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:00.198941   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:00.247697   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:00.247735   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:00.260905   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:00.260933   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:00.332502   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:00.569218   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:02.569371   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:02.468439   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:04.968585   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:00.808753   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:03.306574   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:02.833204   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:02.846807   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:02.846893   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:02.882386   73900 cri.go:89] found id: ""
	I0930 21:10:02.882420   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.882431   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:02.882439   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:02.882504   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:02.918589   73900 cri.go:89] found id: ""
	I0930 21:10:02.918617   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.918633   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:02.918642   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:02.918722   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:02.952758   73900 cri.go:89] found id: ""
	I0930 21:10:02.952789   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.952799   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:02.952806   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:02.952871   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:02.991406   73900 cri.go:89] found id: ""
	I0930 21:10:02.991439   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.991448   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:02.991454   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:02.991511   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:03.030075   73900 cri.go:89] found id: ""
	I0930 21:10:03.030104   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.030112   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:03.030121   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:03.030172   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:03.063630   73900 cri.go:89] found id: ""
	I0930 21:10:03.063654   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.063662   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:03.063668   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:03.063718   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:03.098607   73900 cri.go:89] found id: ""
	I0930 21:10:03.098636   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.098644   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:03.098649   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:03.098702   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:03.133161   73900 cri.go:89] found id: ""
	I0930 21:10:03.133189   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.133198   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:03.133206   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:03.133217   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:03.211046   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:03.211083   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:03.252585   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:03.252615   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:03.307019   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:03.307049   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:03.320781   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:03.320811   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:03.408645   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:05.909638   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:05.922674   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:05.922744   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:05.955264   73900 cri.go:89] found id: ""
	I0930 21:10:05.955305   73900 logs.go:276] 0 containers: []
	W0930 21:10:05.955318   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:05.955326   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:05.955378   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:05.991055   73900 cri.go:89] found id: ""
	I0930 21:10:05.991100   73900 logs.go:276] 0 containers: []
	W0930 21:10:05.991122   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:05.991130   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:05.991194   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:06.025725   73900 cri.go:89] found id: ""
	I0930 21:10:06.025755   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.025766   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:06.025773   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:06.025832   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:06.067700   73900 cri.go:89] found id: ""
	I0930 21:10:06.067726   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.067736   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:06.067743   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:06.067801   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:06.102729   73900 cri.go:89] found id: ""
	I0930 21:10:06.102760   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.102771   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:06.102784   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:06.102845   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:06.137120   73900 cri.go:89] found id: ""
	I0930 21:10:06.137148   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.137159   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:06.137164   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:06.137215   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:06.169985   73900 cri.go:89] found id: ""
	I0930 21:10:06.170014   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.170023   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:06.170029   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:06.170082   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:06.206928   73900 cri.go:89] found id: ""
	I0930 21:10:06.206951   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.206959   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:06.206967   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:06.206977   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:06.258835   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:06.258870   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:06.273527   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:06.273556   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:06.351335   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:06.351359   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:06.351373   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:06.423412   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:06.423450   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:04.569756   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:07.069437   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:09.074024   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:06.969500   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:09.471298   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:05.807932   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:08.306749   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:08.968986   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:08.984075   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:08.984139   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:09.016815   73900 cri.go:89] found id: ""
	I0930 21:10:09.016847   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.016858   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:09.016864   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:09.016928   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:09.051603   73900 cri.go:89] found id: ""
	I0930 21:10:09.051626   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.051633   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:09.051639   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:09.051693   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:09.088820   73900 cri.go:89] found id: ""
	I0930 21:10:09.088856   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.088870   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:09.088884   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:09.088949   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:09.124032   73900 cri.go:89] found id: ""
	I0930 21:10:09.124064   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.124076   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:09.124083   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:09.124140   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:09.177129   73900 cri.go:89] found id: ""
	I0930 21:10:09.177161   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.177172   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:09.177178   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:09.177228   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:09.211490   73900 cri.go:89] found id: ""
	I0930 21:10:09.211513   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.211521   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:09.211540   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:09.211605   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:09.252187   73900 cri.go:89] found id: ""
	I0930 21:10:09.252211   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.252221   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:09.252229   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:09.252289   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:09.286970   73900 cri.go:89] found id: ""
	I0930 21:10:09.287004   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.287012   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:09.287020   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:09.287031   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:09.369387   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:09.369410   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:09.369422   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:09.450685   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:09.450733   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:09.491302   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:09.491331   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:09.540183   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:09.540219   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:12.054793   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:12.068635   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:12.068717   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:12.103118   73900 cri.go:89] found id: ""
	I0930 21:10:12.103140   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.103149   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:12.103154   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:12.103219   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:12.137992   73900 cri.go:89] found id: ""
	I0930 21:10:12.138020   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.138031   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:12.138040   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:12.138103   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:12.175559   73900 cri.go:89] found id: ""
	I0930 21:10:12.175591   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.175609   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:12.175616   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:12.175678   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:12.209630   73900 cri.go:89] found id: ""
	I0930 21:10:12.209655   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.209666   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:12.209672   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:12.209735   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:12.245844   73900 cri.go:89] found id: ""
	I0930 21:10:12.245879   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.245891   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:12.245901   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:12.245961   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:12.280385   73900 cri.go:89] found id: ""
	I0930 21:10:12.280412   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.280420   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:12.280426   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:12.280484   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:12.315424   73900 cri.go:89] found id: ""
	I0930 21:10:12.315453   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.315463   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:12.315473   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:12.315566   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:12.349223   73900 cri.go:89] found id: ""
	I0930 21:10:12.349251   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.349270   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:12.349279   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:12.349291   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:12.362360   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:12.362397   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:12.432060   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:12.432084   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:12.432101   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:12.506059   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:12.506096   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:12.541319   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:12.541348   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:11.568740   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:13.569690   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:11.968234   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:13.968634   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:10.306903   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:12.307072   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:14.807562   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:15.098852   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:15.111919   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:15.112001   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:15.149174   73900 cri.go:89] found id: ""
	I0930 21:10:15.149206   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.149216   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:15.149223   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:15.149286   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:15.187283   73900 cri.go:89] found id: ""
	I0930 21:10:15.187316   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.187326   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:15.187333   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:15.187392   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:15.223896   73900 cri.go:89] found id: ""
	I0930 21:10:15.223922   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.223933   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:15.223940   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:15.224000   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:15.260530   73900 cri.go:89] found id: ""
	I0930 21:10:15.260559   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.260567   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:15.260573   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:15.260634   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:15.296319   73900 cri.go:89] found id: ""
	I0930 21:10:15.296346   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.296357   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:15.296363   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:15.296425   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:15.333785   73900 cri.go:89] found id: ""
	I0930 21:10:15.333830   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.333843   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:15.333856   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:15.333932   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:15.368235   73900 cri.go:89] found id: ""
	I0930 21:10:15.368268   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.368280   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:15.368288   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:15.368354   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:15.408155   73900 cri.go:89] found id: ""
	I0930 21:10:15.408184   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.408192   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:15.408200   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:15.408210   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:15.462018   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:15.462058   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:15.477345   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:15.477376   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:15.558398   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:15.558423   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:15.558442   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:15.662269   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:15.662311   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:15.569988   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:18.069056   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:16.467859   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:18.468764   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:17.307469   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:19.809316   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:18.199477   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:18.213235   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:18.213320   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:18.250379   73900 cri.go:89] found id: ""
	I0930 21:10:18.250409   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.250418   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:18.250424   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:18.250515   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:18.283381   73900 cri.go:89] found id: ""
	I0930 21:10:18.283407   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.283416   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:18.283422   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:18.283482   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:18.321601   73900 cri.go:89] found id: ""
	I0930 21:10:18.321635   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.321646   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:18.321659   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:18.321720   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:18.354210   73900 cri.go:89] found id: ""
	I0930 21:10:18.354242   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.354254   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:18.354262   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:18.354330   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:18.391982   73900 cri.go:89] found id: ""
	I0930 21:10:18.392019   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.392029   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:18.392035   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:18.392150   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:18.428826   73900 cri.go:89] found id: ""
	I0930 21:10:18.428851   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.428862   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:18.428870   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:18.428927   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:18.465841   73900 cri.go:89] found id: ""
	I0930 21:10:18.465868   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.465878   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:18.465887   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:18.465934   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:18.502747   73900 cri.go:89] found id: ""
	I0930 21:10:18.502775   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.502783   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:18.502793   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:18.502807   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:18.558025   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:18.558064   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:18.572356   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:18.572383   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:18.642994   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:18.643020   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:18.643033   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:18.722804   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:18.722845   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:21.262790   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:21.276427   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:21.276510   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:21.323245   73900 cri.go:89] found id: ""
	I0930 21:10:21.323274   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.323284   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:21.323291   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:21.323377   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:21.381684   73900 cri.go:89] found id: ""
	I0930 21:10:21.381725   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.381736   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:21.381744   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:21.381813   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:21.428818   73900 cri.go:89] found id: ""
	I0930 21:10:21.428841   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.428849   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:21.428854   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:21.428901   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:21.462906   73900 cri.go:89] found id: ""
	I0930 21:10:21.462935   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.462944   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:21.462949   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:21.462995   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:21.502417   73900 cri.go:89] found id: ""
	I0930 21:10:21.502452   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.502464   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:21.502471   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:21.502535   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:21.540004   73900 cri.go:89] found id: ""
	I0930 21:10:21.540037   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.540048   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:21.540056   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:21.540105   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:21.574898   73900 cri.go:89] found id: ""
	I0930 21:10:21.574929   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.574937   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:21.574942   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:21.574999   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:21.609438   73900 cri.go:89] found id: ""
	I0930 21:10:21.609465   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.609473   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:21.609496   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:21.609524   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:21.646651   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:21.646679   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:21.702406   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:21.702451   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:21.716226   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:21.716260   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:21.790089   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:21.790115   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:21.790128   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:20.070823   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:22.568856   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:20.968069   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:22.968208   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:22.307376   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:24.808780   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:24.368291   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:24.381517   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:24.381588   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:24.416535   73900 cri.go:89] found id: ""
	I0930 21:10:24.416559   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.416570   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:24.416577   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:24.416635   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:24.454444   73900 cri.go:89] found id: ""
	I0930 21:10:24.454472   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.454480   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:24.454485   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:24.454537   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:24.492334   73900 cri.go:89] found id: ""
	I0930 21:10:24.492359   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.492367   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:24.492373   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:24.492419   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:24.527590   73900 cri.go:89] found id: ""
	I0930 21:10:24.527622   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.527633   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:24.527642   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:24.527708   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:24.564819   73900 cri.go:89] found id: ""
	I0930 21:10:24.564844   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.564853   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:24.564858   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:24.564915   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:24.599367   73900 cri.go:89] found id: ""
	I0930 21:10:24.599390   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.599398   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:24.599403   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:24.599450   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:24.636738   73900 cri.go:89] found id: ""
	I0930 21:10:24.636767   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.636778   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:24.636785   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:24.636845   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:24.669607   73900 cri.go:89] found id: ""
	I0930 21:10:24.669640   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.669651   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:24.669663   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:24.669680   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:24.722662   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:24.722696   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:24.736150   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:24.736179   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:24.812022   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:24.812053   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:24.812069   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:24.891291   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:24.891330   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:27.430595   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:27.443990   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:27.444054   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:27.480204   73900 cri.go:89] found id: ""
	I0930 21:10:27.480230   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.480237   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:27.480243   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:27.480297   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:27.516959   73900 cri.go:89] found id: ""
	I0930 21:10:27.516982   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.516989   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:27.516995   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:27.517041   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:27.549717   73900 cri.go:89] found id: ""
	I0930 21:10:27.549745   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.549758   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:27.549769   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:27.549821   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:27.584512   73900 cri.go:89] found id: ""
	I0930 21:10:27.584539   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.584549   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:27.584560   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:27.584619   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:27.623551   73900 cri.go:89] found id: ""
	I0930 21:10:27.623586   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.623603   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:27.623612   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:27.623679   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:27.662453   73900 cri.go:89] found id: ""
	I0930 21:10:27.662478   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.662486   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:27.662493   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:27.662554   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:27.695665   73900 cri.go:89] found id: ""
	I0930 21:10:27.695693   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.695701   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:27.695707   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:27.695765   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:27.729090   73900 cri.go:89] found id: ""
	I0930 21:10:27.729129   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.729137   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:27.729146   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:27.729155   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:24.570129   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:26.572751   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:29.069340   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:25.468598   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:27.469443   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:29.970417   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:27.307766   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:29.806538   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:27.816186   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:27.816230   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:27.854451   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:27.854485   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:27.905674   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:27.905709   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:27.918889   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:27.918917   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:27.989739   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:30.490514   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:30.502735   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:30.502810   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:30.535874   73900 cri.go:89] found id: ""
	I0930 21:10:30.535902   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.535914   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:30.535922   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:30.535989   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:30.570603   73900 cri.go:89] found id: ""
	I0930 21:10:30.570627   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.570634   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:30.570643   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:30.570689   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:30.605225   73900 cri.go:89] found id: ""
	I0930 21:10:30.605255   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.605266   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:30.605273   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:30.605333   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:30.640810   73900 cri.go:89] found id: ""
	I0930 21:10:30.640839   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.640849   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:30.640857   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:30.640914   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:30.673101   73900 cri.go:89] found id: ""
	I0930 21:10:30.673129   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.673137   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:30.673142   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:30.673189   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:30.704332   73900 cri.go:89] found id: ""
	I0930 21:10:30.704356   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.704366   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:30.704373   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:30.704440   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:30.738463   73900 cri.go:89] found id: ""
	I0930 21:10:30.738494   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.738506   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:30.738516   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:30.738579   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:30.772115   73900 cri.go:89] found id: ""
	I0930 21:10:30.772153   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.772164   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:30.772175   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:30.772193   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:30.850683   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:30.850707   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:30.850720   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:30.930674   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:30.930718   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:30.975781   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:30.975819   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:31.030566   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:31.030613   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:31.070216   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:33.568935   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:32.468224   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:34.968557   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:31.807408   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:33.807669   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:33.544354   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:33.557613   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:33.557692   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:33.594372   73900 cri.go:89] found id: ""
	I0930 21:10:33.594394   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.594401   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:33.594406   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:33.594455   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:33.632026   73900 cri.go:89] found id: ""
	I0930 21:10:33.632048   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.632056   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:33.632061   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:33.632113   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:33.666168   73900 cri.go:89] found id: ""
	I0930 21:10:33.666201   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.666213   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:33.666219   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:33.666269   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:33.697772   73900 cri.go:89] found id: ""
	I0930 21:10:33.697801   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.697810   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:33.697816   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:33.697864   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:33.732821   73900 cri.go:89] found id: ""
	I0930 21:10:33.732851   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.732862   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:33.732869   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:33.732952   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:33.770646   73900 cri.go:89] found id: ""
	I0930 21:10:33.770682   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.770693   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:33.770701   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:33.770756   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:33.804803   73900 cri.go:89] found id: ""
	I0930 21:10:33.804831   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.804842   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:33.804848   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:33.804921   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:33.838455   73900 cri.go:89] found id: ""
	I0930 21:10:33.838484   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.838495   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:33.838505   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:33.838523   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:33.879785   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:33.879812   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:33.934586   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:33.934623   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:33.948250   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:33.948293   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:34.023021   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:34.023054   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:34.023069   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:36.604173   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:36.616668   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:36.616735   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:36.650716   73900 cri.go:89] found id: ""
	I0930 21:10:36.650748   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.650757   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:36.650767   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:36.650833   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:36.685705   73900 cri.go:89] found id: ""
	I0930 21:10:36.685739   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.685751   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:36.685758   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:36.685819   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:36.719895   73900 cri.go:89] found id: ""
	I0930 21:10:36.719922   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.719932   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:36.719939   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:36.720006   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:36.753123   73900 cri.go:89] found id: ""
	I0930 21:10:36.753148   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.753159   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:36.753166   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:36.753231   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:36.790023   73900 cri.go:89] found id: ""
	I0930 21:10:36.790054   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.790066   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:36.790073   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:36.790135   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:36.825280   73900 cri.go:89] found id: ""
	I0930 21:10:36.825314   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.825324   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:36.825343   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:36.825411   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:36.859028   73900 cri.go:89] found id: ""
	I0930 21:10:36.859053   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.859060   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:36.859066   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:36.859125   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:36.894952   73900 cri.go:89] found id: ""
	I0930 21:10:36.894980   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.894988   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:36.894996   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:36.895010   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:36.968214   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:36.968241   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:36.968256   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:37.047866   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:37.047903   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:37.088671   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:37.088705   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:37.144014   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:37.144058   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:36.068920   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:38.069544   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:36.969475   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:39.469207   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:35.808654   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:38.306701   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:39.657874   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:39.671042   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:39.671100   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:39.706210   73900 cri.go:89] found id: ""
	I0930 21:10:39.706235   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.706243   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:39.706248   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:39.706295   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:39.743194   73900 cri.go:89] found id: ""
	I0930 21:10:39.743218   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.743226   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:39.743232   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:39.743280   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:39.780681   73900 cri.go:89] found id: ""
	I0930 21:10:39.780707   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.780715   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:39.780720   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:39.780774   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:39.815841   73900 cri.go:89] found id: ""
	I0930 21:10:39.815865   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.815874   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:39.815879   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:39.815933   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:39.849497   73900 cri.go:89] found id: ""
	I0930 21:10:39.849523   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.849534   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:39.849541   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:39.849603   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:39.883476   73900 cri.go:89] found id: ""
	I0930 21:10:39.883507   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.883519   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:39.883562   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:39.883633   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:39.918300   73900 cri.go:89] found id: ""
	I0930 21:10:39.918329   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.918338   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:39.918343   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:39.918392   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:39.955751   73900 cri.go:89] found id: ""
	I0930 21:10:39.955780   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.955788   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:39.955795   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:39.955807   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:40.010994   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:40.011035   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:40.025992   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:40.026022   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:40.097709   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:40.097731   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:40.097748   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:40.176790   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:40.176824   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
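	(Editor's note: each retry cycle in this log runs "sudo crictl ps -a --quiet --name=<component>" and treats empty output as "no container found". A rough, hypothetical Go sketch of that probe loop follows, using only the crictl flags already shown above; the helper name is illustrative and not minikube's actual code.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs runs the same crictl query seen in the log and returns the
	// container IDs it prints, one per line; an empty result means the component
	// (e.g. kube-apiserver, etcd) has no container on the node.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(strings.TrimSpace(string(out))), nil
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainerIDs(component)
			if err != nil {
				fmt.Printf("crictl query for %q failed: %v\n", component, err)
				continue
			}
			if len(ids) == 0 {
				// Mirrors the log's `No container was found matching "..."` warnings.
				fmt.Printf("no container found matching %q\n", component)
			}
		}
	}

	(End of editor's note; verbatim log continues below.)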
	I0930 21:10:42.713838   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:42.729806   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:42.729885   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:40.070503   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:42.568444   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:41.968357   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:44.469223   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:40.308072   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:42.807489   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:42.765449   73900 cri.go:89] found id: ""
	I0930 21:10:42.765483   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.765491   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:42.765498   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:42.765555   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:42.802556   73900 cri.go:89] found id: ""
	I0930 21:10:42.802584   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.802604   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:42.802612   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:42.802693   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:42.836537   73900 cri.go:89] found id: ""
	I0930 21:10:42.836568   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.836585   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:42.836598   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:42.836662   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:42.870475   73900 cri.go:89] found id: ""
	I0930 21:10:42.870503   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.870511   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:42.870526   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:42.870589   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:42.907061   73900 cri.go:89] found id: ""
	I0930 21:10:42.907090   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.907098   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:42.907103   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:42.907153   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:42.941607   73900 cri.go:89] found id: ""
	I0930 21:10:42.941632   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.941640   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:42.941646   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:42.941701   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:42.977073   73900 cri.go:89] found id: ""
	I0930 21:10:42.977097   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.977105   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:42.977111   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:42.977159   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:43.010838   73900 cri.go:89] found id: ""
	I0930 21:10:43.010859   73900 logs.go:276] 0 containers: []
	W0930 21:10:43.010867   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:43.010875   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:43.010886   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:43.061264   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:43.061299   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:43.075917   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:43.075950   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:43.137088   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:43.137111   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:43.137126   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:43.219393   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:43.219440   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:45.761752   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:45.775864   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:45.775942   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:45.810693   73900 cri.go:89] found id: ""
	I0930 21:10:45.810724   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.810734   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:45.810740   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:45.810797   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:45.848360   73900 cri.go:89] found id: ""
	I0930 21:10:45.848399   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.848410   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:45.848418   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:45.848475   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:45.885504   73900 cri.go:89] found id: ""
	I0930 21:10:45.885550   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.885560   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:45.885565   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:45.885616   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:45.919747   73900 cri.go:89] found id: ""
	I0930 21:10:45.919776   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.919784   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:45.919789   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:45.919843   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:45.953787   73900 cri.go:89] found id: ""
	I0930 21:10:45.953820   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.953831   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:45.953839   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:45.953893   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:45.990145   73900 cri.go:89] found id: ""
	I0930 21:10:45.990174   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.990184   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:45.990192   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:45.990253   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:46.023359   73900 cri.go:89] found id: ""
	I0930 21:10:46.023383   73900 logs.go:276] 0 containers: []
	W0930 21:10:46.023391   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:46.023396   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:46.023447   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:46.057460   73900 cri.go:89] found id: ""
	I0930 21:10:46.057493   73900 logs.go:276] 0 containers: []
	W0930 21:10:46.057504   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:46.057514   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:46.057533   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:46.097082   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:46.097109   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:46.147921   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:46.147960   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:46.161204   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:46.161232   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:46.224308   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:46.224336   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:46.224351   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:44.568918   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:46.569353   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:48.569656   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:46.967674   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:48.967998   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:45.306917   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:47.806333   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:49.807846   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:48.805668   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:48.818569   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:48.818663   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:48.856783   73900 cri.go:89] found id: ""
	I0930 21:10:48.856815   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.856827   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:48.856834   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:48.856896   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:48.889185   73900 cri.go:89] found id: ""
	I0930 21:10:48.889217   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.889229   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:48.889236   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:48.889306   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:48.922013   73900 cri.go:89] found id: ""
	I0930 21:10:48.922041   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.922050   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:48.922055   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:48.922107   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:48.956818   73900 cri.go:89] found id: ""
	I0930 21:10:48.956848   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.956858   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:48.956866   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:48.956929   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:48.994942   73900 cri.go:89] found id: ""
	I0930 21:10:48.994975   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.994985   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:48.994991   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:48.995052   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:49.031448   73900 cri.go:89] found id: ""
	I0930 21:10:49.031479   73900 logs.go:276] 0 containers: []
	W0930 21:10:49.031491   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:49.031500   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:49.031583   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:49.066570   73900 cri.go:89] found id: ""
	I0930 21:10:49.066600   73900 logs.go:276] 0 containers: []
	W0930 21:10:49.066608   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:49.066613   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:49.066658   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:49.100952   73900 cri.go:89] found id: ""
	I0930 21:10:49.100981   73900 logs.go:276] 0 containers: []
	W0930 21:10:49.100992   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:49.101000   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:49.101010   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:49.176423   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:49.176458   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:49.212358   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:49.212387   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:49.263177   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:49.263227   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:49.275940   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:49.275969   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:49.346915   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:51.847761   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:51.860571   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:51.860646   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:51.894863   73900 cri.go:89] found id: ""
	I0930 21:10:51.894896   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.894906   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:51.894914   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:51.894978   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:51.927977   73900 cri.go:89] found id: ""
	I0930 21:10:51.928007   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.928018   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:51.928025   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:51.928083   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:51.962894   73900 cri.go:89] found id: ""
	I0930 21:10:51.962924   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.962933   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:51.962940   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:51.962999   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:51.998453   73900 cri.go:89] found id: ""
	I0930 21:10:51.998482   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.998493   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:51.998500   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:51.998562   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:52.033039   73900 cri.go:89] found id: ""
	I0930 21:10:52.033066   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.033075   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:52.033080   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:52.033139   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:52.067222   73900 cri.go:89] found id: ""
	I0930 21:10:52.067254   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.067267   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:52.067274   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:52.067341   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:52.102414   73900 cri.go:89] found id: ""
	I0930 21:10:52.102439   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.102448   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:52.102453   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:52.102498   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:52.135175   73900 cri.go:89] found id: ""
	I0930 21:10:52.135204   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.135214   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:52.135225   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:52.135239   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:52.185736   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:52.185779   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:52.198756   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:52.198792   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:52.264816   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:52.264847   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:52.264859   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:52.347189   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:52.347229   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:50.569765   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:53.068745   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:50.968885   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:52.970855   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:52.307245   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:54.308516   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:54.887502   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:54.900067   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:54.900153   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:54.939214   73900 cri.go:89] found id: ""
	I0930 21:10:54.939241   73900 logs.go:276] 0 containers: []
	W0930 21:10:54.939249   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:54.939259   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:54.939313   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:54.973451   73900 cri.go:89] found id: ""
	I0930 21:10:54.973475   73900 logs.go:276] 0 containers: []
	W0930 21:10:54.973483   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:54.973488   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:54.973541   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:55.007815   73900 cri.go:89] found id: ""
	I0930 21:10:55.007841   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.007850   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:55.007855   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:55.007914   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:55.040861   73900 cri.go:89] found id: ""
	I0930 21:10:55.040891   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.040899   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:55.040905   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:55.040957   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:55.076053   73900 cri.go:89] found id: ""
	I0930 21:10:55.076086   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.076098   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:55.076111   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:55.076172   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:55.108768   73900 cri.go:89] found id: ""
	I0930 21:10:55.108797   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.108807   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:55.108814   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:55.108879   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:55.155283   73900 cri.go:89] found id: ""
	I0930 21:10:55.155316   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.155331   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:55.155338   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:55.155398   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:55.189370   73900 cri.go:89] found id: ""
	I0930 21:10:55.189399   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.189408   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:55.189416   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:55.189432   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:55.243067   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:55.243101   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:55.257021   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:55.257051   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:55.329381   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:55.329408   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:55.329423   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:55.405691   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:55.405762   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:55.069901   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:57.568914   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:55.468489   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:57.977733   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:56.806381   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:58.806880   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:57.957380   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:57.971160   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:57.971245   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:58.004401   73900 cri.go:89] found id: ""
	I0930 21:10:58.004446   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.004457   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:58.004465   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:58.004524   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:58.038954   73900 cri.go:89] found id: ""
	I0930 21:10:58.038978   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.038986   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:58.038991   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:58.039036   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:58.072801   73900 cri.go:89] found id: ""
	I0930 21:10:58.072830   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.072842   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:58.072849   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:58.072909   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:58.104908   73900 cri.go:89] found id: ""
	I0930 21:10:58.104936   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.104946   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:58.104953   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:58.105014   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:58.139693   73900 cri.go:89] found id: ""
	I0930 21:10:58.139725   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.139735   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:58.139741   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:58.139795   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:58.174149   73900 cri.go:89] found id: ""
	I0930 21:10:58.174180   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.174192   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:58.174199   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:58.174275   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:58.206067   73900 cri.go:89] found id: ""
	I0930 21:10:58.206094   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.206105   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:58.206112   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:58.206167   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:58.240613   73900 cri.go:89] found id: ""
	I0930 21:10:58.240645   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.240653   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:58.240661   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:58.240674   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:58.306061   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:58.306086   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:58.306100   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:58.386030   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:58.386073   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:58.425526   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:58.425562   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:58.483364   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:58.483409   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:00.998086   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:01.011934   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:01.012015   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:01.047923   73900 cri.go:89] found id: ""
	I0930 21:11:01.047951   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.047960   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:01.047966   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:01.048024   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:01.082126   73900 cri.go:89] found id: ""
	I0930 21:11:01.082159   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.082170   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:01.082176   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:01.082224   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:01.117746   73900 cri.go:89] found id: ""
	I0930 21:11:01.117775   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.117787   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:01.117794   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:01.117853   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:01.153034   73900 cri.go:89] found id: ""
	I0930 21:11:01.153059   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.153067   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:01.153072   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:01.153128   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:01.188102   73900 cri.go:89] found id: ""
	I0930 21:11:01.188125   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.188133   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:01.188139   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:01.188193   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:01.222120   73900 cri.go:89] found id: ""
	I0930 21:11:01.222147   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.222155   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:01.222161   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:01.222215   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:01.258899   73900 cri.go:89] found id: ""
	I0930 21:11:01.258929   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.258941   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:01.258949   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:01.259008   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:01.295473   73900 cri.go:89] found id: ""
	I0930 21:11:01.295504   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.295512   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:01.295521   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:01.295551   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:01.349134   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:01.349181   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:01.363113   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:01.363147   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:01.436589   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:01.436609   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:01.436622   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:01.516384   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:01.516420   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:00.069406   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:02.568203   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:00.468104   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:02.968911   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:00.807318   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:03.307184   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:04.075114   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:04.089300   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:04.089375   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:04.124385   73900 cri.go:89] found id: ""
	I0930 21:11:04.124411   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.124419   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:04.124425   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:04.124491   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:04.158326   73900 cri.go:89] found id: ""
	I0930 21:11:04.158359   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.158367   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:04.158372   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:04.158419   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:04.193477   73900 cri.go:89] found id: ""
	I0930 21:11:04.193507   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.193516   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:04.193521   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:04.193577   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:04.231697   73900 cri.go:89] found id: ""
	I0930 21:11:04.231723   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.231731   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:04.231737   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:04.231805   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:04.265879   73900 cri.go:89] found id: ""
	I0930 21:11:04.265903   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.265910   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:04.265915   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:04.265960   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:04.301382   73900 cri.go:89] found id: ""
	I0930 21:11:04.301421   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.301432   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:04.301440   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:04.301505   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:04.337496   73900 cri.go:89] found id: ""
	I0930 21:11:04.337521   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.337529   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:04.337534   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:04.337584   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:04.372631   73900 cri.go:89] found id: ""
	I0930 21:11:04.372665   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.372677   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:04.372700   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:04.372715   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:04.385279   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:04.385311   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:04.456700   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:04.456721   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:04.456732   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:04.537892   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:04.537933   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:04.574919   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:04.574947   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:07.128733   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:07.142625   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:07.142687   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:07.177450   73900 cri.go:89] found id: ""
	I0930 21:11:07.177475   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.177483   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:07.177488   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:07.177536   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:07.210158   73900 cri.go:89] found id: ""
	I0930 21:11:07.210184   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.210192   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:07.210197   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:07.210256   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:07.242623   73900 cri.go:89] found id: ""
	I0930 21:11:07.242648   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.242656   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:07.242661   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:07.242705   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:07.277779   73900 cri.go:89] found id: ""
	I0930 21:11:07.277810   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.277821   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:07.277827   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:07.277881   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:07.316232   73900 cri.go:89] found id: ""
	I0930 21:11:07.316257   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.316263   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:07.316269   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:07.316326   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:07.360277   73900 cri.go:89] found id: ""
	I0930 21:11:07.360311   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.360322   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:07.360329   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:07.360391   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:07.412146   73900 cri.go:89] found id: ""
	I0930 21:11:07.412171   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.412181   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:07.412187   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:07.412247   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:07.447179   73900 cri.go:89] found id: ""
	I0930 21:11:07.447209   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.447217   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:07.447225   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:07.447235   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:07.496304   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:07.496340   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:07.510332   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:07.510373   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:07.581335   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:07.581375   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:07.581393   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:07.664522   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:07.664558   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:04.568787   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:07.069201   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:09.070583   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:05.468251   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:07.970913   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:05.308084   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:07.807712   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:10.201145   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:10.213605   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:10.213663   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:10.247875   73900 cri.go:89] found id: ""
	I0930 21:11:10.247904   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.247913   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:10.247918   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:10.247966   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:10.280855   73900 cri.go:89] found id: ""
	I0930 21:11:10.280889   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.280900   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:10.280907   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:10.280967   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:10.315638   73900 cri.go:89] found id: ""
	I0930 21:11:10.315661   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.315669   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:10.315675   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:10.315722   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:10.357059   73900 cri.go:89] found id: ""
	I0930 21:11:10.357086   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.357094   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:10.357100   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:10.357154   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:10.389969   73900 cri.go:89] found id: ""
	I0930 21:11:10.389997   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.390004   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:10.390009   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:10.390060   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:10.424424   73900 cri.go:89] found id: ""
	I0930 21:11:10.424454   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.424463   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:10.424469   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:10.424533   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:10.457608   73900 cri.go:89] found id: ""
	I0930 21:11:10.457638   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.457650   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:10.457657   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:10.457712   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:10.490215   73900 cri.go:89] found id: ""
	I0930 21:11:10.490244   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.490253   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:10.490263   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:10.490278   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:10.554787   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:10.554814   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:10.554829   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:10.632428   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:10.632464   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:10.671018   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:10.671054   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:10.721187   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:10.721228   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:11.568643   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:13.568765   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:10.469296   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:12.968274   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:10.307487   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:12.307960   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:14.808087   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:13.234687   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:13.250680   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:13.250778   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:13.312468   73900 cri.go:89] found id: ""
	I0930 21:11:13.312499   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.312509   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:13.312516   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:13.312578   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:13.367051   73900 cri.go:89] found id: ""
	I0930 21:11:13.367073   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.367084   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:13.367091   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:13.367149   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:13.403019   73900 cri.go:89] found id: ""
	I0930 21:11:13.403055   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.403066   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:13.403074   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:13.403135   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:13.436942   73900 cri.go:89] found id: ""
	I0930 21:11:13.436967   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.436975   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:13.436981   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:13.437047   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:13.470491   73900 cri.go:89] found id: ""
	I0930 21:11:13.470515   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.470523   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:13.470528   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:13.470619   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:13.504078   73900 cri.go:89] found id: ""
	I0930 21:11:13.504112   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.504121   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:13.504127   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:13.504201   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:13.536245   73900 cri.go:89] found id: ""
	I0930 21:11:13.536271   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.536292   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:13.536297   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:13.536357   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:13.570794   73900 cri.go:89] found id: ""
	I0930 21:11:13.570817   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.570827   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:13.570836   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:13.570850   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:13.647919   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:13.647941   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:13.647956   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:13.726113   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:13.726150   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:13.767916   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:13.767942   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:13.826362   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:13.826402   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:16.341252   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:16.354259   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:16.354344   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:16.388627   73900 cri.go:89] found id: ""
	I0930 21:11:16.388650   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.388658   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:16.388663   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:16.388714   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:16.424848   73900 cri.go:89] found id: ""
	I0930 21:11:16.424871   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.424878   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:16.424883   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:16.424941   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:16.460604   73900 cri.go:89] found id: ""
	I0930 21:11:16.460626   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.460635   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:16.460640   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:16.460688   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:16.495908   73900 cri.go:89] found id: ""
	I0930 21:11:16.495932   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.495940   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:16.495946   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:16.496000   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:16.531758   73900 cri.go:89] found id: ""
	I0930 21:11:16.531782   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.531790   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:16.531796   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:16.531853   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:16.566756   73900 cri.go:89] found id: ""
	I0930 21:11:16.566782   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.566792   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:16.566799   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:16.566864   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:16.601978   73900 cri.go:89] found id: ""
	I0930 21:11:16.602005   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.602012   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:16.602022   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:16.602081   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:16.636009   73900 cri.go:89] found id: ""
	I0930 21:11:16.636044   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.636056   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:16.636066   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:16.636079   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:16.688750   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:16.688786   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:16.702364   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:16.702404   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:16.767119   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:16.767175   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:16.767188   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:16.842052   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:16.842095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:15.571440   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:18.068441   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:15.469030   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:17.970779   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:17.307424   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:19.807193   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:19.380570   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:19.394687   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:19.394816   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:19.427087   73900 cri.go:89] found id: ""
	I0930 21:11:19.427116   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.427124   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:19.427129   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:19.427178   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:19.461074   73900 cri.go:89] found id: ""
	I0930 21:11:19.461098   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.461108   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:19.461122   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:19.461183   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:19.494850   73900 cri.go:89] found id: ""
	I0930 21:11:19.494872   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.494880   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:19.494885   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:19.494943   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:19.533448   73900 cri.go:89] found id: ""
	I0930 21:11:19.533480   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.533493   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:19.533500   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:19.533562   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:19.569250   73900 cri.go:89] found id: ""
	I0930 21:11:19.569280   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.569291   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:19.569298   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:19.569383   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:19.603182   73900 cri.go:89] found id: ""
	I0930 21:11:19.603206   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.603213   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:19.603219   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:19.603268   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:19.637411   73900 cri.go:89] found id: ""
	I0930 21:11:19.637433   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.637441   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:19.637447   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:19.637500   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:19.672789   73900 cri.go:89] found id: ""
	I0930 21:11:19.672821   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.672831   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:19.672841   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:19.672854   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:19.755002   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:19.755039   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:19.796499   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:19.796536   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:19.847235   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:19.847272   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:19.861007   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:19.861032   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:19.931214   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:22.431506   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:22.446129   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:22.446199   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:22.484093   73900 cri.go:89] found id: ""
	I0930 21:11:22.484119   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.484126   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:22.484132   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:22.484183   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:22.516949   73900 cri.go:89] found id: ""
	I0930 21:11:22.516986   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.516994   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:22.517001   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:22.517056   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:22.550848   73900 cri.go:89] found id: ""
	I0930 21:11:22.550883   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.550898   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:22.550906   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:22.550966   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:22.586459   73900 cri.go:89] found id: ""
	I0930 21:11:22.586490   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.586498   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:22.586505   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:22.586627   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:22.620538   73900 cri.go:89] found id: ""
	I0930 21:11:22.620566   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.620578   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:22.620586   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:22.620651   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:22.658256   73900 cri.go:89] found id: ""
	I0930 21:11:22.658279   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.658287   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:22.658292   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:22.658352   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:22.690316   73900 cri.go:89] found id: ""
	I0930 21:11:22.690349   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.690365   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:22.690371   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:22.690431   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:22.724234   73900 cri.go:89] found id: ""
	I0930 21:11:22.724264   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.724275   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:22.724285   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:22.724299   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:20.570198   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:23.072974   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:20.468122   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:22.968686   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:22.307398   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:24.806972   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:22.777460   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:22.777503   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:22.790850   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:22.790879   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:22.866058   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:22.866079   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:22.866095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:22.947447   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:22.947488   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:25.486733   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:25.499906   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:25.499976   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:25.533819   73900 cri.go:89] found id: ""
	I0930 21:11:25.533842   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.533850   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:25.533857   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:25.533906   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:25.568037   73900 cri.go:89] found id: ""
	I0930 21:11:25.568059   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.568066   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:25.568071   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:25.568129   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:25.601784   73900 cri.go:89] found id: ""
	I0930 21:11:25.601811   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.601819   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:25.601824   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:25.601876   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:25.638048   73900 cri.go:89] found id: ""
	I0930 21:11:25.638070   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.638078   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:25.638084   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:25.638140   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:25.669946   73900 cri.go:89] found id: ""
	I0930 21:11:25.669968   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.669976   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:25.669981   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:25.670028   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:25.701928   73900 cri.go:89] found id: ""
	I0930 21:11:25.701953   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.701961   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:25.701967   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:25.702025   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:25.744295   73900 cri.go:89] found id: ""
	I0930 21:11:25.744327   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.744335   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:25.744341   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:25.744398   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:25.780175   73900 cri.go:89] found id: ""
	I0930 21:11:25.780205   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.780213   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:25.780221   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:25.780232   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:25.828774   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:25.828812   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:25.842624   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:25.842649   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:25.916408   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:25.916451   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:25.916469   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:25.997896   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:25.997932   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:25.570148   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:28.068628   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:25.467356   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:27.467782   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:29.467936   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:27.306939   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:29.807156   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:28.540994   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:28.553841   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:28.553904   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:28.588718   73900 cri.go:89] found id: ""
	I0930 21:11:28.588745   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.588754   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:28.588763   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:28.588809   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:28.636210   73900 cri.go:89] found id: ""
	I0930 21:11:28.636237   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.636245   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:28.636250   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:28.636312   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:28.668714   73900 cri.go:89] found id: ""
	I0930 21:11:28.668743   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.668751   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:28.668757   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:28.668804   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:28.700413   73900 cri.go:89] found id: ""
	I0930 21:11:28.700449   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.700462   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:28.700469   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:28.700522   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:28.733409   73900 cri.go:89] found id: ""
	I0930 21:11:28.733433   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.733441   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:28.733446   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:28.733494   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:28.766917   73900 cri.go:89] found id: ""
	I0930 21:11:28.766957   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.766970   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:28.766979   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:28.767046   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:28.801759   73900 cri.go:89] found id: ""
	I0930 21:11:28.801788   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.801798   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:28.801805   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:28.801851   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:28.840724   73900 cri.go:89] found id: ""
	I0930 21:11:28.840761   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.840770   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:28.840790   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:28.840805   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:28.854426   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:28.854465   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:28.926650   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:28.926675   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:28.926690   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:29.005513   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:29.005569   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:29.047077   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:29.047102   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:31.603193   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:31.615563   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:31.615631   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:31.647656   73900 cri.go:89] found id: ""
	I0930 21:11:31.647685   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.647693   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:31.647699   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:31.647748   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:31.680004   73900 cri.go:89] found id: ""
	I0930 21:11:31.680037   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.680048   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:31.680056   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:31.680120   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:31.712562   73900 cri.go:89] found id: ""
	I0930 21:11:31.712588   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.712596   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:31.712602   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:31.712650   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:31.747692   73900 cri.go:89] found id: ""
	I0930 21:11:31.747724   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.747732   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:31.747738   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:31.747803   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:31.781441   73900 cri.go:89] found id: ""
	I0930 21:11:31.781464   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.781472   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:31.781478   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:31.781532   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:31.822227   73900 cri.go:89] found id: ""
	I0930 21:11:31.822252   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.822259   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:31.822265   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:31.822322   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:31.856531   73900 cri.go:89] found id: ""
	I0930 21:11:31.856555   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.856563   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:31.856568   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:31.856631   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:31.894562   73900 cri.go:89] found id: ""
	I0930 21:11:31.894585   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.894593   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:31.894602   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:31.894618   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:31.946233   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:31.946271   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:31.960713   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:31.960744   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:32.036479   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:32.036497   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:32.036509   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:32.111442   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:32.111477   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:30.068975   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:32.069794   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:31.468374   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:33.468986   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:31.809169   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:34.307372   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:34.651545   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:34.664058   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:34.664121   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:34.697506   73900 cri.go:89] found id: ""
	I0930 21:11:34.697530   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.697539   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:34.697545   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:34.697599   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:34.730297   73900 cri.go:89] found id: ""
	I0930 21:11:34.730326   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.730334   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:34.730339   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:34.730390   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:34.762251   73900 cri.go:89] found id: ""
	I0930 21:11:34.762278   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.762286   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:34.762291   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:34.762358   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:34.803028   73900 cri.go:89] found id: ""
	I0930 21:11:34.803058   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.803068   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:34.803074   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:34.803122   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:34.840063   73900 cri.go:89] found id: ""
	I0930 21:11:34.840097   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.840110   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:34.840118   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:34.840192   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:34.878641   73900 cri.go:89] found id: ""
	I0930 21:11:34.878675   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.878686   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:34.878693   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:34.878745   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:34.910799   73900 cri.go:89] found id: ""
	I0930 21:11:34.910823   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.910830   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:34.910837   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:34.910899   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:34.947748   73900 cri.go:89] found id: ""
	I0930 21:11:34.947782   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.947795   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:34.947806   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:34.947821   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:35.026490   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:35.026514   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:35.026529   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:35.115504   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:35.115559   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:35.158629   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:35.158659   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:35.211011   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:35.211052   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:37.726260   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:37.739137   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:37.739222   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:34.568166   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:36.569720   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:39.069371   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:35.968574   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:38.467872   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:36.807057   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:38.807376   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:37.779980   73900 cri.go:89] found id: ""
	I0930 21:11:37.780009   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.780018   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:37.780024   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:37.780076   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:37.813936   73900 cri.go:89] found id: ""
	I0930 21:11:37.813961   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.813969   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:37.813975   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:37.814021   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:37.851150   73900 cri.go:89] found id: ""
	I0930 21:11:37.851176   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.851186   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:37.851193   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:37.851256   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:37.891855   73900 cri.go:89] found id: ""
	I0930 21:11:37.891881   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.891889   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:37.891894   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:37.891943   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:37.929234   73900 cri.go:89] found id: ""
	I0930 21:11:37.929269   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.929281   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:37.929288   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:37.929359   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:37.962350   73900 cri.go:89] found id: ""
	I0930 21:11:37.962378   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.962386   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:37.962391   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:37.962441   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:37.996727   73900 cri.go:89] found id: ""
	I0930 21:11:37.996752   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.996760   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:37.996765   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:37.996819   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:38.029959   73900 cri.go:89] found id: ""
	I0930 21:11:38.029991   73900 logs.go:276] 0 containers: []
	W0930 21:11:38.029999   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:38.030008   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:38.030019   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:38.079836   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:38.079875   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:38.093208   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:38.093236   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:38.168839   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:38.168862   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:38.168873   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:38.244747   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:38.244783   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:40.788841   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:40.802419   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:40.802491   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:40.837138   73900 cri.go:89] found id: ""
	I0930 21:11:40.837175   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.837186   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:40.837193   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:40.837255   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:40.870947   73900 cri.go:89] found id: ""
	I0930 21:11:40.870977   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.870987   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:40.870993   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:40.871040   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:40.905004   73900 cri.go:89] found id: ""
	I0930 21:11:40.905033   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.905046   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:40.905053   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:40.905104   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:40.936909   73900 cri.go:89] found id: ""
	I0930 21:11:40.936937   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.936945   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:40.936952   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:40.937015   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:40.972601   73900 cri.go:89] found id: ""
	I0930 21:11:40.972630   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.972641   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:40.972646   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:40.972704   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:41.007539   73900 cri.go:89] found id: ""
	I0930 21:11:41.007583   73900 logs.go:276] 0 containers: []
	W0930 21:11:41.007594   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:41.007602   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:41.007661   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:41.042049   73900 cri.go:89] found id: ""
	I0930 21:11:41.042075   73900 logs.go:276] 0 containers: []
	W0930 21:11:41.042084   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:41.042091   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:41.042153   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:41.075313   73900 cri.go:89] found id: ""
	I0930 21:11:41.075398   73900 logs.go:276] 0 containers: []
	W0930 21:11:41.075414   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:41.075424   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:41.075440   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:41.128683   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:41.128726   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:41.142533   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:41.142560   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:41.210149   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:41.210176   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:41.210191   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:41.286547   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:41.286590   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:41.070042   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:43.570819   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:40.969912   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:43.468434   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:40.808294   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:43.307628   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:43.828902   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:43.842047   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:43.842127   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:43.876147   73900 cri.go:89] found id: ""
	I0930 21:11:43.876177   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.876187   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:43.876194   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:43.876287   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:43.916351   73900 cri.go:89] found id: ""
	I0930 21:11:43.916383   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.916394   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:43.916404   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:43.916457   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:43.948853   73900 cri.go:89] found id: ""
	I0930 21:11:43.948883   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.948894   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:43.948900   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:43.948967   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:43.983525   73900 cri.go:89] found id: ""
	I0930 21:11:43.983577   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.983589   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:43.983597   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:43.983656   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:44.021560   73900 cri.go:89] found id: ""
	I0930 21:11:44.021594   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.021606   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:44.021614   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:44.021684   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:44.057307   73900 cri.go:89] found id: ""
	I0930 21:11:44.057342   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.057353   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:44.057361   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:44.057418   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:44.091120   73900 cri.go:89] found id: ""
	I0930 21:11:44.091145   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.091155   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:44.091162   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:44.091223   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:44.125781   73900 cri.go:89] found id: ""
	I0930 21:11:44.125808   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.125817   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:44.125827   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:44.125842   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:44.138699   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:44.138726   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:44.208976   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:44.209009   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:44.209026   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:44.285552   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:44.285593   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:44.323412   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:44.323449   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:46.875210   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:46.888532   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:46.888596   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:46.921260   73900 cri.go:89] found id: ""
	I0930 21:11:46.921285   73900 logs.go:276] 0 containers: []
	W0930 21:11:46.921293   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:46.921299   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:46.921357   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:46.954645   73900 cri.go:89] found id: ""
	I0930 21:11:46.954675   73900 logs.go:276] 0 containers: []
	W0930 21:11:46.954683   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:46.954688   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:46.954749   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:46.988424   73900 cri.go:89] found id: ""
	I0930 21:11:46.988457   73900 logs.go:276] 0 containers: []
	W0930 21:11:46.988468   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:46.988475   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:46.988535   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:47.022635   73900 cri.go:89] found id: ""
	I0930 21:11:47.022664   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.022675   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:47.022682   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:47.022744   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:47.056497   73900 cri.go:89] found id: ""
	I0930 21:11:47.056523   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.056530   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:47.056536   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:47.056595   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:47.094983   73900 cri.go:89] found id: ""
	I0930 21:11:47.095011   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.095021   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:47.095028   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:47.095097   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:47.147567   73900 cri.go:89] found id: ""
	I0930 21:11:47.147595   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.147606   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:47.147613   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:47.147692   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:47.184878   73900 cri.go:89] found id: ""
	I0930 21:11:47.184908   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.184919   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:47.184930   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:47.184943   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:47.258581   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:47.258615   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:47.303068   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:47.303100   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:47.358749   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:47.358789   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:47.372492   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:47.372531   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:47.443984   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:46.069421   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:48.569013   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:45.968422   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:47.968876   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:45.808341   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:48.306627   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:49.944644   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:49.958045   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:49.958124   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:49.993053   73900 cri.go:89] found id: ""
	I0930 21:11:49.993088   73900 logs.go:276] 0 containers: []
	W0930 21:11:49.993100   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:49.993107   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:49.993168   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:50.026171   73900 cri.go:89] found id: ""
	I0930 21:11:50.026197   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.026205   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:50.026210   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:50.026269   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:50.060462   73900 cri.go:89] found id: ""
	I0930 21:11:50.060492   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.060502   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:50.060509   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:50.060567   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:50.095385   73900 cri.go:89] found id: ""
	I0930 21:11:50.095414   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.095425   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:50.095432   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:50.095507   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:50.127275   73900 cri.go:89] found id: ""
	I0930 21:11:50.127300   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.127308   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:50.127318   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:50.127378   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:50.159810   73900 cri.go:89] found id: ""
	I0930 21:11:50.159836   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.159845   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:50.159850   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:50.159906   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:50.191651   73900 cri.go:89] found id: ""
	I0930 21:11:50.191684   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.191695   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:50.191702   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:50.191774   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:50.225772   73900 cri.go:89] found id: ""
	I0930 21:11:50.225799   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.225809   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:50.225819   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:50.225837   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:50.310189   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:50.310223   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:50.348934   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:50.348965   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:50.400666   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:50.400703   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:50.415810   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:50.415843   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:50.483773   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:51.069928   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:53.070065   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:50.469516   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:52.968367   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:54.968624   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:50.307903   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:52.807610   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:52.984701   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:52.997669   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:52.997745   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:53.034012   73900 cri.go:89] found id: ""
	I0930 21:11:53.034044   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.034055   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:53.034063   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:53.034121   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:53.068192   73900 cri.go:89] found id: ""
	I0930 21:11:53.068215   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.068222   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:53.068228   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:53.068285   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:53.104683   73900 cri.go:89] found id: ""
	I0930 21:11:53.104710   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.104719   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:53.104724   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:53.104778   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:53.138713   73900 cri.go:89] found id: ""
	I0930 21:11:53.138745   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.138753   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:53.138759   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:53.138814   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:53.173955   73900 cri.go:89] found id: ""
	I0930 21:11:53.173982   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.173994   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:53.174001   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:53.174060   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:53.205942   73900 cri.go:89] found id: ""
	I0930 21:11:53.205970   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.205980   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:53.205987   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:53.206052   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:53.241739   73900 cri.go:89] found id: ""
	I0930 21:11:53.241767   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.241776   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:53.241782   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:53.241832   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:53.275328   73900 cri.go:89] found id: ""
	I0930 21:11:53.275363   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.275372   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:53.275381   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:53.275397   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:53.313732   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:53.313761   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:53.364974   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:53.365011   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:53.377970   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:53.377999   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:53.445341   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:53.445370   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:53.445388   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:56.025958   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:56.038367   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:56.038434   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:56.074721   73900 cri.go:89] found id: ""
	I0930 21:11:56.074756   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.074767   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:56.074781   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:56.074846   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:56.111491   73900 cri.go:89] found id: ""
	I0930 21:11:56.111525   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.111550   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:56.111572   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:56.111626   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:56.145660   73900 cri.go:89] found id: ""
	I0930 21:11:56.145690   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.145701   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:56.145708   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:56.145769   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:56.180865   73900 cri.go:89] found id: ""
	I0930 21:11:56.180891   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.180901   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:56.180908   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:56.180971   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:56.213681   73900 cri.go:89] found id: ""
	I0930 21:11:56.213707   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.213716   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:56.213721   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:56.213772   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:56.246683   73900 cri.go:89] found id: ""
	I0930 21:11:56.246711   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.246719   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:56.246724   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:56.246774   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:56.279651   73900 cri.go:89] found id: ""
	I0930 21:11:56.279679   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.279687   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:56.279692   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:56.279746   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:56.316701   73900 cri.go:89] found id: ""
	I0930 21:11:56.316727   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.316735   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:56.316743   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:56.316753   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:56.329879   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:56.329905   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:56.399919   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:56.399949   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:56.399964   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:56.480200   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:56.480237   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:56.517755   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:56.517782   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:55.568782   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:58.068718   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:57.468492   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:59.968123   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:55.307809   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:57.308095   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:59.807355   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:59.070677   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:59.085884   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:59.085956   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:59.119580   73900 cri.go:89] found id: ""
	I0930 21:11:59.119606   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.119615   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:59.119621   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:59.119667   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:59.152087   73900 cri.go:89] found id: ""
	I0930 21:11:59.152111   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.152120   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:59.152127   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:59.152172   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:59.186177   73900 cri.go:89] found id: ""
	I0930 21:11:59.186205   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.186213   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:59.186220   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:59.186276   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:59.218800   73900 cri.go:89] found id: ""
	I0930 21:11:59.218821   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.218829   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:59.218835   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:59.218893   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:59.254335   73900 cri.go:89] found id: ""
	I0930 21:11:59.254361   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.254372   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:59.254378   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:59.254432   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:59.292406   73900 cri.go:89] found id: ""
	I0930 21:11:59.292441   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.292453   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:59.292460   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:59.292522   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:59.333352   73900 cri.go:89] found id: ""
	I0930 21:11:59.333388   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.333399   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:59.333406   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:59.333481   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:59.377031   73900 cri.go:89] found id: ""
	I0930 21:11:59.377056   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.377064   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:59.377072   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:59.377084   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:59.392626   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:59.392655   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:59.473714   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:59.473741   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:59.473754   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:59.548895   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:59.548931   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:59.589007   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:59.589039   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:02.139243   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:02.152335   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:02.152415   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:02.186942   73900 cri.go:89] found id: ""
	I0930 21:12:02.186980   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.186991   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:02.186999   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:02.187061   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:02.219738   73900 cri.go:89] found id: ""
	I0930 21:12:02.219759   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.219768   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:02.219773   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:02.219820   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:02.253667   73900 cri.go:89] found id: ""
	I0930 21:12:02.253698   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.253707   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:02.253712   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:02.253760   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:02.290078   73900 cri.go:89] found id: ""
	I0930 21:12:02.290105   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.290115   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:02.290122   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:02.290182   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:02.326408   73900 cri.go:89] found id: ""
	I0930 21:12:02.326436   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.326448   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:02.326455   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:02.326509   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:02.360608   73900 cri.go:89] found id: ""
	I0930 21:12:02.360641   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.360649   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:02.360655   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:02.360714   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:02.396140   73900 cri.go:89] found id: ""
	I0930 21:12:02.396166   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.396176   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:02.396182   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:02.396236   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:02.429905   73900 cri.go:89] found id: ""
	I0930 21:12:02.429947   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.429958   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:02.429968   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:02.429986   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:02.506600   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:02.506645   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:02.549325   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:02.549354   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:02.603614   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:02.603659   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:02.618832   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:02.618859   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:02.692491   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:00.070569   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:02.569436   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:01.968240   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:04.468583   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:02.306973   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:04.308182   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:05.193131   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:05.206133   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:05.206192   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:05.238403   73900 cri.go:89] found id: ""
	I0930 21:12:05.238431   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.238439   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:05.238447   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:05.238523   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:05.271261   73900 cri.go:89] found id: ""
	I0930 21:12:05.271290   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.271303   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:05.271310   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:05.271378   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:05.307718   73900 cri.go:89] found id: ""
	I0930 21:12:05.307749   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.307760   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:05.307767   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:05.307832   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:05.341336   73900 cri.go:89] found id: ""
	I0930 21:12:05.341379   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.341390   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:05.341398   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:05.341461   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:05.374998   73900 cri.go:89] found id: ""
	I0930 21:12:05.375024   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.375032   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:05.375037   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:05.375085   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:05.410133   73900 cri.go:89] found id: ""
	I0930 21:12:05.410163   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.410174   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:05.410182   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:05.410248   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:05.446197   73900 cri.go:89] found id: ""
	I0930 21:12:05.446227   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.446238   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:05.446246   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:05.446305   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:05.480638   73900 cri.go:89] found id: ""
	I0930 21:12:05.480667   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.480683   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:05.480691   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:05.480702   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:05.532473   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:05.532512   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:05.547068   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:05.547096   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:05.621444   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:05.621472   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:05.621487   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:05.707712   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:05.707767   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:05.068363   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:07.069531   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:06.969695   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:06.969727   73375 pod_ready.go:82] duration metric: took 4m0.008001407s for pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace to be "Ready" ...
	E0930 21:12:06.969736   73375 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0930 21:12:06.969743   73375 pod_ready.go:39] duration metric: took 4m4.053054405s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:12:06.969757   73375 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:12:06.969781   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:06.969835   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:07.024708   73375 cri.go:89] found id: "249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:07.024730   73375 cri.go:89] found id: ""
	I0930 21:12:07.024737   73375 logs.go:276] 1 containers: [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122]
	I0930 21:12:07.024805   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.029375   73375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:07.029439   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:07.063656   73375 cri.go:89] found id: "e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:07.063684   73375 cri.go:89] found id: ""
	I0930 21:12:07.063695   73375 logs.go:276] 1 containers: [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c]
	I0930 21:12:07.063754   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.068071   73375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:07.068126   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:07.102636   73375 cri.go:89] found id: "d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:07.102665   73375 cri.go:89] found id: ""
	I0930 21:12:07.102675   73375 logs.go:276] 1 containers: [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7]
	I0930 21:12:07.102733   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.106711   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:07.106791   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:07.142676   73375 cri.go:89] found id: "438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:07.142698   73375 cri.go:89] found id: ""
	I0930 21:12:07.142708   73375 logs.go:276] 1 containers: [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c]
	I0930 21:12:07.142766   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.146979   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:07.147041   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:07.189192   73375 cri.go:89] found id: "a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:07.189223   73375 cri.go:89] found id: ""
	I0930 21:12:07.189232   73375 logs.go:276] 1 containers: [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f]
	I0930 21:12:07.189283   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.193408   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:07.193484   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:07.230538   73375 cri.go:89] found id: "1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:07.230562   73375 cri.go:89] found id: ""
	I0930 21:12:07.230571   73375 logs.go:276] 1 containers: [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf]
	I0930 21:12:07.230630   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.235482   73375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:07.235573   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:07.274180   73375 cri.go:89] found id: ""
	I0930 21:12:07.274215   73375 logs.go:276] 0 containers: []
	W0930 21:12:07.274226   73375 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:07.274233   73375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:07.274312   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:07.312851   73375 cri.go:89] found id: "6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:07.312876   73375 cri.go:89] found id: "298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:07.312882   73375 cri.go:89] found id: ""
	I0930 21:12:07.312890   73375 logs.go:276] 2 containers: [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e]
	I0930 21:12:07.312947   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.317386   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.321912   73375 logs.go:123] Gathering logs for kube-proxy [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f] ...
	I0930 21:12:07.321940   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:07.361674   73375 logs.go:123] Gathering logs for storage-provisioner [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55] ...
	I0930 21:12:07.361701   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:07.398555   73375 logs.go:123] Gathering logs for storage-provisioner [298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e] ...
	I0930 21:12:07.398615   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:07.432511   73375 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:07.432540   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:07.919639   73375 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:07.919678   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:07.935038   73375 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:07.935067   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:08.059404   73375 logs.go:123] Gathering logs for kube-apiserver [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122] ...
	I0930 21:12:08.059435   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:08.114569   73375 logs.go:123] Gathering logs for kube-scheduler [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c] ...
	I0930 21:12:08.114605   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:08.153409   73375 logs.go:123] Gathering logs for container status ...
	I0930 21:12:08.153447   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:08.193155   73375 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:08.193187   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:08.260774   73375 logs.go:123] Gathering logs for etcd [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c] ...
	I0930 21:12:08.260814   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:08.351488   73375 logs.go:123] Gathering logs for coredns [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7] ...
	I0930 21:12:08.351519   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:08.387971   73375 logs.go:123] Gathering logs for kube-controller-manager [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf] ...
	I0930 21:12:08.388012   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:06.805971   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:08.807886   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:08.248038   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:08.261409   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:08.261485   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:08.305564   73900 cri.go:89] found id: ""
	I0930 21:12:08.305591   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.305601   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:08.305610   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:08.305669   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:08.347816   73900 cri.go:89] found id: ""
	I0930 21:12:08.347844   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.347852   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:08.347858   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:08.347927   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:08.381662   73900 cri.go:89] found id: ""
	I0930 21:12:08.381695   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.381705   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:08.381712   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:08.381829   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:08.427366   73900 cri.go:89] found id: ""
	I0930 21:12:08.427396   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.427406   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:08.427413   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:08.427476   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:08.463419   73900 cri.go:89] found id: ""
	I0930 21:12:08.463443   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.463451   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:08.463457   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:08.463508   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:08.496999   73900 cri.go:89] found id: ""
	I0930 21:12:08.497023   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.497033   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:08.497040   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:08.497098   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:08.530410   73900 cri.go:89] found id: ""
	I0930 21:12:08.530434   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.530442   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:08.530447   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:08.530495   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:08.563191   73900 cri.go:89] found id: ""
	I0930 21:12:08.563224   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.563235   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:08.563244   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:08.563258   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:08.640305   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:08.640341   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:08.676404   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:08.676431   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:08.729676   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:08.729736   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:08.743282   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:08.743310   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:08.811334   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:11.311643   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:11.329153   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:11.329229   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:11.369804   73900 cri.go:89] found id: ""
	I0930 21:12:11.369829   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.369838   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:11.369843   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:11.369896   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:11.408530   73900 cri.go:89] found id: ""
	I0930 21:12:11.408558   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.408569   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:11.408580   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:11.408663   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:11.446123   73900 cri.go:89] found id: ""
	I0930 21:12:11.446147   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.446155   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:11.446160   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:11.446206   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:11.484019   73900 cri.go:89] found id: ""
	I0930 21:12:11.484044   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.484052   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:11.484057   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:11.484118   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:11.521934   73900 cri.go:89] found id: ""
	I0930 21:12:11.521961   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.521971   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:11.521979   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:11.522042   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:11.561253   73900 cri.go:89] found id: ""
	I0930 21:12:11.561283   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.561293   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:11.561299   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:11.561352   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:11.602610   73900 cri.go:89] found id: ""
	I0930 21:12:11.602637   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.602648   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:11.602655   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:11.602760   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:11.637146   73900 cri.go:89] found id: ""
	I0930 21:12:11.637174   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.637185   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:11.637194   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:11.637208   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:11.707627   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:11.707651   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:11.707668   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:11.786047   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:11.786091   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:11.827128   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:11.827157   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:11.885504   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:11.885542   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:09.569584   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:11.570031   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:14.068184   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:10.950921   73375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:10.967834   73375 api_server.go:72] duration metric: took 4m15.348038807s to wait for apiserver process to appear ...
	I0930 21:12:10.967876   73375 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:12:10.967922   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:10.967990   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:11.006632   73375 cri.go:89] found id: "249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:11.006667   73375 cri.go:89] found id: ""
	I0930 21:12:11.006677   73375 logs.go:276] 1 containers: [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122]
	I0930 21:12:11.006738   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.010931   73375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:11.010994   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:11.045855   73375 cri.go:89] found id: "e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:11.045882   73375 cri.go:89] found id: ""
	I0930 21:12:11.045893   73375 logs.go:276] 1 containers: [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c]
	I0930 21:12:11.045953   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.050058   73375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:11.050134   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:11.090954   73375 cri.go:89] found id: "d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:11.090980   73375 cri.go:89] found id: ""
	I0930 21:12:11.090990   73375 logs.go:276] 1 containers: [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7]
	I0930 21:12:11.091041   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.095073   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:11.095150   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:11.137413   73375 cri.go:89] found id: "438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:11.137448   73375 cri.go:89] found id: ""
	I0930 21:12:11.137458   73375 logs.go:276] 1 containers: [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c]
	I0930 21:12:11.137516   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.141559   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:11.141638   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:11.176921   73375 cri.go:89] found id: "a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:11.176952   73375 cri.go:89] found id: ""
	I0930 21:12:11.176961   73375 logs.go:276] 1 containers: [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f]
	I0930 21:12:11.177010   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.181095   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:11.181158   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:11.215117   73375 cri.go:89] found id: "1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:11.215141   73375 cri.go:89] found id: ""
	I0930 21:12:11.215148   73375 logs.go:276] 1 containers: [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf]
	I0930 21:12:11.215195   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.218947   73375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:11.219003   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:11.253901   73375 cri.go:89] found id: ""
	I0930 21:12:11.253937   73375 logs.go:276] 0 containers: []
	W0930 21:12:11.253948   73375 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:11.253955   73375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:11.254010   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:11.293408   73375 cri.go:89] found id: "6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:11.293434   73375 cri.go:89] found id: "298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:11.293440   73375 cri.go:89] found id: ""
	I0930 21:12:11.293448   73375 logs.go:276] 2 containers: [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e]
	I0930 21:12:11.293562   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.297829   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.302572   73375 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:11.302596   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:11.378000   73375 logs.go:123] Gathering logs for coredns [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7] ...
	I0930 21:12:11.378037   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:11.415382   73375 logs.go:123] Gathering logs for kube-proxy [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f] ...
	I0930 21:12:11.415414   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:11.453703   73375 logs.go:123] Gathering logs for kube-controller-manager [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf] ...
	I0930 21:12:11.453729   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:11.517749   73375 logs.go:123] Gathering logs for storage-provisioner [298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e] ...
	I0930 21:12:11.517780   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:11.556543   73375 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:11.556576   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:12.023270   73375 logs.go:123] Gathering logs for container status ...
	I0930 21:12:12.023310   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:12.071138   73375 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:12.071170   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:12.086915   73375 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:12.086944   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:12.200046   73375 logs.go:123] Gathering logs for kube-apiserver [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122] ...
	I0930 21:12:12.200077   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:12.241447   73375 logs.go:123] Gathering logs for etcd [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c] ...
	I0930 21:12:12.241475   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:12.296574   73375 logs.go:123] Gathering logs for kube-scheduler [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c] ...
	I0930 21:12:12.296607   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:12.341982   73375 logs.go:123] Gathering logs for storage-provisioner [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55] ...
	I0930 21:12:12.342009   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:14.877590   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:12:14.882913   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 200:
	ok
	I0930 21:12:14.884088   73375 api_server.go:141] control plane version: v1.31.1
	I0930 21:12:14.884106   73375 api_server.go:131] duration metric: took 3.916223308s to wait for apiserver health ...
	I0930 21:12:14.884113   73375 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:12:14.884134   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:14.884185   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:14.926932   73375 cri.go:89] found id: "249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:14.926952   73375 cri.go:89] found id: ""
	I0930 21:12:14.926960   73375 logs.go:276] 1 containers: [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122]
	I0930 21:12:14.927003   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:14.931044   73375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:14.931106   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:14.967622   73375 cri.go:89] found id: "e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:14.967645   73375 cri.go:89] found id: ""
	I0930 21:12:14.967652   73375 logs.go:276] 1 containers: [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c]
	I0930 21:12:14.967698   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:14.972152   73375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:14.972221   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:11.307501   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:13.307687   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:14.400848   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:14.413794   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:14.413882   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:14.449799   73900 cri.go:89] found id: ""
	I0930 21:12:14.449830   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.449841   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:14.449849   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:14.449902   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:14.486301   73900 cri.go:89] found id: ""
	I0930 21:12:14.486330   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.486357   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:14.486365   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:14.486427   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:14.520451   73900 cri.go:89] found id: ""
	I0930 21:12:14.520479   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.520487   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:14.520497   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:14.520558   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:14.554056   73900 cri.go:89] found id: ""
	I0930 21:12:14.554095   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.554107   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:14.554114   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:14.554178   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:14.594054   73900 cri.go:89] found id: ""
	I0930 21:12:14.594080   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.594088   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:14.594094   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:14.594142   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:14.630225   73900 cri.go:89] found id: ""
	I0930 21:12:14.630255   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.630278   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:14.630284   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:14.630335   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:14.663006   73900 cri.go:89] found id: ""
	I0930 21:12:14.663043   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.663054   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:14.663061   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:14.663119   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:14.699815   73900 cri.go:89] found id: ""
	I0930 21:12:14.699845   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.699858   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:14.699870   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:14.699886   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:14.751465   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:14.751509   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:14.766401   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:14.766432   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:14.832979   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:14.833002   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:14.833016   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:14.918011   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:14.918051   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:17.458886   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:17.471833   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:17.471918   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:17.505109   73900 cri.go:89] found id: ""
	I0930 21:12:17.505135   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.505145   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:17.505151   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:17.505213   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:17.538091   73900 cri.go:89] found id: ""
	I0930 21:12:17.538118   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.538129   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:17.538136   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:17.538308   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:17.571668   73900 cri.go:89] found id: ""
	I0930 21:12:17.571694   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.571705   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:17.571712   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:17.571770   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:17.607391   73900 cri.go:89] found id: ""
	I0930 21:12:17.607431   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.607442   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:17.607452   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:17.607519   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:17.643271   73900 cri.go:89] found id: ""
	I0930 21:12:17.643297   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.643305   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:17.643313   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:17.643382   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:17.676653   73900 cri.go:89] found id: ""
	I0930 21:12:17.676687   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.676698   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:17.676708   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:17.676772   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:17.709570   73900 cri.go:89] found id: ""
	I0930 21:12:17.709602   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.709610   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:17.709615   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:17.709671   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:17.747857   73900 cri.go:89] found id: ""
	I0930 21:12:17.747883   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.747891   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:17.747902   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:17.747915   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:15.010874   73375 cri.go:89] found id: "d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:15.010898   73375 cri.go:89] found id: ""
	I0930 21:12:15.010905   73375 logs.go:276] 1 containers: [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7]
	I0930 21:12:15.010947   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.015490   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:15.015582   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:15.051182   73375 cri.go:89] found id: "438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:15.051210   73375 cri.go:89] found id: ""
	I0930 21:12:15.051220   73375 logs.go:276] 1 containers: [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c]
	I0930 21:12:15.051291   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.055057   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:15.055107   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:15.093126   73375 cri.go:89] found id: "a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:15.093150   73375 cri.go:89] found id: ""
	I0930 21:12:15.093159   73375 logs.go:276] 1 containers: [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f]
	I0930 21:12:15.093214   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.097138   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:15.097200   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:15.131676   73375 cri.go:89] found id: "1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:15.131704   73375 cri.go:89] found id: ""
	I0930 21:12:15.131716   73375 logs.go:276] 1 containers: [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf]
	I0930 21:12:15.131773   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.135550   73375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:15.135620   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:15.170579   73375 cri.go:89] found id: ""
	I0930 21:12:15.170604   73375 logs.go:276] 0 containers: []
	W0930 21:12:15.170612   73375 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:15.170618   73375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:15.170672   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:15.205190   73375 cri.go:89] found id: "6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:15.205216   73375 cri.go:89] found id: "298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:15.205222   73375 cri.go:89] found id: ""
	I0930 21:12:15.205231   73375 logs.go:276] 2 containers: [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e]
	I0930 21:12:15.205287   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.209426   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.212981   73375 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:15.213002   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:15.281543   73375 logs.go:123] Gathering logs for kube-proxy [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f] ...
	I0930 21:12:15.281582   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:15.325855   73375 logs.go:123] Gathering logs for container status ...
	I0930 21:12:15.325895   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:15.367382   73375 logs.go:123] Gathering logs for etcd [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c] ...
	I0930 21:12:15.367429   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:15.441395   73375 logs.go:123] Gathering logs for coredns [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7] ...
	I0930 21:12:15.441432   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:15.482487   73375 logs.go:123] Gathering logs for kube-scheduler [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c] ...
	I0930 21:12:15.482518   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:15.520298   73375 logs.go:123] Gathering logs for kube-controller-manager [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf] ...
	I0930 21:12:15.520335   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:15.572596   73375 logs.go:123] Gathering logs for storage-provisioner [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55] ...
	I0930 21:12:15.572626   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:15.618087   73375 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:15.618120   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:15.634125   73375 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:15.634151   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:15.744355   73375 logs.go:123] Gathering logs for kube-apiserver [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122] ...
	I0930 21:12:15.744390   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:15.799312   73375 logs.go:123] Gathering logs for storage-provisioner [298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e] ...
	I0930 21:12:15.799345   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:15.838934   73375 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:15.838969   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:18.759947   73375 system_pods.go:59] 8 kube-system pods found
	I0930 21:12:18.759976   73375 system_pods.go:61] "coredns-7c65d6cfc9-jg8ph" [46ba2867-485a-4b67-af4b-4de2c607d172] Running
	I0930 21:12:18.759981   73375 system_pods.go:61] "etcd-no-preload-997816" [1def50bb-1f1b-4d25-b797-38d5b782a674] Running
	I0930 21:12:18.759985   73375 system_pods.go:61] "kube-apiserver-no-preload-997816" [67313588-adcb-4d3f-ba8a-4e7a1ea5127b] Running
	I0930 21:12:18.759989   73375 system_pods.go:61] "kube-controller-manager-no-preload-997816" [b471888b-d4e6-4768-a246-f234ffcbf1c6] Running
	I0930 21:12:18.759992   73375 system_pods.go:61] "kube-proxy-klcv8" [133bcd7f-667d-4969-b063-d33e2c8eed0f] Running
	I0930 21:12:18.759995   73375 system_pods.go:61] "kube-scheduler-no-preload-997816" [130a7a05-0889-4562-afc6-bee3ba4970a1] Running
	I0930 21:12:18.760001   73375 system_pods.go:61] "metrics-server-6867b74b74-c2wpn" [2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:12:18.760006   73375 system_pods.go:61] "storage-provisioner" [01617edf-b831-48d3-9002-279b64f6389c] Running
	I0930 21:12:18.760016   73375 system_pods.go:74] duration metric: took 3.875896906s to wait for pod list to return data ...
	I0930 21:12:18.760024   73375 default_sa.go:34] waiting for default service account to be created ...
	I0930 21:12:18.762755   73375 default_sa.go:45] found service account: "default"
	I0930 21:12:18.762777   73375 default_sa.go:55] duration metric: took 2.746721ms for default service account to be created ...
	I0930 21:12:18.762787   73375 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 21:12:18.769060   73375 system_pods.go:86] 8 kube-system pods found
	I0930 21:12:18.769086   73375 system_pods.go:89] "coredns-7c65d6cfc9-jg8ph" [46ba2867-485a-4b67-af4b-4de2c607d172] Running
	I0930 21:12:18.769091   73375 system_pods.go:89] "etcd-no-preload-997816" [1def50bb-1f1b-4d25-b797-38d5b782a674] Running
	I0930 21:12:18.769095   73375 system_pods.go:89] "kube-apiserver-no-preload-997816" [67313588-adcb-4d3f-ba8a-4e7a1ea5127b] Running
	I0930 21:12:18.769099   73375 system_pods.go:89] "kube-controller-manager-no-preload-997816" [b471888b-d4e6-4768-a246-f234ffcbf1c6] Running
	I0930 21:12:18.769104   73375 system_pods.go:89] "kube-proxy-klcv8" [133bcd7f-667d-4969-b063-d33e2c8eed0f] Running
	I0930 21:12:18.769107   73375 system_pods.go:89] "kube-scheduler-no-preload-997816" [130a7a05-0889-4562-afc6-bee3ba4970a1] Running
	I0930 21:12:18.769113   73375 system_pods.go:89] "metrics-server-6867b74b74-c2wpn" [2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:12:18.769129   73375 system_pods.go:89] "storage-provisioner" [01617edf-b831-48d3-9002-279b64f6389c] Running
	I0930 21:12:18.769136   73375 system_pods.go:126] duration metric: took 6.344583ms to wait for k8s-apps to be running ...
	I0930 21:12:18.769144   73375 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 21:12:18.769183   73375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:12:18.785488   73375 system_svc.go:56] duration metric: took 16.335135ms WaitForService to wait for kubelet
	I0930 21:12:18.785544   73375 kubeadm.go:582] duration metric: took 4m23.165751441s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:12:18.785572   73375 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:12:18.789308   73375 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:12:18.789340   73375 node_conditions.go:123] node cpu capacity is 2
	I0930 21:12:18.789356   73375 node_conditions.go:105] duration metric: took 3.778609ms to run NodePressure ...
	I0930 21:12:18.789370   73375 start.go:241] waiting for startup goroutines ...
	I0930 21:12:18.789379   73375 start.go:246] waiting for cluster config update ...
	I0930 21:12:18.789394   73375 start.go:255] writing updated cluster config ...
	I0930 21:12:18.789688   73375 ssh_runner.go:195] Run: rm -f paused
	I0930 21:12:18.837384   73375 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 21:12:18.839699   73375 out.go:177] * Done! kubectl is now configured to use "no-preload-997816" cluster and "default" namespace by default
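The per-component container lookups logged above all reduce to the same crictl calls on the minikube node. A minimal sketch of how to repeat them by hand, assuming `minikube ssh -p no-preload-997816` (profile name taken from the log) gives a shell on that node; the container ID is a placeholder to be filled from the first command's output:

    # list container IDs for a given control-plane component (all states, IDs only)
    sudo crictl ps -a --quiet --name=kube-apiserver
    # dump the last 400 log lines for one of the IDs printed above (placeholder ID)
    sudo /usr/bin/crictl logs --tail 400 <container-id>
    # the unit-level logs minikube gathers alongside the container logs
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400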
	I0930 21:12:16.070108   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:18.569568   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:15.308534   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:15.308581   73707 pod_ready.go:82] duration metric: took 4m0.007893146s for pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace to be "Ready" ...
	E0930 21:12:15.308595   73707 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0930 21:12:15.308605   73707 pod_ready.go:39] duration metric: took 4m2.806797001s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:12:15.308621   73707 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:12:15.308657   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:15.308722   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:15.353287   73707 cri.go:89] found id: "f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:15.353348   73707 cri.go:89] found id: ""
	I0930 21:12:15.353359   73707 logs.go:276] 1 containers: [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140]
	I0930 21:12:15.353416   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.357602   73707 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:15.357696   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:15.399289   73707 cri.go:89] found id: "7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:15.399325   73707 cri.go:89] found id: ""
	I0930 21:12:15.399332   73707 logs.go:276] 1 containers: [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711]
	I0930 21:12:15.399377   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.404757   73707 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:15.404832   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:15.454396   73707 cri.go:89] found id: "ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:15.454423   73707 cri.go:89] found id: ""
	I0930 21:12:15.454433   73707 logs.go:276] 1 containers: [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49]
	I0930 21:12:15.454493   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.458660   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:15.458743   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:15.493941   73707 cri.go:89] found id: "0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:15.493971   73707 cri.go:89] found id: ""
	I0930 21:12:15.493982   73707 logs.go:276] 1 containers: [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4]
	I0930 21:12:15.494055   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.498541   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:15.498628   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:15.535354   73707 cri.go:89] found id: "5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:15.535385   73707 cri.go:89] found id: ""
	I0930 21:12:15.535395   73707 logs.go:276] 1 containers: [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8]
	I0930 21:12:15.535454   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.540097   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:15.540168   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:15.583969   73707 cri.go:89] found id: "d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:15.583996   73707 cri.go:89] found id: ""
	I0930 21:12:15.584003   73707 logs.go:276] 1 containers: [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8]
	I0930 21:12:15.584051   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.589193   73707 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:15.589260   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:15.629413   73707 cri.go:89] found id: ""
	I0930 21:12:15.629440   73707 logs.go:276] 0 containers: []
	W0930 21:12:15.629449   73707 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:15.629454   73707 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:15.629506   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:15.670129   73707 cri.go:89] found id: "3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:15.670160   73707 cri.go:89] found id: "1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:15.670166   73707 cri.go:89] found id: ""
	I0930 21:12:15.670175   73707 logs.go:276] 2 containers: [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342]
	I0930 21:12:15.670237   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.674227   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.678252   73707 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:15.678276   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:15.758280   73707 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:15.758319   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:15.778191   73707 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:15.778222   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:15.930379   73707 logs.go:123] Gathering logs for coredns [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49] ...
	I0930 21:12:15.930422   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:15.966732   73707 logs.go:123] Gathering logs for storage-provisioner [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd] ...
	I0930 21:12:15.966759   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:16.004304   73707 logs.go:123] Gathering logs for storage-provisioner [1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342] ...
	I0930 21:12:16.004337   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:16.043705   73707 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:16.043733   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:16.600173   73707 logs.go:123] Gathering logs for container status ...
	I0930 21:12:16.600210   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:16.651837   73707 logs.go:123] Gathering logs for kube-apiserver [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140] ...
	I0930 21:12:16.651868   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:16.695122   73707 logs.go:123] Gathering logs for etcd [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711] ...
	I0930 21:12:16.695155   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:16.737622   73707 logs.go:123] Gathering logs for kube-scheduler [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4] ...
	I0930 21:12:16.737671   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:16.772913   73707 logs.go:123] Gathering logs for kube-proxy [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8] ...
	I0930 21:12:16.772944   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:16.808196   73707 logs.go:123] Gathering logs for kube-controller-manager [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8] ...
	I0930 21:12:16.808224   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:19.368150   73707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:19.385771   73707 api_server.go:72] duration metric: took 4m14.101602019s to wait for apiserver process to appear ...
	I0930 21:12:19.385798   73707 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:12:19.385831   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:19.385889   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:19.421325   73707 cri.go:89] found id: "f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:19.421354   73707 cri.go:89] found id: ""
	I0930 21:12:19.421364   73707 logs.go:276] 1 containers: [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140]
	I0930 21:12:19.421426   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.428045   73707 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:19.428107   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:19.466034   73707 cri.go:89] found id: "7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:19.466054   73707 cri.go:89] found id: ""
	I0930 21:12:19.466061   73707 logs.go:276] 1 containers: [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711]
	I0930 21:12:19.466102   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.470155   73707 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:19.470222   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:19.504774   73707 cri.go:89] found id: "ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:19.504799   73707 cri.go:89] found id: ""
	I0930 21:12:19.504806   73707 logs.go:276] 1 containers: [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49]
	I0930 21:12:19.504869   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.509044   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:19.509134   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:19.544204   73707 cri.go:89] found id: "0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:19.544228   73707 cri.go:89] found id: ""
	I0930 21:12:19.544235   73707 logs.go:276] 1 containers: [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4]
	I0930 21:12:19.544293   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.549103   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:19.549194   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:19.591381   73707 cri.go:89] found id: "5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:19.591416   73707 cri.go:89] found id: ""
	I0930 21:12:19.591425   73707 logs.go:276] 1 containers: [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8]
	I0930 21:12:19.591472   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.595522   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:19.595621   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:19.634816   73707 cri.go:89] found id: "d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:19.634841   73707 cri.go:89] found id: ""
	I0930 21:12:19.634850   73707 logs.go:276] 1 containers: [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8]
	I0930 21:12:19.634894   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.639391   73707 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:19.639450   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:19.675056   73707 cri.go:89] found id: ""
	I0930 21:12:19.675084   73707 logs.go:276] 0 containers: []
	W0930 21:12:19.675095   73707 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:19.675102   73707 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:19.675159   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:19.708641   73707 cri.go:89] found id: "3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:19.708666   73707 cri.go:89] found id: "1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:19.708672   73707 cri.go:89] found id: ""
	I0930 21:12:19.708682   73707 logs.go:276] 2 containers: [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342]
	I0930 21:12:19.708738   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.712636   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.716653   73707 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:19.716680   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:19.785159   73707 logs.go:123] Gathering logs for kube-proxy [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8] ...
	I0930 21:12:19.785203   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:19.823462   73707 logs.go:123] Gathering logs for storage-provisioner [1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342] ...
	I0930 21:12:19.823490   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:19.856776   73707 logs.go:123] Gathering logs for coredns [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49] ...
	I0930 21:12:19.856808   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:19.893919   73707 logs.go:123] Gathering logs for kube-scheduler [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4] ...
	I0930 21:12:19.893948   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:19.930932   73707 logs.go:123] Gathering logs for kube-controller-manager [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8] ...
	I0930 21:12:19.930978   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:19.988120   73707 logs.go:123] Gathering logs for storage-provisioner [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd] ...
	I0930 21:12:19.988164   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:20.027576   73707 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:20.027618   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:20.041523   73707 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:20.041557   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:20.157598   73707 logs.go:123] Gathering logs for kube-apiserver [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140] ...
	I0930 21:12:20.157630   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:20.213353   73707 logs.go:123] Gathering logs for etcd [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711] ...
	I0930 21:12:20.213384   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:20.254502   73707 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:20.254533   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:17.824584   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:17.824623   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:17.862613   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:17.862643   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:17.915954   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:17.915992   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:17.929824   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:17.929853   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:17.999697   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:20.500449   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:20.514042   73900 kubeadm.go:597] duration metric: took 4m1.91059878s to restartPrimaryControlPlane
	W0930 21:12:20.514119   73900 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0930 21:12:20.514158   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0930 21:12:21.675376   73900 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.161176988s)
	I0930 21:12:21.675465   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:12:21.689467   73900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:12:21.698504   73900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:12:21.708418   73900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:12:21.708437   73900 kubeadm.go:157] found existing configuration files:
	
	I0930 21:12:21.708483   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:12:21.716960   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:12:21.717019   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:12:21.727610   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:12:21.736212   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:12:21.736275   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:12:21.745512   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:12:21.754299   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:12:21.754366   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:12:21.763724   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:12:21.772521   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:12:21.772595   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:12:21.782980   73900 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 21:12:21.850463   73900 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0930 21:12:21.850558   73900 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 21:12:21.991521   73900 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 21:12:21.991706   73900 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 21:12:21.991849   73900 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0930 21:12:22.174876   73900 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 21:12:22.177037   73900 out.go:235]   - Generating certificates and keys ...
	I0930 21:12:22.177155   73900 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 21:12:22.177253   73900 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 21:12:22.177379   73900 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 21:12:22.178789   73900 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 21:12:22.178860   73900 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 21:12:22.178907   73900 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 21:12:22.178961   73900 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 21:12:22.179017   73900 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 21:12:22.179139   73900 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 21:12:22.179247   73900 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 21:12:22.179310   73900 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 21:12:22.179398   73900 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 21:12:22.253256   73900 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 21:12:22.661237   73900 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 21:12:22.947987   73900 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 21:12:23.170995   73900 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 21:12:23.184583   73900 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 21:12:23.185770   73900 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 21:12:23.185813   73900 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 21:12:23.334769   73900 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 21:12:21.069777   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:23.070328   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:20.696951   73707 logs.go:123] Gathering logs for container status ...
	I0930 21:12:20.696989   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:23.236734   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:12:23.241215   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 200:
	ok
	I0930 21:12:23.242629   73707 api_server.go:141] control plane version: v1.31.1
	I0930 21:12:23.242651   73707 api_server.go:131] duration metric: took 3.856847284s to wait for apiserver health ...
	I0930 21:12:23.242660   73707 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:12:23.242680   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:23.242724   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:23.279601   73707 cri.go:89] found id: "f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:23.279626   73707 cri.go:89] found id: ""
	I0930 21:12:23.279633   73707 logs.go:276] 1 containers: [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140]
	I0930 21:12:23.279692   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.283900   73707 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:23.283977   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:23.320360   73707 cri.go:89] found id: "7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:23.320397   73707 cri.go:89] found id: ""
	I0930 21:12:23.320410   73707 logs.go:276] 1 containers: [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711]
	I0930 21:12:23.320472   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.324745   73707 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:23.324825   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:23.368001   73707 cri.go:89] found id: "ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:23.368024   73707 cri.go:89] found id: ""
	I0930 21:12:23.368034   73707 logs.go:276] 1 containers: [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49]
	I0930 21:12:23.368095   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.372001   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:23.372077   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:23.408203   73707 cri.go:89] found id: "0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:23.408234   73707 cri.go:89] found id: ""
	I0930 21:12:23.408242   73707 logs.go:276] 1 containers: [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4]
	I0930 21:12:23.408299   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.412328   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:23.412397   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:23.462142   73707 cri.go:89] found id: "5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:23.462173   73707 cri.go:89] found id: ""
	I0930 21:12:23.462183   73707 logs.go:276] 1 containers: [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8]
	I0930 21:12:23.462247   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.466257   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:23.466336   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:23.509075   73707 cri.go:89] found id: "d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:23.509098   73707 cri.go:89] found id: ""
	I0930 21:12:23.509109   73707 logs.go:276] 1 containers: [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8]
	I0930 21:12:23.509169   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.513362   73707 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:23.513441   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:23.553711   73707 cri.go:89] found id: ""
	I0930 21:12:23.553738   73707 logs.go:276] 0 containers: []
	W0930 21:12:23.553746   73707 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:23.553752   73707 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:23.553797   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:23.599596   73707 cri.go:89] found id: "3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:23.599629   73707 cri.go:89] found id: "1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:23.599635   73707 cri.go:89] found id: ""
	I0930 21:12:23.599644   73707 logs.go:276] 2 containers: [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342]
	I0930 21:12:23.599699   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.603589   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.607827   73707 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:23.607855   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:23.621046   73707 logs.go:123] Gathering logs for etcd [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711] ...
	I0930 21:12:23.621069   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:23.664703   73707 logs.go:123] Gathering logs for storage-provisioner [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd] ...
	I0930 21:12:23.664735   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:23.700614   73707 logs.go:123] Gathering logs for kube-scheduler [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4] ...
	I0930 21:12:23.700644   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:23.738113   73707 logs.go:123] Gathering logs for kube-proxy [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8] ...
	I0930 21:12:23.738143   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:23.775706   73707 logs.go:123] Gathering logs for kube-controller-manager [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8] ...
	I0930 21:12:23.775733   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:23.840419   73707 logs.go:123] Gathering logs for storage-provisioner [1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342] ...
	I0930 21:12:23.840454   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:23.876827   73707 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:23.876860   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:23.943636   73707 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:23.943675   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:24.052729   73707 logs.go:123] Gathering logs for kube-apiserver [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140] ...
	I0930 21:12:24.052763   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:24.106526   73707 logs.go:123] Gathering logs for coredns [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49] ...
	I0930 21:12:24.106556   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:24.146914   73707 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:24.146941   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:24.527753   73707 logs.go:123] Gathering logs for container status ...
	I0930 21:12:24.527804   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:27.077689   73707 system_pods.go:59] 8 kube-system pods found
	I0930 21:12:27.077721   73707 system_pods.go:61] "coredns-7c65d6cfc9-hdjjq" [5672cd58-4d3f-409e-b279-f4027fe09aea] Running
	I0930 21:12:27.077726   73707 system_pods.go:61] "etcd-default-k8s-diff-port-291511" [228b61a2-a110-4029-96e5-950e44f5290f] Running
	I0930 21:12:27.077731   73707 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-291511" [a6991ee1-6c61-49b5-adb5-fb6175386bfe] Running
	I0930 21:12:27.077739   73707 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-291511" [4ba3f2a2-ac38-4483-bbd0-f21d934d97d1] Running
	I0930 21:12:27.077744   73707 system_pods.go:61] "kube-proxy-kwp22" [87e5295f-3aaa-4222-a61a-942354f79f9b] Running
	I0930 21:12:27.077749   73707 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-291511" [b03fc09c-ddee-4593-9be5-8117892932f5] Running
	I0930 21:12:27.077759   73707 system_pods.go:61] "metrics-server-6867b74b74-txb2j" [6f0ec8d2-5528-4f70-807c-42cbabae23bb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:12:27.077766   73707 system_pods.go:61] "storage-provisioner" [32053345-1ff9-45b1-aa70-e746926b305d] Running
	I0930 21:12:27.077774   73707 system_pods.go:74] duration metric: took 3.835107861s to wait for pod list to return data ...
	I0930 21:12:27.077783   73707 default_sa.go:34] waiting for default service account to be created ...
	I0930 21:12:27.082269   73707 default_sa.go:45] found service account: "default"
	I0930 21:12:27.082292   73707 default_sa.go:55] duration metric: took 4.502111ms for default service account to be created ...
	I0930 21:12:27.082299   73707 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 21:12:27.086738   73707 system_pods.go:86] 8 kube-system pods found
	I0930 21:12:27.086764   73707 system_pods.go:89] "coredns-7c65d6cfc9-hdjjq" [5672cd58-4d3f-409e-b279-f4027fe09aea] Running
	I0930 21:12:27.086770   73707 system_pods.go:89] "etcd-default-k8s-diff-port-291511" [228b61a2-a110-4029-96e5-950e44f5290f] Running
	I0930 21:12:27.086775   73707 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-291511" [a6991ee1-6c61-49b5-adb5-fb6175386bfe] Running
	I0930 21:12:27.086781   73707 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-291511" [4ba3f2a2-ac38-4483-bbd0-f21d934d97d1] Running
	I0930 21:12:27.086784   73707 system_pods.go:89] "kube-proxy-kwp22" [87e5295f-3aaa-4222-a61a-942354f79f9b] Running
	I0930 21:12:27.086788   73707 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-291511" [b03fc09c-ddee-4593-9be5-8117892932f5] Running
	I0930 21:12:27.086796   73707 system_pods.go:89] "metrics-server-6867b74b74-txb2j" [6f0ec8d2-5528-4f70-807c-42cbabae23bb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:12:27.086803   73707 system_pods.go:89] "storage-provisioner" [32053345-1ff9-45b1-aa70-e746926b305d] Running
	I0930 21:12:27.086811   73707 system_pods.go:126] duration metric: took 4.506701ms to wait for k8s-apps to be running ...
	I0930 21:12:27.086820   73707 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 21:12:27.086868   73707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:12:27.102286   73707 system_svc.go:56] duration metric: took 15.455734ms WaitForService to wait for kubelet
	I0930 21:12:27.102325   73707 kubeadm.go:582] duration metric: took 4m21.818162682s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:12:27.102346   73707 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:12:27.105332   73707 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:12:27.105354   73707 node_conditions.go:123] node cpu capacity is 2
	I0930 21:12:27.105364   73707 node_conditions.go:105] duration metric: took 3.013328ms to run NodePressure ...
	I0930 21:12:27.105375   73707 start.go:241] waiting for startup goroutines ...
	I0930 21:12:27.105382   73707 start.go:246] waiting for cluster config update ...
	I0930 21:12:27.105393   73707 start.go:255] writing updated cluster config ...
	I0930 21:12:27.105669   73707 ssh_runner.go:195] Run: rm -f paused
	I0930 21:12:27.156804   73707 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 21:12:27.158887   73707 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-291511" cluster and "default" namespace by default
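Editor's note: the wait loop that precedes this "Done!" line (pod list, default service account, kubelet service, NodePressure) can be spot-checked from any workstation with client-go. The sketch below is not the harness's own code; it simply lists kube-system pod phases for the freshly started cluster, and the kubeconfig path is an assumption (the job keeps its own copy under the minikube-integration workspace).

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the CI job writes its own copy elsewhere.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// Prints e.g. "metrics-server-6867b74b74-txb2j  Pending", matching the wait above.
		fmt.Printf("%-55s %s\n", p.Name, p.Status.Phase)
	}
}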
	I0930 21:12:23.336604   73900 out.go:235]   - Booting up control plane ...
	I0930 21:12:23.336747   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 21:12:23.345737   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 21:12:23.346784   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 21:12:23.347559   73900 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 21:12:23.351009   73900 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0930 21:12:25.568654   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:27.569042   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:29.570978   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:32.069065   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:34.069347   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:36.568228   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:38.569351   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:40.569552   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:43.069456   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:45.569254   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:47.569647   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:49.569997   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:52.069284   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:54.069870   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:54.563572   73256 pod_ready.go:82] duration metric: took 4m0.000782781s for pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace to be "Ready" ...
	E0930 21:12:54.563605   73256 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0930 21:12:54.563620   73256 pod_ready.go:39] duration metric: took 4m9.49309261s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:12:54.563643   73256 kubeadm.go:597] duration metric: took 4m18.399318281s to restartPrimaryControlPlane
	W0930 21:12:54.563698   73256 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0930 21:12:54.563721   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0930 21:13:03.351822   73900 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0930 21:13:03.352632   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:03.352833   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:13:08.353230   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:08.353429   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
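Editor's note: the [kubelet-check] lines above show kubeadm polling the kubelet's local health endpoint on port 10248, the same request as the quoted curl. A minimal Go sketch of that probe, run on the node itself (an illustration, not part of the test harness):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Same endpoint kubeadm's [kubelet-check] polls on the node.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// A dead kubelet yields "connection refused", as in the log above.
		fmt.Println("kubelet not healthy:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // a healthy kubelet returns 200 "ok"
}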
	I0930 21:13:20.634441   73256 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.070691776s)
	I0930 21:13:20.634529   73256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:13:20.650312   73256 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:13:20.661782   73256 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:13:20.671436   73256 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:13:20.671463   73256 kubeadm.go:157] found existing configuration files:
	
	I0930 21:13:20.671504   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:13:20.681860   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:13:20.681934   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:13:20.692529   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:13:20.701507   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:13:20.701585   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:13:20.711211   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:13:20.721856   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:13:20.721928   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:13:20.733194   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:13:20.743887   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:13:20.743955   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
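Editor's note: the grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it still points at https://control-plane.minikube.internal:8443; otherwise it is removed so kubeadm init regenerates it. A rough Go sketch of that pattern (an illustration under that reading of the log, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, c := range confs {
		data, err := os.ReadFile(c)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or stale (in this run the files are already gone after
			// `kubeadm reset`): remove so `kubeadm init` writes a fresh one.
			_ = os.Remove(c)
			fmt.Println("cleaned", c)
		}
	}
}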
	I0930 21:13:20.753546   73256 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 21:13:20.799739   73256 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 21:13:20.799812   73256 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 21:13:20.906464   73256 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 21:13:20.906569   73256 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 21:13:20.906647   73256 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 21:13:20.919451   73256 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 21:13:20.921440   73256 out.go:235]   - Generating certificates and keys ...
	I0930 21:13:20.921550   73256 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 21:13:20.921645   73256 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 21:13:20.921758   73256 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 21:13:20.921845   73256 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 21:13:20.921945   73256 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 21:13:20.922021   73256 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 21:13:20.922117   73256 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 21:13:20.922190   73256 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 21:13:20.922262   73256 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 21:13:20.922336   73256 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 21:13:20.922370   73256 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 21:13:20.922459   73256 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 21:13:21.079731   73256 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 21:13:21.214199   73256 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 21:13:21.344405   73256 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 21:13:21.605006   73256 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 21:13:21.718432   73256 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 21:13:21.718967   73256 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 21:13:21.723434   73256 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 21:13:18.354150   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:18.354468   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:13:21.725304   73256 out.go:235]   - Booting up control plane ...
	I0930 21:13:21.725435   73256 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 21:13:21.725526   73256 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 21:13:21.725637   73256 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 21:13:21.743582   73256 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 21:13:21.749533   73256 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 21:13:21.749605   73256 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 21:13:21.873716   73256 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 21:13:21.873867   73256 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 21:13:22.375977   73256 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.402537ms
	I0930 21:13:22.376098   73256 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 21:13:27.379510   73256 kubeadm.go:310] [api-check] The API server is healthy after 5.001265494s
	I0930 21:13:27.392047   73256 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 21:13:27.409550   73256 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 21:13:27.447693   73256 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 21:13:27.447896   73256 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-256103 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 21:13:27.462338   73256 kubeadm.go:310] [bootstrap-token] Using token: k5ffj3.6sqmy7prwrlhrg7s
	I0930 21:13:27.463967   73256 out.go:235]   - Configuring RBAC rules ...
	I0930 21:13:27.464076   73256 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 21:13:27.472107   73256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 21:13:27.481172   73256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 21:13:27.485288   73256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 21:13:27.492469   73256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 21:13:27.496822   73256 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 21:13:27.789372   73256 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 21:13:28.210679   73256 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 21:13:28.784869   73256 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 21:13:28.785859   73256 kubeadm.go:310] 
	I0930 21:13:28.785954   73256 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 21:13:28.785967   73256 kubeadm.go:310] 
	I0930 21:13:28.786045   73256 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 21:13:28.786077   73256 kubeadm.go:310] 
	I0930 21:13:28.786121   73256 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 21:13:28.786219   73256 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 21:13:28.786286   73256 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 21:13:28.786304   73256 kubeadm.go:310] 
	I0930 21:13:28.786395   73256 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 21:13:28.786405   73256 kubeadm.go:310] 
	I0930 21:13:28.786464   73256 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 21:13:28.786474   73256 kubeadm.go:310] 
	I0930 21:13:28.786546   73256 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 21:13:28.786658   73256 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 21:13:28.786754   73256 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 21:13:28.786763   73256 kubeadm.go:310] 
	I0930 21:13:28.786870   73256 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 21:13:28.786991   73256 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 21:13:28.787000   73256 kubeadm.go:310] 
	I0930 21:13:28.787122   73256 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k5ffj3.6sqmy7prwrlhrg7s \
	I0930 21:13:28.787240   73256 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a \
	I0930 21:13:28.787274   73256 kubeadm.go:310] 	--control-plane 
	I0930 21:13:28.787290   73256 kubeadm.go:310] 
	I0930 21:13:28.787415   73256 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 21:13:28.787425   73256 kubeadm.go:310] 
	I0930 21:13:28.787547   73256 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k5ffj3.6sqmy7prwrlhrg7s \
	I0930 21:13:28.787713   73256 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a 
	I0930 21:13:28.788805   73256 kubeadm.go:310] W0930 21:13:20.776526    2530 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 21:13:28.789058   73256 kubeadm.go:310] W0930 21:13:20.777323    2530 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 21:13:28.789158   73256 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 21:13:28.789178   73256 cni.go:84] Creating CNI manager for ""
	I0930 21:13:28.789187   73256 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:13:28.791049   73256 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 21:13:28.792381   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:13:28.802872   73256 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
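Editor's note: the 496-byte 1-k8s.conflist written here is not reproduced in the log, so its exact contents are unknown. The sketch below only illustrates the general shape of a bridge + host-local CNI conflist like the one this step installs; the subnet and plugin options are placeholders, not values taken from this run.

package main

import "fmt"

// Generic bridge CNI conflist of the kind such a step installs; NOT the
// actual contents of minikube's /etc/cni/net.d/1-k8s.conflist.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Print it; writing it under /etc/cni/net.d/ (root required) is what lets
	// CRI-O pick up the bridge network.
	fmt.Println(conflist)
}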
	I0930 21:13:28.819952   73256 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 21:13:28.820054   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:28.820070   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-256103 minikube.k8s.io/updated_at=2024_09_30T21_13_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022 minikube.k8s.io/name=embed-certs-256103 minikube.k8s.io/primary=true
	I0930 21:13:28.859770   73256 ops.go:34] apiserver oom_adj: -16
	I0930 21:13:29.026274   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:29.526992   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:30.026700   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:30.526962   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:31.027165   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:31.526632   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:32.027019   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:32.526522   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:33.026739   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:33.116028   73256 kubeadm.go:1113] duration metric: took 4.296036786s to wait for elevateKubeSystemPrivileges
	I0930 21:13:33.116067   73256 kubeadm.go:394] duration metric: took 4m57.005787187s to StartCluster
	I0930 21:13:33.116088   73256 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:13:33.116175   73256 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:13:33.117855   73256 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:13:33.118142   73256 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 21:13:33.118263   73256 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 21:13:33.118420   73256 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-256103"
	I0930 21:13:33.118373   73256 config.go:182] Loaded profile config "embed-certs-256103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:13:33.118446   73256 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-256103"
	I0930 21:13:33.118442   73256 addons.go:69] Setting default-storageclass=true in profile "embed-certs-256103"
	W0930 21:13:33.118453   73256 addons.go:243] addon storage-provisioner should already be in state true
	I0930 21:13:33.118464   73256 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-256103"
	I0930 21:13:33.118482   73256 host.go:66] Checking if "embed-certs-256103" exists ...
	I0930 21:13:33.118515   73256 addons.go:69] Setting metrics-server=true in profile "embed-certs-256103"
	I0930 21:13:33.118554   73256 addons.go:234] Setting addon metrics-server=true in "embed-certs-256103"
	W0930 21:13:33.118564   73256 addons.go:243] addon metrics-server should already be in state true
	I0930 21:13:33.118594   73256 host.go:66] Checking if "embed-certs-256103" exists ...
	I0930 21:13:33.118807   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.118840   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.118880   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.118926   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.118941   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.118965   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.120042   73256 out.go:177] * Verifying Kubernetes components...
	I0930 21:13:33.121706   73256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:13:33.136554   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36203
	I0930 21:13:33.137096   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.137304   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44465
	I0930 21:13:33.137664   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.137696   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.137789   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.138013   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.138176   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:13:33.138317   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.138336   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.139163   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37389
	I0930 21:13:33.139176   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.139733   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.139903   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.139955   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.140284   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.140311   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.140780   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.141336   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.141375   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.141814   73256 addons.go:234] Setting addon default-storageclass=true in "embed-certs-256103"
	W0930 21:13:33.141832   73256 addons.go:243] addon default-storageclass should already be in state true
	I0930 21:13:33.141857   73256 host.go:66] Checking if "embed-certs-256103" exists ...
	I0930 21:13:33.142143   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.142177   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.161937   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I0930 21:13:33.162096   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33657
	I0930 21:13:33.162249   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42531
	I0930 21:13:33.162491   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.162536   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.162837   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.163017   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.163028   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.163030   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.163045   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.163254   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.163265   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.163362   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.163417   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.163864   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.163899   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.164101   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.164154   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:13:33.164356   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:13:33.166460   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:13:33.166673   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:13:33.168464   73256 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:13:33.168631   73256 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0930 21:13:33.169822   73256 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:13:33.169840   73256 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 21:13:33.169857   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:13:33.169937   73256 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 21:13:33.169947   73256 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 21:13:33.169963   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:13:33.174613   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.174653   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.175236   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:13:33.175265   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.175372   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:13:33.175405   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.175667   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:13:33.176048   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:13:33.176051   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:13:33.176299   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:13:33.176299   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:13:33.176476   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:13:33.176684   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:13:33.176685   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:13:33.180520   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43015
	I0930 21:13:33.180968   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.181564   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.181588   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.181938   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.182136   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:13:33.183803   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:13:33.184001   73256 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 21:13:33.184017   73256 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 21:13:33.184035   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:13:33.186565   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.186964   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:13:33.186996   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.187311   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:13:33.187481   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:13:33.187797   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:13:33.187937   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:13:33.337289   73256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:13:33.360186   73256 node_ready.go:35] waiting up to 6m0s for node "embed-certs-256103" to be "Ready" ...
	I0930 21:13:33.372799   73256 node_ready.go:49] node "embed-certs-256103" has status "Ready":"True"
	I0930 21:13:33.372828   73256 node_ready.go:38] duration metric: took 12.601736ms for node "embed-certs-256103" to be "Ready" ...
	I0930 21:13:33.372837   73256 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:13:33.379694   73256 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:33.462144   73256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:13:33.500072   73256 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 21:13:33.500102   73256 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0930 21:13:33.524789   73256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 21:13:33.548931   73256 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 21:13:33.548955   73256 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 21:13:33.604655   73256 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:13:33.604682   73256 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 21:13:33.648687   73256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:13:34.533493   73256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.008666954s)
	I0930 21:13:34.533555   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.533566   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.533856   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.533870   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.533884   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.533892   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.533900   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.534108   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.534126   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.534149   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.535651   73256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.073475648s)
	I0930 21:13:34.535695   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.535706   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.535926   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.536001   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.536014   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.536030   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.535981   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.537450   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.537470   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.537480   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.564363   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.564394   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.564715   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.564739   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.968266   73256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.319532564s)
	I0930 21:13:34.968330   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.968350   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.968642   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.968665   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.968674   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.968673   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.968681   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.968944   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.968969   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.968973   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.968979   73256 addons.go:475] Verifying addon metrics-server=true in "embed-certs-256103"
	I0930 21:13:34.970656   73256 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0930 21:13:34.971966   73256 addons.go:510] duration metric: took 1.853709741s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
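Editor's note: metrics-server is the pod that never reaches Ready later in this run. Outside the harness, one quick way to see why is to read the Deployment's status conditions with client-go. This is only a sketch: the kubeconfig path and the deployment name "metrics-server" are assumptions inferred from the pod names above, not confirmed by the log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the CI job uses its own workspace copy.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	dep, err := cs.AppsV1().Deployments("kube-system").Get(context.TODO(), "metrics-server", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Conditions such as Available=False usually carry the reason the rollout stalls.
	for _, c := range dep.Status.Conditions {
		fmt.Printf("%s=%s: %s\n", c.Type, c.Status, c.Message)
	}
}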
	I0930 21:13:35.387687   73256 pod_ready.go:103] pod "etcd-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:13:37.388374   73256 pod_ready.go:103] pod "etcd-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:13:39.886425   73256 pod_ready.go:103] pod "etcd-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:13:41.885713   73256 pod_ready.go:93] pod "etcd-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.885737   73256 pod_ready.go:82] duration metric: took 8.506004979s for pod "etcd-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.885746   73256 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.891032   73256 pod_ready.go:93] pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.891052   73256 pod_ready.go:82] duration metric: took 5.300379ms for pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.891061   73256 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.895332   73256 pod_ready.go:93] pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.895349   73256 pod_ready.go:82] duration metric: took 4.282199ms for pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.895357   73256 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-glbsg" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.899518   73256 pod_ready.go:93] pod "kube-proxy-glbsg" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.899556   73256 pod_ready.go:82] duration metric: took 4.191815ms for pod "kube-proxy-glbsg" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.899567   73256 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.904184   73256 pod_ready.go:93] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.904203   73256 pod_ready.go:82] duration metric: took 4.628533ms for pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.904209   73256 pod_ready.go:39] duration metric: took 8.531361398s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:13:41.904221   73256 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:13:41.904262   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:13:41.919570   73256 api_server.go:72] duration metric: took 8.801387692s to wait for apiserver process to appear ...
	I0930 21:13:41.919591   73256 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:13:41.919607   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:13:41.923810   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 200:
	ok
	I0930 21:13:41.924633   73256 api_server.go:141] control plane version: v1.31.1
	I0930 21:13:41.924651   73256 api_server.go:131] duration metric: took 5.054857ms to wait for apiserver health ...
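Editor's note: the healthz probe logged just above can be reproduced with a plain HTTPS GET. The sketch below skips certificate verification to stay self-contained (the harness trusts the cluster CA instead) and assumes anonymous access to /healthz is allowed, which is the Kubernetes default via the system:public-info-viewer binding; the address is taken from the log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify keeps the sketch self-contained; prefer the cluster CA in real use.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.90:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect 200 and "ok", as in the log
}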
	I0930 21:13:41.924659   73256 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:13:42.086431   73256 system_pods.go:59] 9 kube-system pods found
	I0930 21:13:42.086468   73256 system_pods.go:61] "coredns-7c65d6cfc9-gt5tt" [165faaf0-866c-4097-9bdb-ed58fe8d7395] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:13:42.086480   73256 system_pods.go:61] "coredns-7c65d6cfc9-sgsbn" [c97fdb50-c6a0-4ef8-8c01-ea45ed18b72a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:13:42.086488   73256 system_pods.go:61] "etcd-embed-certs-256103" [6aac0706-7dbd-4655-b261-68877299d81a] Running
	I0930 21:13:42.086494   73256 system_pods.go:61] "kube-apiserver-embed-certs-256103" [6c8e3157-ec97-4a85-8947-ca7541c19b1c] Running
	I0930 21:13:42.086500   73256 system_pods.go:61] "kube-controller-manager-embed-certs-256103" [1e3f76d1-d343-4127-aad9-8a5a8e589a43] Running
	I0930 21:13:42.086505   73256 system_pods.go:61] "kube-proxy-glbsg" [f68e378f-ce0f-4603-bd8e-93334f04f7a7] Running
	I0930 21:13:42.086510   73256 system_pods.go:61] "kube-scheduler-embed-certs-256103" [29f55c6f-9603-4cd2-a798-0ff2362b7607] Running
	I0930 21:13:42.086518   73256 system_pods.go:61] "metrics-server-6867b74b74-5mhkh" [470424ec-bb66-4d62-904d-0d4ad93fa5bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:13:42.086525   73256 system_pods.go:61] "storage-provisioner" [a07a5a12-7420-4b57-b79d-982f4bb48232] Running
	I0930 21:13:42.086538   73256 system_pods.go:74] duration metric: took 161.870121ms to wait for pod list to return data ...
	I0930 21:13:42.086559   73256 default_sa.go:34] waiting for default service account to be created ...
	I0930 21:13:42.284282   73256 default_sa.go:45] found service account: "default"
	I0930 21:13:42.284307   73256 default_sa.go:55] duration metric: took 197.73827ms for default service account to be created ...
	I0930 21:13:42.284316   73256 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 21:13:42.486445   73256 system_pods.go:86] 9 kube-system pods found
	I0930 21:13:42.486478   73256 system_pods.go:89] "coredns-7c65d6cfc9-gt5tt" [165faaf0-866c-4097-9bdb-ed58fe8d7395] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:13:42.486489   73256 system_pods.go:89] "coredns-7c65d6cfc9-sgsbn" [c97fdb50-c6a0-4ef8-8c01-ea45ed18b72a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:13:42.486497   73256 system_pods.go:89] "etcd-embed-certs-256103" [6aac0706-7dbd-4655-b261-68877299d81a] Running
	I0930 21:13:42.486503   73256 system_pods.go:89] "kube-apiserver-embed-certs-256103" [6c8e3157-ec97-4a85-8947-ca7541c19b1c] Running
	I0930 21:13:42.486509   73256 system_pods.go:89] "kube-controller-manager-embed-certs-256103" [1e3f76d1-d343-4127-aad9-8a5a8e589a43] Running
	I0930 21:13:42.486513   73256 system_pods.go:89] "kube-proxy-glbsg" [f68e378f-ce0f-4603-bd8e-93334f04f7a7] Running
	I0930 21:13:42.486518   73256 system_pods.go:89] "kube-scheduler-embed-certs-256103" [29f55c6f-9603-4cd2-a798-0ff2362b7607] Running
	I0930 21:13:42.486526   73256 system_pods.go:89] "metrics-server-6867b74b74-5mhkh" [470424ec-bb66-4d62-904d-0d4ad93fa5bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:13:42.486533   73256 system_pods.go:89] "storage-provisioner" [a07a5a12-7420-4b57-b79d-982f4bb48232] Running
	I0930 21:13:42.486542   73256 system_pods.go:126] duration metric: took 202.220435ms to wait for k8s-apps to be running ...
	I0930 21:13:42.486552   73256 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 21:13:42.486601   73256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:13:42.501286   73256 system_svc.go:56] duration metric: took 14.699273ms WaitForService to wait for kubelet
	I0930 21:13:42.501315   73256 kubeadm.go:582] duration metric: took 9.38313627s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:13:42.501332   73256 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:13:42.685282   73256 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:13:42.685314   73256 node_conditions.go:123] node cpu capacity is 2
	I0930 21:13:42.685326   73256 node_conditions.go:105] duration metric: took 183.989963ms to run NodePressure ...
	I0930 21:13:42.685346   73256 start.go:241] waiting for startup goroutines ...
	I0930 21:13:42.685356   73256 start.go:246] waiting for cluster config update ...
	I0930 21:13:42.685371   73256 start.go:255] writing updated cluster config ...
	I0930 21:13:42.685664   73256 ssh_runner.go:195] Run: rm -f paused
	I0930 21:13:42.734778   73256 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 21:13:42.736658   73256 out.go:177] * Done! kubectl is now configured to use "embed-certs-256103" cluster and "default" namespace by default
	I0930 21:13:38.355123   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:38.355330   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:14:18.357098   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:14:18.357396   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:14:18.357419   73900 kubeadm.go:310] 
	I0930 21:14:18.357473   73900 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0930 21:14:18.357541   73900 kubeadm.go:310] 		timed out waiting for the condition
	I0930 21:14:18.357554   73900 kubeadm.go:310] 
	I0930 21:14:18.357609   73900 kubeadm.go:310] 	This error is likely caused by:
	I0930 21:14:18.357659   73900 kubeadm.go:310] 		- The kubelet is not running
	I0930 21:14:18.357801   73900 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0930 21:14:18.357817   73900 kubeadm.go:310] 
	I0930 21:14:18.357964   73900 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0930 21:14:18.357996   73900 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0930 21:14:18.358028   73900 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0930 21:14:18.358039   73900 kubeadm.go:310] 
	I0930 21:14:18.358174   73900 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0930 21:14:18.358318   73900 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0930 21:14:18.358331   73900 kubeadm.go:310] 
	I0930 21:14:18.358510   73900 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0930 21:14:18.358646   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0930 21:14:18.358764   73900 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0930 21:14:18.358866   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0930 21:14:18.358882   73900 kubeadm.go:310] 
	I0930 21:14:18.359454   73900 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 21:14:18.359595   73900 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0930 21:14:18.359681   73900 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0930 21:14:18.359797   73900 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0930 21:14:18.359841   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0930 21:14:18.820244   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:14:18.834938   73900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:14:18.844779   73900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:14:18.844803   73900 kubeadm.go:157] found existing configuration files:
	
	I0930 21:14:18.844856   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:14:18.853738   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:14:18.853811   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:14:18.863366   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:14:18.872108   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:14:18.872164   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:14:18.881818   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:14:18.890916   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:14:18.890969   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:14:18.900075   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:14:18.908449   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:14:18.908520   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
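	(The lines above show minikube's stale-config cleanup before it retries kubeadm init: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if the check fails. A rough shell equivalent of that check-and-remove sequence, using the same endpoint and paths that appear in the log, would be:
	
		# sketch only; endpoint and file names taken from the log lines above
		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
		    sudo rm -f "/etc/kubernetes/$f"
		  fi
		done
	)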
	I0930 21:14:18.917163   73900 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 21:14:18.983181   73900 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0930 21:14:18.983233   73900 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 21:14:19.121356   73900 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 21:14:19.121545   73900 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 21:14:19.121674   73900 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0930 21:14:19.306639   73900 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 21:14:19.309593   73900 out.go:235]   - Generating certificates and keys ...
	I0930 21:14:19.309683   73900 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 21:14:19.309748   73900 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 21:14:19.309870   73900 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 21:14:19.309957   73900 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 21:14:19.310040   73900 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 21:14:19.310119   73900 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 21:14:19.310209   73900 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 21:14:19.310292   73900 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 21:14:19.310404   73900 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 21:14:19.310511   73900 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 21:14:19.310567   73900 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 21:14:19.310654   73900 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 21:14:19.453872   73900 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 21:14:19.621232   73900 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 21:14:19.797694   73900 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 21:14:19.886897   73900 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 21:14:19.909016   73900 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 21:14:19.910536   73900 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 21:14:19.910617   73900 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 21:14:20.052878   73900 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 21:14:20.054739   73900 out.go:235]   - Booting up control plane ...
	I0930 21:14:20.054881   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 21:14:20.068419   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 21:14:20.068512   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 21:14:20.068697   73900 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 21:14:20.072015   73900 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0930 21:15:00.073988   73900 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0930 21:15:00.074795   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:00.075068   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:15:05.075810   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:05.076061   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:15:15.076695   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:15.076928   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:15:35.077652   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:35.077862   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:16:15.076816   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:16:15.077063   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:16:15.077082   73900 kubeadm.go:310] 
	I0930 21:16:15.077136   73900 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0930 21:16:15.077188   73900 kubeadm.go:310] 		timed out waiting for the condition
	I0930 21:16:15.077198   73900 kubeadm.go:310] 
	I0930 21:16:15.077246   73900 kubeadm.go:310] 	This error is likely caused by:
	I0930 21:16:15.077298   73900 kubeadm.go:310] 		- The kubelet is not running
	I0930 21:16:15.077425   73900 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0930 21:16:15.077442   73900 kubeadm.go:310] 
	I0930 21:16:15.077605   73900 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0930 21:16:15.077651   73900 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0930 21:16:15.077710   73900 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0930 21:16:15.077718   73900 kubeadm.go:310] 
	I0930 21:16:15.077851   73900 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0930 21:16:15.077997   73900 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0930 21:16:15.078013   73900 kubeadm.go:310] 
	I0930 21:16:15.078143   73900 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0930 21:16:15.078229   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0930 21:16:15.078309   73900 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0930 21:16:15.078419   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0930 21:16:15.078431   73900 kubeadm.go:310] 
	I0930 21:16:15.079235   73900 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 21:16:15.079365   73900 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0930 21:16:15.079442   73900 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0930 21:16:15.079572   73900 kubeadm.go:394] duration metric: took 7m56.529269567s to StartCluster
	I0930 21:16:15.079639   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:16:15.079713   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:16:15.122057   73900 cri.go:89] found id: ""
	I0930 21:16:15.122086   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.122098   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:16:15.122105   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:16:15.122166   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:16:15.156244   73900 cri.go:89] found id: ""
	I0930 21:16:15.156278   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.156289   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:16:15.156297   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:16:15.156357   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:16:15.188952   73900 cri.go:89] found id: ""
	I0930 21:16:15.188977   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.188989   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:16:15.188996   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:16:15.189058   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:16:15.219400   73900 cri.go:89] found id: ""
	I0930 21:16:15.219427   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.219435   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:16:15.219441   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:16:15.219501   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:16:15.252049   73900 cri.go:89] found id: ""
	I0930 21:16:15.252078   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.252086   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:16:15.252093   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:16:15.252150   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:16:15.286560   73900 cri.go:89] found id: ""
	I0930 21:16:15.286594   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.286605   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:16:15.286614   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:16:15.286679   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:16:15.319140   73900 cri.go:89] found id: ""
	I0930 21:16:15.319178   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.319187   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:16:15.319192   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:16:15.319245   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:16:15.351299   73900 cri.go:89] found id: ""
	I0930 21:16:15.351322   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.351330   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:16:15.351339   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:16:15.351350   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:16:15.402837   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:16:15.402882   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:16:15.417111   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:16:15.417140   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:16:15.492593   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:16:15.492614   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:16:15.492627   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:16:15.621646   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:16:15.621681   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
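	(For reference, the log-gathering step above ran the following diagnostic commands on the node; they can be re-run manually over SSH to the minikube VM to reproduce the same output:
	
		sudo journalctl -u kubelet -n 400
		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
		sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
		sudo journalctl -u crio -n 400
		sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	)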
	W0930 21:16:15.660480   73900 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0930 21:16:15.660528   73900 out.go:270] * 
	W0930 21:16:15.660580   73900 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0930 21:16:15.660595   73900 out.go:270] * 
	W0930 21:16:15.661387   73900 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 21:16:15.665510   73900 out.go:201] 
	W0930 21:16:15.667332   73900 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0930 21:16:15.667373   73900 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0930 21:16:15.667390   73900 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0930 21:16:15.668812   73900 out.go:201] 
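	(The kubeadm output above repeatedly fails the kubelet healthz check on 127.0.0.1:10248, and minikube's suggestion points at a kubelet cgroup-driver mismatch as the likely cause. A minimal manual troubleshooting pass on the node, assuming SSH access to the VM and using only the commands and flag the log itself suggests, might look like:
	
		# is the kubelet up, and why did it exit?
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet | tail -n 50
	
		# does the kubelet answer its health endpoint?
		curl -sSL http://localhost:10248/healthz
	
		# did CRI-O manage to start any control-plane containers?
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	
		# if the kubelet logs show a cgroup-driver mismatch, retry with the flag the log suggests
		# (any other flags from the original minikube start invocation would need to be kept)
		minikube start --extra-config=kubelet.cgroup-driver=systemd
	)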
	
	
	==> CRI-O <==
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.842422849Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731364842402349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ad1d4d0-e489-4e1a-8e23-2efc022dca52 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.842916839Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cdaf8b80-105d-420f-8eeb-5fcc42fb9991 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.842964105Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cdaf8b80-105d-420f-8eeb-5fcc42fb9991 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.843468632Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d60ed05d46e64cca1db86f00ccacb45d5a95bb26b27d30f7aca439b8cc1cf701,PodSandboxId:d9540a05389856c5ab80763ded59faa352e8d4ff1a56f9942d299d7d9a60b1c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727730815045416311,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a07a5a12-7420-4b57-b79d-982f4bb48232,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd980ef64f5ee55e937c5a15c5227d17d60838f77fa47ac594729f27a9fd8d7,PodSandboxId:9dda41bfa3440fa3236f74a67cf60d09f954cf82d0411255da69f2d0ed0fda2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730814497122204,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gt5tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165faaf0-866c-4097-9bdb-ed58fe8d7395,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:230b9e029d92388fe72b759827e782e4da254c9ace35ca3d3e86be33515cc837,PodSandboxId:17ab0462720101799c02aa044ce3ba13798e980661c2333061d221355749afeb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730814424548516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sgsbn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
97fdb50-c6a0-4ef8-8c01-ea45ed18b72a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b79ac99620cffe88eed23aa8ba0c4f0efba98458aa23a19a8def96edb1a7631f,PodSandboxId:8104984489a3da34604fa4aed4c224abe1ee3d1b218ba5ce5367b3352fbc7b52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727730813952552871,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-glbsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f68e378f-ce0f-4603-bd8e-93334f04f7a7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499a029ecee201160037c5b7802545475ebf57529e8e9145d39aab98a685b790,PodSandboxId:1064ddbe5f838121ecf09f4533a68bd2e9fe23ddd8e1f6e8f50f2c158a18dd5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727730803002119690
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60cb914f0d7e2bbaf31e86346736a6dd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0566d21c749204134a258e8d8ac79e812d7fedb46e3c443b4403df983b45074e,PodSandboxId:6f4729ac569b3abc1e02350ad9d2c41ce5359cbeb2774c905243e1ed0d277402,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727730802979
265963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 405f938f252475a964680a5d44e32173,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e37fa68f2a1951969df50ca55fe27f8a723f04cebab7a4758236d5733c0760cf,PodSandboxId:f0ad3931b0ae76b62980f7e56571ac517f34d9d5b713ab6942a306b61c3a26d7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727730802943041907,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e026db1de1b360d400383807119e0f42,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47e92ecb8a0c3aac853211a7abd5c609e2bb75bd75908851c0c3713a3b66f3d0,PodSandboxId:93f9864dd86bff6d1c24e45c20a6ad995151ba9050eb36db50b15a6f7536fff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727730802901379791,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66878a53ff8e421affd026377e49581a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c849c74a929594dad8efc1ce428cad3f9973013c4d91759cdfce50a0da6b92,PodSandboxId:e648124d4d705c3ed22d1e53880b27aa172b6d6f3b701aaf40d04875aad07cbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727730519535964178,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e026db1de1b360d400383807119e0f42,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cdaf8b80-105d-420f-8eeb-5fcc42fb9991 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.878758173Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3022c9be-bad6-4fd8-ba45-88964bb34c34 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.879050260Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3022c9be-bad6-4fd8-ba45-88964bb34c34 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.880132803Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=43a84aa6-1326-4cb0-964e-b42a0026fcb5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.881023158Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731364880570172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43a84aa6-1326-4cb0-964e-b42a0026fcb5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.881412088Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e25470a-6032-4865-b9fe-dd9a5a0a338b name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.881480002Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e25470a-6032-4865-b9fe-dd9a5a0a338b name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.881668998Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d60ed05d46e64cca1db86f00ccacb45d5a95bb26b27d30f7aca439b8cc1cf701,PodSandboxId:d9540a05389856c5ab80763ded59faa352e8d4ff1a56f9942d299d7d9a60b1c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727730815045416311,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a07a5a12-7420-4b57-b79d-982f4bb48232,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd980ef64f5ee55e937c5a15c5227d17d60838f77fa47ac594729f27a9fd8d7,PodSandboxId:9dda41bfa3440fa3236f74a67cf60d09f954cf82d0411255da69f2d0ed0fda2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730814497122204,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gt5tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165faaf0-866c-4097-9bdb-ed58fe8d7395,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:230b9e029d92388fe72b759827e782e4da254c9ace35ca3d3e86be33515cc837,PodSandboxId:17ab0462720101799c02aa044ce3ba13798e980661c2333061d221355749afeb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730814424548516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sgsbn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
97fdb50-c6a0-4ef8-8c01-ea45ed18b72a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b79ac99620cffe88eed23aa8ba0c4f0efba98458aa23a19a8def96edb1a7631f,PodSandboxId:8104984489a3da34604fa4aed4c224abe1ee3d1b218ba5ce5367b3352fbc7b52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727730813952552871,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-glbsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f68e378f-ce0f-4603-bd8e-93334f04f7a7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499a029ecee201160037c5b7802545475ebf57529e8e9145d39aab98a685b790,PodSandboxId:1064ddbe5f838121ecf09f4533a68bd2e9fe23ddd8e1f6e8f50f2c158a18dd5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727730803002119690
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60cb914f0d7e2bbaf31e86346736a6dd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0566d21c749204134a258e8d8ac79e812d7fedb46e3c443b4403df983b45074e,PodSandboxId:6f4729ac569b3abc1e02350ad9d2c41ce5359cbeb2774c905243e1ed0d277402,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727730802979
265963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 405f938f252475a964680a5d44e32173,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e37fa68f2a1951969df50ca55fe27f8a723f04cebab7a4758236d5733c0760cf,PodSandboxId:f0ad3931b0ae76b62980f7e56571ac517f34d9d5b713ab6942a306b61c3a26d7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727730802943041907,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e026db1de1b360d400383807119e0f42,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47e92ecb8a0c3aac853211a7abd5c609e2bb75bd75908851c0c3713a3b66f3d0,PodSandboxId:93f9864dd86bff6d1c24e45c20a6ad995151ba9050eb36db50b15a6f7536fff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727730802901379791,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66878a53ff8e421affd026377e49581a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c849c74a929594dad8efc1ce428cad3f9973013c4d91759cdfce50a0da6b92,PodSandboxId:e648124d4d705c3ed22d1e53880b27aa172b6d6f3b701aaf40d04875aad07cbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727730519535964178,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e026db1de1b360d400383807119e0f42,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8e25470a-6032-4865-b9fe-dd9a5a0a338b name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.919658091Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b30aa2c2-9ae4-4a25-ba69-648855f54895 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.919746096Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b30aa2c2-9ae4-4a25-ba69-648855f54895 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.921070339Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d64d6d2-6671-4b82-8365-a519eaeff85e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.921760807Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731364921693543,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d64d6d2-6671-4b82-8365-a519eaeff85e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.922486884Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95013c3b-1066-4b97-9712-11a16e132181 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.922551136Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95013c3b-1066-4b97-9712-11a16e132181 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.922788408Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d60ed05d46e64cca1db86f00ccacb45d5a95bb26b27d30f7aca439b8cc1cf701,PodSandboxId:d9540a05389856c5ab80763ded59faa352e8d4ff1a56f9942d299d7d9a60b1c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727730815045416311,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a07a5a12-7420-4b57-b79d-982f4bb48232,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd980ef64f5ee55e937c5a15c5227d17d60838f77fa47ac594729f27a9fd8d7,PodSandboxId:9dda41bfa3440fa3236f74a67cf60d09f954cf82d0411255da69f2d0ed0fda2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730814497122204,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gt5tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165faaf0-866c-4097-9bdb-ed58fe8d7395,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:230b9e029d92388fe72b759827e782e4da254c9ace35ca3d3e86be33515cc837,PodSandboxId:17ab0462720101799c02aa044ce3ba13798e980661c2333061d221355749afeb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730814424548516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sgsbn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
97fdb50-c6a0-4ef8-8c01-ea45ed18b72a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b79ac99620cffe88eed23aa8ba0c4f0efba98458aa23a19a8def96edb1a7631f,PodSandboxId:8104984489a3da34604fa4aed4c224abe1ee3d1b218ba5ce5367b3352fbc7b52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727730813952552871,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-glbsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f68e378f-ce0f-4603-bd8e-93334f04f7a7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499a029ecee201160037c5b7802545475ebf57529e8e9145d39aab98a685b790,PodSandboxId:1064ddbe5f838121ecf09f4533a68bd2e9fe23ddd8e1f6e8f50f2c158a18dd5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727730803002119690
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60cb914f0d7e2bbaf31e86346736a6dd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0566d21c749204134a258e8d8ac79e812d7fedb46e3c443b4403df983b45074e,PodSandboxId:6f4729ac569b3abc1e02350ad9d2c41ce5359cbeb2774c905243e1ed0d277402,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727730802979
265963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 405f938f252475a964680a5d44e32173,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e37fa68f2a1951969df50ca55fe27f8a723f04cebab7a4758236d5733c0760cf,PodSandboxId:f0ad3931b0ae76b62980f7e56571ac517f34d9d5b713ab6942a306b61c3a26d7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727730802943041907,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e026db1de1b360d400383807119e0f42,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47e92ecb8a0c3aac853211a7abd5c609e2bb75bd75908851c0c3713a3b66f3d0,PodSandboxId:93f9864dd86bff6d1c24e45c20a6ad995151ba9050eb36db50b15a6f7536fff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727730802901379791,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66878a53ff8e421affd026377e49581a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c849c74a929594dad8efc1ce428cad3f9973013c4d91759cdfce50a0da6b92,PodSandboxId:e648124d4d705c3ed22d1e53880b27aa172b6d6f3b701aaf40d04875aad07cbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727730519535964178,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e026db1de1b360d400383807119e0f42,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=95013c3b-1066-4b97-9712-11a16e132181 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.958247865Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c40f6724-e71d-4a2e-87dd-1cb0ef3ba737 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.958320815Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c40f6724-e71d-4a2e-87dd-1cb0ef3ba737 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.959539223Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=049b7308-8789-48f1-8b8f-9de25ca4933e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.960034871Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731364960009588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=049b7308-8789-48f1-8b8f-9de25ca4933e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.960767056Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ffdcaa5-0f1c-415c-863e-f8fd6cb926cc name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.960858851Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ffdcaa5-0f1c-415c-863e-f8fd6cb926cc name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:22:44 embed-certs-256103 crio[700]: time="2024-09-30 21:22:44.961072755Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d60ed05d46e64cca1db86f00ccacb45d5a95bb26b27d30f7aca439b8cc1cf701,PodSandboxId:d9540a05389856c5ab80763ded59faa352e8d4ff1a56f9942d299d7d9a60b1c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727730815045416311,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a07a5a12-7420-4b57-b79d-982f4bb48232,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd980ef64f5ee55e937c5a15c5227d17d60838f77fa47ac594729f27a9fd8d7,PodSandboxId:9dda41bfa3440fa3236f74a67cf60d09f954cf82d0411255da69f2d0ed0fda2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730814497122204,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gt5tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165faaf0-866c-4097-9bdb-ed58fe8d7395,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:230b9e029d92388fe72b759827e782e4da254c9ace35ca3d3e86be33515cc837,PodSandboxId:17ab0462720101799c02aa044ce3ba13798e980661c2333061d221355749afeb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730814424548516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sgsbn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
97fdb50-c6a0-4ef8-8c01-ea45ed18b72a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b79ac99620cffe88eed23aa8ba0c4f0efba98458aa23a19a8def96edb1a7631f,PodSandboxId:8104984489a3da34604fa4aed4c224abe1ee3d1b218ba5ce5367b3352fbc7b52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727730813952552871,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-glbsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f68e378f-ce0f-4603-bd8e-93334f04f7a7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499a029ecee201160037c5b7802545475ebf57529e8e9145d39aab98a685b790,PodSandboxId:1064ddbe5f838121ecf09f4533a68bd2e9fe23ddd8e1f6e8f50f2c158a18dd5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727730803002119690
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60cb914f0d7e2bbaf31e86346736a6dd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0566d21c749204134a258e8d8ac79e812d7fedb46e3c443b4403df983b45074e,PodSandboxId:6f4729ac569b3abc1e02350ad9d2c41ce5359cbeb2774c905243e1ed0d277402,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727730802979
265963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 405f938f252475a964680a5d44e32173,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e37fa68f2a1951969df50ca55fe27f8a723f04cebab7a4758236d5733c0760cf,PodSandboxId:f0ad3931b0ae76b62980f7e56571ac517f34d9d5b713ab6942a306b61c3a26d7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727730802943041907,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e026db1de1b360d400383807119e0f42,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47e92ecb8a0c3aac853211a7abd5c609e2bb75bd75908851c0c3713a3b66f3d0,PodSandboxId:93f9864dd86bff6d1c24e45c20a6ad995151ba9050eb36db50b15a6f7536fff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727730802901379791,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66878a53ff8e421affd026377e49581a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c849c74a929594dad8efc1ce428cad3f9973013c4d91759cdfce50a0da6b92,PodSandboxId:e648124d4d705c3ed22d1e53880b27aa172b6d6f3b701aaf40d04875aad07cbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727730519535964178,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e026db1de1b360d400383807119e0f42,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ffdcaa5-0f1c-415c-863e-f8fd6cb926cc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d60ed05d46e64       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   d9540a0538985       storage-provisioner
	4bd980ef64f5e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   9dda41bfa3440       coredns-7c65d6cfc9-gt5tt
	230b9e029d923       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   17ab046272010       coredns-7c65d6cfc9-sgsbn
	b79ac99620cff       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   8104984489a3d       kube-proxy-glbsg
	499a029ecee20       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   1064ddbe5f838       kube-controller-manager-embed-certs-256103
	0566d21c74920       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   6f4729ac569b3       kube-scheduler-embed-certs-256103
	e37fa68f2a195       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   f0ad3931b0ae7       kube-apiserver-embed-certs-256103
	47e92ecb8a0c3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   93f9864dd86bf       etcd-embed-certs-256103
	c7c849c74a929       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   e648124d4d705       kube-apiserver-embed-certs-256103
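The table above is CRI-O's view of the node, the same data polled over the CRI API in the debug log that precedes it. A roughly equivalent listing can usually be pulled straight from the guest; a minimal sketch, assuming the profile name embed-certs-256103 taken from these logs and that minikube ssh accepts a quoted command:

  minikube ssh -p embed-certs-256103 "sudo crictl ps -a"

crictl ps -a includes exited containers, which is how the earlier kube-apiserver attempt (restart count 1) shows up alongside the running attempt 2.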
	
	
	==> coredns [230b9e029d92388fe72b759827e782e4da254c9ace35ca3d3e86be33515cc837] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [4bd980ef64f5ee55e937c5a15c5227d17d60838f77fa47ac594729f27a9fd8d7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-256103
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-256103
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=embed-certs-256103
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T21_13_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 21:13:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-256103
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 21:22:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 21:18:44 +0000   Mon, 30 Sep 2024 21:13:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 21:18:44 +0000   Mon, 30 Sep 2024 21:13:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 21:18:44 +0000   Mon, 30 Sep 2024 21:13:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 21:18:44 +0000   Mon, 30 Sep 2024 21:13:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.90
	  Hostname:    embed-certs-256103
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 069094f552e54029b7b56481eecb511b
	  System UUID:                069094f5-52e5-4029-b7b5-6481eecb511b
	  Boot ID:                    6b70f5e5-835e-4ab7-b9c6-cdf339ee44dc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-gt5tt                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 coredns-7c65d6cfc9-sgsbn                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 etcd-embed-certs-256103                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-embed-certs-256103             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-embed-certs-256103    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-glbsg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-embed-certs-256103             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-6867b74b74-5mhkh               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m11s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m10s  kube-proxy       
	  Normal  Starting                 9m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s  kubelet          Node embed-certs-256103 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s  kubelet          Node embed-certs-256103 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s  kubelet          Node embed-certs-256103 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m13s  node-controller  Node embed-certs-256103 event: Registered Node embed-certs-256103 in Controller
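The node description above is standard kubectl describe output for the control-plane node; it can typically be regenerated against the same cluster, assuming minikube's usual convention of naming the kubectl context after the profile:

  kubectl --context embed-certs-256103 describe node embed-certs-256103

The Allocated resources block is the baseline before any test workload lands: 950m of the node's 2 CPUs and 440Mi of roughly 2Gi of memory are already requested by kube-system pods.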
	
	
	==> dmesg <==
	[  +0.053525] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042751] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.143984] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.968643] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.569015] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.123461] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.068083] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054663] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.196500] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.111848] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.269488] systemd-fstab-generator[692]: Ignoring "noauto" option for root device
	[  +4.058827] systemd-fstab-generator[783]: Ignoring "noauto" option for root device
	[  +1.866348] systemd-fstab-generator[903]: Ignoring "noauto" option for root device
	[  +0.080396] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.531169] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.299145] kauditd_printk_skb: 85 callbacks suppressed
	[Sep30 21:13] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.179514] systemd-fstab-generator[2556]: Ignoring "noauto" option for root device
	[  +4.574300] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.474913] systemd-fstab-generator[2879]: Ignoring "noauto" option for root device
	[  +5.363641] systemd-fstab-generator[2989]: Ignoring "noauto" option for root device
	[  +0.116529] kauditd_printk_skb: 14 callbacks suppressed
	[  +9.311974] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [47e92ecb8a0c3aac853211a7abd5c609e2bb75bd75908851c0c3713a3b66f3d0] <==
	{"level":"info","ts":"2024-09-30T21:13:23.238683Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-30T21:13:23.241626Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"8d381aaacda0b9bd","initial-advertise-peer-urls":["https://192.168.39.90:2380"],"listen-peer-urls":["https://192.168.39.90:2380"],"advertise-client-urls":["https://192.168.39.90:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.90:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-30T21:13:23.242414Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-30T21:13:23.238845Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.90:2380"}
	{"level":"info","ts":"2024-09-30T21:13:23.244859Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.90:2380"}
	{"level":"info","ts":"2024-09-30T21:13:23.774879Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d381aaacda0b9bd is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-30T21:13:23.774975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d381aaacda0b9bd became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-30T21:13:23.775038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d381aaacda0b9bd received MsgPreVoteResp from 8d381aaacda0b9bd at term 1"}
	{"level":"info","ts":"2024-09-30T21:13:23.775078Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d381aaacda0b9bd became candidate at term 2"}
	{"level":"info","ts":"2024-09-30T21:13:23.775135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d381aaacda0b9bd received MsgVoteResp from 8d381aaacda0b9bd at term 2"}
	{"level":"info","ts":"2024-09-30T21:13:23.775146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d381aaacda0b9bd became leader at term 2"}
	{"level":"info","ts":"2024-09-30T21:13:23.775153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8d381aaacda0b9bd elected leader 8d381aaacda0b9bd at term 2"}
	{"level":"info","ts":"2024-09-30T21:13:23.780197Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8d381aaacda0b9bd","local-member-attributes":"{Name:embed-certs-256103 ClientURLs:[https://192.168.39.90:2379]}","request-path":"/0/members/8d381aaacda0b9bd/attributes","cluster-id":"8cf3a1558a63fa9e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T21:13:23.780282Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T21:13:23.780373Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T21:13:23.783390Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T21:13:23.784883Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T21:13:23.799381Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T21:13:23.799419Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-30T21:13:23.799514Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8cf3a1558a63fa9e","local-member-id":"8d381aaacda0b9bd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T21:13:23.802981Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T21:13:23.803760Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.90:2379"}
	{"level":"info","ts":"2024-09-30T21:13:23.805349Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T21:13:23.805614Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T21:13:23.816610Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:22:45 up 14 min,  0 users,  load average: 0.16, 0.24, 0.19
	Linux embed-certs-256103 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c7c849c74a929594dad8efc1ce428cad3f9973013c4d91759cdfce50a0da6b92] <==
	W0930 21:13:19.481084       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.542030       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.571452       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.574854       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.626591       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.741079       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.745515       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.775677       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.775990       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.785400       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.799269       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.811074       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.849230       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.854691       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.889900       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:20.039620       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:20.099688       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:20.103199       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:20.112709       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:20.211231       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:20.307726       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:20.330453       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:20.390497       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:20.481076       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:20.489947       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e37fa68f2a1951969df50ca55fe27f8a723f04cebab7a4758236d5733c0760cf] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0930 21:18:26.516566       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:18:26.516624       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0930 21:18:26.517712       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0930 21:18:26.517897       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0930 21:19:26.519051       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:19:26.519362       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0930 21:19:26.519113       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:19:26.519534       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0930 21:19:26.520728       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0930 21:19:26.520798       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0930 21:21:26.520945       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:21:26.521282       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0930 21:21:26.521350       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:21:26.521444       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0930 21:21:26.522529       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0930 21:21:26.522602       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
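The pattern above, repeated 503s whenever the apiserver tries to fetch the OpenAPI spec for v1beta1.metrics.k8s.io, means the aggregated metrics API never became available on this cluster. Two quick checks against the same cluster narrow it down (the k8s-app=metrics-server label is an assumption based on the usual metrics-server manifests, not something shown in this log):

  kubectl --context embed-certs-256103 get apiservice v1beta1.metrics.k8s.io
  kubectl --context embed-certs-256103 -n kube-system get pods -l k8s-app=metrics-server

An Available=False condition on the APIService together with a pod that is not Ready points at metrics-server itself rather than at the aggregation layer.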
	
	
	==> kube-controller-manager [499a029ecee201160037c5b7802545475ebf57529e8e9145d39aab98a685b790] <==
	E0930 21:17:32.568657       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:17:33.017283       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:18:02.576359       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:18:03.026049       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:18:32.584734       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:18:33.036941       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0930 21:18:44.762608       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-256103"
	E0930 21:19:02.591763       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:19:03.044565       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:19:32.598140       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:19:33.053668       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0930 21:19:35.117798       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="214.653µs"
	I0930 21:19:50.119401       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="143.541µs"
	E0930 21:20:02.604175       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:20:03.061585       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:20:32.611144       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:20:33.069115       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:21:02.616549       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:21:03.077698       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:21:32.624372       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:21:33.085728       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:22:02.630104       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:22:03.094179       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:22:32.636707       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:22:33.102364       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [b79ac99620cffe88eed23aa8ba0c4f0efba98458aa23a19a8def96edb1a7631f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 21:13:34.692949       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 21:13:34.739604       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.90"]
	E0930 21:13:34.739712       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 21:13:34.964541       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 21:13:34.964597       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 21:13:34.964626       1 server_linux.go:169] "Using iptables Proxier"
	I0930 21:13:34.969048       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 21:13:34.969368       1 server.go:483] "Version info" version="v1.31.1"
	I0930 21:13:34.969410       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 21:13:34.971670       1 config.go:199] "Starting service config controller"
	I0930 21:13:34.971757       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 21:13:34.971856       1 config.go:105] "Starting endpoint slice config controller"
	I0930 21:13:34.971879       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 21:13:34.972423       1 config.go:328] "Starting node config controller"
	I0930 21:13:34.975846       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 21:13:35.072454       1 shared_informer.go:320] Caches are synced for service config
	I0930 21:13:35.072550       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 21:13:35.076301       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0566d21c749204134a258e8d8ac79e812d7fedb46e3c443b4403df983b45074e] <==
	W0930 21:13:25.628460       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0930 21:13:25.631122       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 21:13:25.633862       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0930 21:13:25.633943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0930 21:13:25.636114       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 21:13:25.636211       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0930 21:13:26.539515       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0930 21:13:26.539715       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 21:13:26.556968       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0930 21:13:26.557013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 21:13:26.580495       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0930 21:13:26.581015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 21:13:26.606968       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0930 21:13:26.607019       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 21:13:26.621661       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0930 21:13:26.621734       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 21:13:26.690633       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0930 21:13:26.690677       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 21:13:26.806531       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0930 21:13:26.806601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 21:13:26.889148       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0930 21:13:26.889204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 21:13:26.912912       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0930 21:13:26.912957       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0930 21:13:27.325934       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 21:21:33 embed-certs-256103 kubelet[2886]: E0930 21:21:33.100031    2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5mhkh" podUID="470424ec-bb66-4d62-904d-0d4ad93fa5bf"
	Sep 30 21:21:38 embed-certs-256103 kubelet[2886]: E0930 21:21:38.253618    2886 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731298253347323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:21:38 embed-certs-256103 kubelet[2886]: E0930 21:21:38.253644    2886 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731298253347323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:21:48 embed-certs-256103 kubelet[2886]: E0930 21:21:48.101375    2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5mhkh" podUID="470424ec-bb66-4d62-904d-0d4ad93fa5bf"
	Sep 30 21:21:48 embed-certs-256103 kubelet[2886]: E0930 21:21:48.258361    2886 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731308256712216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:21:48 embed-certs-256103 kubelet[2886]: E0930 21:21:48.258413    2886 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731308256712216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:21:58 embed-certs-256103 kubelet[2886]: E0930 21:21:58.261082    2886 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731318260617159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:21:58 embed-certs-256103 kubelet[2886]: E0930 21:21:58.261124    2886 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731318260617159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:22:02 embed-certs-256103 kubelet[2886]: E0930 21:22:02.100645    2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5mhkh" podUID="470424ec-bb66-4d62-904d-0d4ad93fa5bf"
	Sep 30 21:22:08 embed-certs-256103 kubelet[2886]: E0930 21:22:08.263012    2886 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731328262706240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:22:08 embed-certs-256103 kubelet[2886]: E0930 21:22:08.263054    2886 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731328262706240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:22:13 embed-certs-256103 kubelet[2886]: E0930 21:22:13.099761    2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5mhkh" podUID="470424ec-bb66-4d62-904d-0d4ad93fa5bf"
	Sep 30 21:22:18 embed-certs-256103 kubelet[2886]: E0930 21:22:18.264679    2886 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731338264334573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:22:18 embed-certs-256103 kubelet[2886]: E0930 21:22:18.265034    2886 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731338264334573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:22:26 embed-certs-256103 kubelet[2886]: E0930 21:22:26.102048    2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5mhkh" podUID="470424ec-bb66-4d62-904d-0d4ad93fa5bf"
	Sep 30 21:22:28 embed-certs-256103 kubelet[2886]: E0930 21:22:28.123026    2886 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 21:22:28 embed-certs-256103 kubelet[2886]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 21:22:28 embed-certs-256103 kubelet[2886]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 21:22:28 embed-certs-256103 kubelet[2886]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 21:22:28 embed-certs-256103 kubelet[2886]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 21:22:28 embed-certs-256103 kubelet[2886]: E0930 21:22:28.266959    2886 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731348266353454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:22:28 embed-certs-256103 kubelet[2886]: E0930 21:22:28.267007    2886 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731348266353454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:22:38 embed-certs-256103 kubelet[2886]: E0930 21:22:38.269343    2886 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731358268738620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:22:38 embed-certs-256103 kubelet[2886]: E0930 21:22:38.269385    2886 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731358268738620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:22:41 embed-certs-256103 kubelet[2886]: E0930 21:22:41.100104    2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5mhkh" podUID="470424ec-bb66-4d62-904d-0d4ad93fa5bf"
	
	
	==> storage-provisioner [d60ed05d46e64cca1db86f00ccacb45d5a95bb26b27d30f7aca439b8cc1cf701] <==
	I0930 21:13:35.169001       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0930 21:13:35.203240       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0930 21:13:35.203310       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0930 21:13:35.239188       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0930 21:13:35.239352       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-256103_8a7d20c6-199a-4fca-a63b-d33200502e8e!
	I0930 21:13:35.244638       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"94d1c1b3-3132-464e-ae13-9d6b20a67810", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-256103_8a7d20c6-199a-4fca-a63b-d33200502e8e became leader
	I0930 21:13:35.339911       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-256103_8a7d20c6-199a-4fca-a63b-d33200502e8e!
	

                                                
                                                
-- /stdout --
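
The captured logs above point at two related symptoms: the metrics-server pod is stuck in ImagePullBackOff because the image it is told to pull (fake.domain/registry.k8s.io/echoserver:1.4) cannot be resolved, and because that pod never becomes ready the aggregated v1beta1.metrics.k8s.io APIService keeps answering 503, which is consistent with the "stale GroupVersion discovery: metrics.k8s.io/v1beta1" errors in kube-controller-manager. The commands below are only a diagnostic sketch, not part of the test; they assume the embed-certs-256103 context shown above and the stock k8s-app=metrics-server label from the upstream metrics-server manifests.

	# Is the aggregated metrics API actually available?
	kubectl --context embed-certs-256103 get apiservice v1beta1.metrics.k8s.io

	# Which image is the metrics-server deployment trying to pull?
	kubectl --context embed-certs-256103 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'

	# Pod events behind the ImagePullBackOff (label assumed from stock manifests)
	kubectl --context embed-certs-256103 -n kube-system describe pod -l k8s-app=metrics-server
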
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-256103 -n embed-certs-256103
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-256103 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-5mhkh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-256103 describe pod metrics-server-6867b74b74-5mhkh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-256103 describe pod metrics-server-6867b74b74-5mhkh: exit status 1 (59.944451ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-5mhkh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-256103 describe pod metrics-server-6867b74b74-5mhkh: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.26s)
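
The NotFound from the post-mortem describe above is most likely a namespace mismatch rather than the pod disappearing: the helper runs `describe pod` without `-n kube-system`, so kubectl looks in the default namespace. A hedged re-run of the same command with the namespace made explicit would look like this:

	# Same post-mortem describe, namespace made explicit (sketch only)
	kubectl --context embed-certs-256103 -n kube-system describe pod metrics-server-6867b74b74-5mhkh
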

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
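Every poll in the warnings below fails with "connection refused" against https://192.168.72.159:8443, i.e. the old-k8s-version apiserver itself is not accepting connections after the restart, so the dashboard pods can never be observed. A minimal sketch of how one might confirm that from the same host; the profile name is a placeholder, not taken from this report:

	# Placeholder profile name; substitute the old-k8s-version profile used in this run
	minikube status -p <old-k8s-version-profile>

	# Raw reachability check against the endpoint the test is polling
	curl -k https://192.168.72.159:8443/healthz
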
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
E0930 21:16:34.591228   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/calico-207733/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
E0930 21:16:55.760112   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/custom-flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
E0930 21:17:15.349041   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kindnet-207733/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
E0930 21:17:51.838499   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/enable-default-cni-207733/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
E0930 21:17:57.654764   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/calico-207733/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
E0930 21:18:08.419328   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
E0930 21:18:15.484933   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/bridge-207733/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
E0930 21:18:18.822219   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/custom-flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
E0930 21:18:28.935755   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
E0930 21:18:58.383779   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
E0930 21:19:14.904806   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/enable-default-cni-207733/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
E0930 21:19:31.483581   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:19:31.997806   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/auto-207733/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
E0930 21:19:38.547466   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/bridge-207733/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
E0930 21:20:52.286549   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kindnet-207733/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
E0930 21:20:55.310744   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
E0930 21:21:34.590498   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/calico-207733/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
E0930 21:21:55.759549   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/custom-flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
E0930 21:22:51.838677   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/enable-default-cni-207733/client.crt: no such file or directory" logger="UnhandledError"
[the same helpers_test.go:329 connection-refused warning repeated 17 times here; duplicates condensed]
E0930 21:23:08.419366   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
[the same helpers_test.go:329 connection-refused warning repeated 7 times here; duplicates condensed]
E0930 21:23:15.484940   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/bridge-207733/client.crt: no such file or directory" logger="UnhandledError"
[the same helpers_test.go:329 connection-refused warning repeated 13 times here; duplicates condensed]
E0930 21:23:28.936339   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
[the same helpers_test.go:329 connection-refused warning repeated 63 times here; duplicates condensed]
E0930 21:24:31.997466   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/auto-207733/client.crt: no such file or directory" logger="UnhandledError"
[the same helpers_test.go:329 connection-refused warning repeated 47 times here; duplicates condensed]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-621406 -n old-k8s-version-621406
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-621406 -n old-k8s-version-621406: exit status 2 (228.062301ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-621406" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
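The repeated warnings above all come from the same label-selector poll against the stopped API server. As a rough illustration only (this is not minikube's helpers_test.go code; the kubeconfig path, retry interval, and overall loop structure below are assumptions), a client-go loop that produces this pattern of output could look like:

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path for the old-k8s-version-621406 profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// 9m0s matches the wait budget reported by the failing test.
		ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
		defer cancel()
		for {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx,
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			if err != nil {
				if ctx.Err() != nil {
					// Corresponds to the final "context deadline exceeded" message.
					fmt.Println("gave up:", ctx.Err())
					return
				}
				// Corresponds to the repeated connection-refused warnings.
				fmt.Println("WARNING: pod list returned:", err)
				time.Sleep(3 * time.Second) // assumed retry interval
				continue
			}
			fmt.Printf("found %d dashboard pods\n", len(pods.Items))
			return
		}
	}

Until the API server at 192.168.72.159:8443 accepts connections again, every iteration of such a loop logs the same connection-refused error, which is the pattern captured in the condensed runs above.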
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-621406 -n old-k8s-version-621406
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-621406 -n old-k8s-version-621406: exit status 2 (219.534977ms)

-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
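For reference, the --format flags used in the two status checks above are Go templates rendered against minikube's status output. The snippet below uses a simplified stand-in struct (not minikube's real status type; only the fields that appear in this log are modelled) to show how {{.Host}} and {{.APIServer}} yield the "Running" and "Stopped" strings seen here:

	package main

	import (
		"os"
		"text/template"
	)

	// status is a simplified stand-in, not minikube's real status type.
	type status struct {
		Host      string
		APIServer string
	}

	func main() {
		st := status{Host: "Running", APIServer: "Stopped"}
		for _, f := range []string{"{{.Host}}", "{{.APIServer}}"} {
			t := template.Must(template.New("fmt").Parse(f + "\n"))
			// Prints "Running" then "Stopped", matching the two status checks above.
			if err := t.Execute(os.Stdout, st); err != nil {
				panic(err)
			}
		}
	}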
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-621406 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-621406 logs -n 25: (1.703015648s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-207733 sudo                                 | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo                                 | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo                                 | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo find                            | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo crio                            | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-207733                                      | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-741890 | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | disable-driver-mounts-741890                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 21:00 UTC |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-256103            | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-997816             | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-997816                                   | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-291511  | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-621406        | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-256103                 | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC | 30 Sep 24 21:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-997816                  | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-997816                                   | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC | 30 Sep 24 21:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-291511       | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:12 UTC |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-621406                              | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:03 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-621406             | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-621406                              | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 21:03:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 21:03:42.750102   73900 out.go:345] Setting OutFile to fd 1 ...
	I0930 21:03:42.750367   73900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:03:42.750377   73900 out.go:358] Setting ErrFile to fd 2...
	I0930 21:03:42.750383   73900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:03:42.750578   73900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 21:03:42.751109   73900 out.go:352] Setting JSON to false
	I0930 21:03:42.752040   73900 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6366,"bootTime":1727723857,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 21:03:42.752140   73900 start.go:139] virtualization: kvm guest
	I0930 21:03:42.754146   73900 out.go:177] * [old-k8s-version-621406] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 21:03:42.755446   73900 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 21:03:42.755456   73900 notify.go:220] Checking for updates...
	I0930 21:03:42.758261   73900 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 21:03:42.759566   73900 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:03:42.760907   73900 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 21:03:42.762342   73900 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 21:03:42.763561   73900 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 21:03:42.765356   73900 config.go:182] Loaded profile config "old-k8s-version-621406": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0930 21:03:42.765773   73900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:03:42.765822   73900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:03:42.780605   73900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45071
	I0930 21:03:42.781022   73900 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:03:42.781550   73900 main.go:141] libmachine: Using API Version  1
	I0930 21:03:42.781583   73900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:03:42.781912   73900 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:03:42.782160   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:03:42.784603   73900 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0930 21:03:42.785760   73900 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 21:03:42.786115   73900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:03:42.786156   73900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:03:42.800937   73900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37359
	I0930 21:03:42.801409   73900 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:03:42.801882   73900 main.go:141] libmachine: Using API Version  1
	I0930 21:03:42.801905   73900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:03:42.802216   73900 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:03:42.802397   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:03:42.838423   73900 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 21:03:42.839832   73900 start.go:297] selected driver: kvm2
	I0930 21:03:42.839847   73900 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:03:42.839953   73900 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 21:03:42.840605   73900 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 21:03:42.840667   73900 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 21:03:42.856119   73900 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 21:03:42.856550   73900 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:03:42.856580   73900 cni.go:84] Creating CNI manager for ""
	I0930 21:03:42.856630   73900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:03:42.856665   73900 start.go:340] cluster config:
	{Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:03:42.856778   73900 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 21:03:42.858732   73900 out.go:177] * Starting "old-k8s-version-621406" primary control-plane node in "old-k8s-version-621406" cluster
	I0930 21:03:42.859876   73900 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 21:03:42.859912   73900 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0930 21:03:42.859929   73900 cache.go:56] Caching tarball of preloaded images
	I0930 21:03:42.860020   73900 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 21:03:42.860031   73900 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0930 21:03:42.860153   73900 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/config.json ...
	I0930 21:03:42.860340   73900 start.go:360] acquireMachinesLock for old-k8s-version-621406: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 21:03:44.619810   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:03:47.691872   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:03:53.771838   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:03:56.843848   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:02.923822   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:05.995871   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:12.075814   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:15.147854   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:21.227790   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:24.299842   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:30.379801   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:33.451787   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:39.531808   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:42.603838   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:48.683904   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:51.755939   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:57.835834   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:00.907789   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:06.987875   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:10.059892   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:16.139832   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:19.211908   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:25.291812   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:28.363915   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:34.443827   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:37.515928   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:43.595824   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:46.667934   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:52.747851   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:55.819883   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:01.899789   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:04.971946   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:11.051812   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:14.123833   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:20.203805   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:23.275875   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:29.355806   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:32.427931   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:38.507837   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:41.579909   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:47.659786   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:50.731827   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:56.811833   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:59.883878   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:07:05.963833   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:07:09.035828   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:07:12.040058   73375 start.go:364] duration metric: took 4m26.951572628s to acquireMachinesLock for "no-preload-997816"
	I0930 21:07:12.040115   73375 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:07:12.040126   73375 fix.go:54] fixHost starting: 
	I0930 21:07:12.040448   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:12.040485   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:12.057054   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37473
	I0930 21:07:12.057624   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:12.058143   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:12.058173   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:12.058523   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:12.058739   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:12.058873   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:12.060479   73375 fix.go:112] recreateIfNeeded on no-preload-997816: state=Stopped err=<nil>
	I0930 21:07:12.060499   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	W0930 21:07:12.060640   73375 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:07:12.062653   73375 out.go:177] * Restarting existing kvm2 VM for "no-preload-997816" ...
	I0930 21:07:12.037683   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:07:12.037732   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:07:12.038031   73256 buildroot.go:166] provisioning hostname "embed-certs-256103"
	I0930 21:07:12.038055   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:07:12.038234   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:07:12.039910   73256 machine.go:96] duration metric: took 4m37.42208497s to provisionDockerMachine
	I0930 21:07:12.039954   73256 fix.go:56] duration metric: took 4m37.444804798s for fixHost
	I0930 21:07:12.039962   73256 start.go:83] releasing machines lock for "embed-certs-256103", held for 4m37.444833727s
	W0930 21:07:12.039989   73256 start.go:714] error starting host: provision: host is not running
	W0930 21:07:12.040104   73256 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0930 21:07:12.040116   73256 start.go:729] Will try again in 5 seconds ...
	I0930 21:07:12.063941   73375 main.go:141] libmachine: (no-preload-997816) Calling .Start
	I0930 21:07:12.064167   73375 main.go:141] libmachine: (no-preload-997816) Ensuring networks are active...
	I0930 21:07:12.065080   73375 main.go:141] libmachine: (no-preload-997816) Ensuring network default is active
	I0930 21:07:12.065489   73375 main.go:141] libmachine: (no-preload-997816) Ensuring network mk-no-preload-997816 is active
	I0930 21:07:12.065993   73375 main.go:141] libmachine: (no-preload-997816) Getting domain xml...
	I0930 21:07:12.066923   73375 main.go:141] libmachine: (no-preload-997816) Creating domain...
	I0930 21:07:13.297091   73375 main.go:141] libmachine: (no-preload-997816) Waiting to get IP...
	I0930 21:07:13.297965   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:13.298386   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:13.298473   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:13.298370   74631 retry.go:31] will retry after 312.032565ms: waiting for machine to come up
	I0930 21:07:13.612088   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:13.612583   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:13.612607   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:13.612519   74631 retry.go:31] will retry after 292.985742ms: waiting for machine to come up
	I0930 21:07:13.907355   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:13.907794   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:13.907817   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:13.907754   74631 retry.go:31] will retry after 451.618632ms: waiting for machine to come up
	I0930 21:07:14.361536   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:14.361990   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:14.362054   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:14.361947   74631 retry.go:31] will retry after 599.246635ms: waiting for machine to come up
	I0930 21:07:14.962861   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:14.963341   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:14.963369   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:14.963294   74631 retry.go:31] will retry after 748.726096ms: waiting for machine to come up
	I0930 21:07:17.040758   73256 start.go:360] acquireMachinesLock for embed-certs-256103: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 21:07:15.713258   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:15.713576   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:15.713601   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:15.713525   74631 retry.go:31] will retry after 907.199669ms: waiting for machine to come up
	I0930 21:07:16.622784   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:16.623275   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:16.623307   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:16.623211   74631 retry.go:31] will retry after 744.978665ms: waiting for machine to come up
	I0930 21:07:17.369735   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:17.370206   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:17.370231   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:17.370154   74631 retry.go:31] will retry after 1.238609703s: waiting for machine to come up
	I0930 21:07:18.610618   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:18.610967   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:18.610989   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:18.610928   74631 retry.go:31] will retry after 1.354775356s: waiting for machine to come up
	I0930 21:07:19.967473   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:19.967892   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:19.967916   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:19.967851   74631 retry.go:31] will retry after 2.26449082s: waiting for machine to come up
	I0930 21:07:22.234066   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:22.234514   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:22.234536   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:22.234474   74631 retry.go:31] will retry after 2.728158374s: waiting for machine to come up
	I0930 21:07:24.966375   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:24.966759   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:24.966782   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:24.966724   74631 retry.go:31] will retry after 3.119117729s: waiting for machine to come up
	I0930 21:07:29.336238   73707 start.go:364] duration metric: took 3m58.92874513s to acquireMachinesLock for "default-k8s-diff-port-291511"
	I0930 21:07:29.336327   73707 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:07:29.336347   73707 fix.go:54] fixHost starting: 
	I0930 21:07:29.336726   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:29.336779   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:29.354404   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45095
	I0930 21:07:29.354848   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:29.355331   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:07:29.355352   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:29.355882   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:29.356081   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:29.356249   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:07:29.358109   73707 fix.go:112] recreateIfNeeded on default-k8s-diff-port-291511: state=Stopped err=<nil>
	I0930 21:07:29.358155   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	W0930 21:07:29.358336   73707 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:07:29.361072   73707 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-291511" ...
	I0930 21:07:28.087153   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.087604   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has current primary IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.087636   73375 main.go:141] libmachine: (no-preload-997816) Found IP for machine: 192.168.61.93
	I0930 21:07:28.087644   73375 main.go:141] libmachine: (no-preload-997816) Reserving static IP address...
	I0930 21:07:28.088047   73375 main.go:141] libmachine: (no-preload-997816) Reserved static IP address: 192.168.61.93
	I0930 21:07:28.088068   73375 main.go:141] libmachine: (no-preload-997816) Waiting for SSH to be available...
	I0930 21:07:28.088090   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "no-preload-997816", mac: "52:54:00:cb:3d:73", ip: "192.168.61.93"} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.088158   73375 main.go:141] libmachine: (no-preload-997816) DBG | skip adding static IP to network mk-no-preload-997816 - found existing host DHCP lease matching {name: "no-preload-997816", mac: "52:54:00:cb:3d:73", ip: "192.168.61.93"}
	I0930 21:07:28.088181   73375 main.go:141] libmachine: (no-preload-997816) DBG | Getting to WaitForSSH function...
	I0930 21:07:28.090195   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.090522   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.090547   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.090722   73375 main.go:141] libmachine: (no-preload-997816) DBG | Using SSH client type: external
	I0930 21:07:28.090739   73375 main.go:141] libmachine: (no-preload-997816) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa (-rw-------)
	I0930 21:07:28.090767   73375 main.go:141] libmachine: (no-preload-997816) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.93 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:07:28.090787   73375 main.go:141] libmachine: (no-preload-997816) DBG | About to run SSH command:
	I0930 21:07:28.090801   73375 main.go:141] libmachine: (no-preload-997816) DBG | exit 0
	I0930 21:07:28.211669   73375 main.go:141] libmachine: (no-preload-997816) DBG | SSH cmd err, output: <nil>: 
	I0930 21:07:28.212073   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetConfigRaw
	I0930 21:07:28.212714   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetIP
	I0930 21:07:28.215442   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.215934   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.215951   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.216186   73375 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/config.json ...
	I0930 21:07:28.216370   73375 machine.go:93] provisionDockerMachine start ...
	I0930 21:07:28.216386   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:28.216575   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.218963   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.219423   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.219455   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.219604   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.219770   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.219948   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.220057   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.220252   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:28.220441   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:28.220452   73375 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:07:28.315814   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:07:28.315853   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetMachineName
	I0930 21:07:28.316131   73375 buildroot.go:166] provisioning hostname "no-preload-997816"
	I0930 21:07:28.316161   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetMachineName
	I0930 21:07:28.316372   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.319253   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.319506   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.319548   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.319711   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.319903   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.320057   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.320182   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.320383   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:28.320592   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:28.320606   73375 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-997816 && echo "no-preload-997816" | sudo tee /etc/hostname
	I0930 21:07:28.433652   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-997816
	
	I0930 21:07:28.433686   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.436989   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.437350   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.437389   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.437611   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.437784   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.437957   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.438075   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.438267   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:28.438487   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:28.438512   73375 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-997816' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-997816/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-997816' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:07:28.544056   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:07:28.544088   73375 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:07:28.544112   73375 buildroot.go:174] setting up certificates
	I0930 21:07:28.544122   73375 provision.go:84] configureAuth start
	I0930 21:07:28.544135   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetMachineName
	I0930 21:07:28.544418   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetIP
	I0930 21:07:28.546960   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.547363   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.547384   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.547570   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.549918   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.550325   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.550353   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.550535   73375 provision.go:143] copyHostCerts
	I0930 21:07:28.550612   73375 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:07:28.550627   73375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:07:28.550711   73375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:07:28.550804   73375 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:07:28.550812   73375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:07:28.550837   73375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:07:28.550893   73375 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:07:28.550900   73375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:07:28.550920   73375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:07:28.550967   73375 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.no-preload-997816 san=[127.0.0.1 192.168.61.93 localhost minikube no-preload-997816]
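[editor's sketch] provision.go generates the docker-machine-style server certificate in Go; the log only shows the org and SAN list. Purely as an illustration (not minikube's actual code path), an equivalent openssl invocation against the same CA material would look roughly like this in bash; file names mirror the paths in the log:

    CERTS=$HOME/.minikube/certs                          # assumed location of ca.pem / ca-key.pem
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.no-preload-997816"
    openssl x509 -req -in server.csr -days 365 \
      -CA "$CERTS/ca.pem" -CAkey "$CERTS/ca-key.pem" -CAcreateserial \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.61.93,DNS:localhost,DNS:minikube,DNS:no-preload-997816') \
      -out server.pem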
	I0930 21:07:28.744306   73375 provision.go:177] copyRemoteCerts
	I0930 21:07:28.744364   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:07:28.744386   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.747024   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.747368   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.747401   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.747615   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.747813   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.747973   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.748133   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:28.825616   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0930 21:07:28.849513   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 21:07:28.872666   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:07:28.895673   73375 provision.go:87] duration metric: took 351.536833ms to configureAuth
	I0930 21:07:28.895708   73375 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:07:28.895896   73375 config.go:182] Loaded profile config "no-preload-997816": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:07:28.895975   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.898667   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.899067   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.899098   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.899324   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.899567   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.899703   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.899829   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.899946   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:28.900120   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:28.900134   73375 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:07:29.113855   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:07:29.113877   73375 machine.go:96] duration metric: took 897.495238ms to provisionDockerMachine
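[editor's sketch] The container-runtime step just above drops a one-line environment override and bounces CRI-O; the same write, restated so it can be run directly (the 10.96.0.0/12 service CIDR comes from the cluster config later in the log):

    # CRI-O override used by minikube's buildroot image
    sudo mkdir -p /etc/sysconfig
    echo "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio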
	I0930 21:07:29.113887   73375 start.go:293] postStartSetup for "no-preload-997816" (driver="kvm2")
	I0930 21:07:29.113897   73375 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:07:29.113921   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.114220   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:07:29.114254   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:29.117274   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.117619   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.117663   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.117816   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:29.118010   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.118159   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:29.118289   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:29.197962   73375 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:07:29.202135   73375 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:07:29.202166   73375 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:07:29.202237   73375 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:07:29.202321   73375 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:07:29.202406   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:07:29.211693   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:07:29.234503   73375 start.go:296] duration metric: took 120.601484ms for postStartSetup
	I0930 21:07:29.234582   73375 fix.go:56] duration metric: took 17.194433455s for fixHost
	I0930 21:07:29.234610   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:29.237134   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.237544   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.237574   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.237728   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:29.237912   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.238085   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.238199   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:29.238348   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:29.238506   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:29.238515   73375 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:07:29.336092   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730449.310327649
	
	I0930 21:07:29.336114   73375 fix.go:216] guest clock: 1727730449.310327649
	I0930 21:07:29.336123   73375 fix.go:229] Guest: 2024-09-30 21:07:29.310327649 +0000 UTC Remote: 2024-09-30 21:07:29.234588814 +0000 UTC m=+284.288095935 (delta=75.738835ms)
	I0930 21:07:29.336147   73375 fix.go:200] guest clock delta is within tolerance: 75.738835ms
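[editor's note] The tolerance check is plain subtraction of the two timestamps printed above; reproducing the reported delta with arbitrary-precision arithmetic:

    # guest clock minus host-side remote clock, in milliseconds
    echo '(1727730449.310327649 - 1727730449.234588814) * 1000' | bc -l
    # prints 75.738835000, i.e. the "delta=75.738835ms" reported in the log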
	I0930 21:07:29.336153   73375 start.go:83] releasing machines lock for "no-preload-997816", held for 17.296055752s
	I0930 21:07:29.336194   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.336478   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetIP
	I0930 21:07:29.339488   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.339864   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.339909   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.340070   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.340525   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.340697   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.340800   73375 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:07:29.340836   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:29.340930   73375 ssh_runner.go:195] Run: cat /version.json
	I0930 21:07:29.340955   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:29.343579   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.343941   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.343976   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.344010   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.344228   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:29.344405   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.344441   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.344471   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.344543   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:29.344616   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:29.344689   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:29.344784   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.344966   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:29.345105   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:29.420949   73375 ssh_runner.go:195] Run: systemctl --version
	I0930 21:07:29.465854   73375 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:07:29.616360   73375 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:07:29.624522   73375 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:07:29.624604   73375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:07:29.642176   73375 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
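[editor's sketch] Because minikube supplies its own bridge CNI config, the step above parks any stock bridge/podman configs under a .mk_disabled suffix instead of deleting them. The same move as a standalone command:

    # rename every bridge/podman CNI config so CRI-O ignores it
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;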
	I0930 21:07:29.642202   73375 start.go:495] detecting cgroup driver to use...
	I0930 21:07:29.642279   73375 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:07:29.657878   73375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:07:29.674555   73375 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:07:29.674614   73375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:07:29.690953   73375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:07:29.705425   73375 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:07:29.814602   73375 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:07:29.957009   73375 docker.go:233] disabling docker service ...
	I0930 21:07:29.957091   73375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:07:29.971419   73375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:07:29.362775   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Start
	I0930 21:07:29.363023   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Ensuring networks are active...
	I0930 21:07:29.364071   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Ensuring network default is active
	I0930 21:07:29.364456   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Ensuring network mk-default-k8s-diff-port-291511 is active
	I0930 21:07:29.364940   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Getting domain xml...
	I0930 21:07:29.365759   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Creating domain...
	I0930 21:07:29.987509   73375 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:07:30.112952   73375 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:07:30.239945   73375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
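[editor's sketch] The runtime-selection sequence above stops and masks the Docker-based runtimes so only CRI-O owns the CRI socket; condensed into one block:

    # keep cri-dockerd and dockerd from coming back on reboot
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet docker || echo "docker is inactive"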
	I0930 21:07:30.253298   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:07:30.271687   73375 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 21:07:30.271768   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.282267   73375 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:07:30.282339   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.292776   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.303893   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.315002   73375 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:07:30.326410   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.336951   73375 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.356016   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
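[editor's sketch] The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroupfs cgroup manager, conmon cgroup, and the unprivileged-port sysctl. Collected into one block (same edits, same file):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # pause image and cgroup driver
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    # conmon follows the pod cgroup
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # clear any stale minikube CNI dir, then (re)seed the unprivileged-port sysctl
    sudo rm -rf /etc/cni/net.mk
    sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
    sudo grep -q '^ *default_sysctls' "$CONF" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = [\n]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"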
	I0930 21:07:30.367847   73375 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:07:30.378650   73375 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:07:30.378703   73375 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:07:30.391768   73375 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
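[editor's sketch] The sysctl probe above fails until br_netfilter is loaded, which is why modprobe follows it; the kernel-prep step in two lines:

    # bridged traffic must hit iptables, and forwarding must be on, for pod networking
    sudo sysctl net.bridge.bridge-nf-call-iptables 2>/dev/null || sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'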
	I0930 21:07:30.401887   73375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:30.534771   73375 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 21:07:30.622017   73375 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:07:30.622087   73375 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:07:30.627221   73375 start.go:563] Will wait 60s for crictl version
	I0930 21:07:30.627294   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:30.633071   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:07:30.675743   73375 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 21:07:30.675830   73375 ssh_runner.go:195] Run: crio --version
	I0930 21:07:30.703470   73375 ssh_runner.go:195] Run: crio --version
	I0930 21:07:30.732424   73375 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 21:07:30.733714   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetIP
	I0930 21:07:30.737016   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:30.737380   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:30.737421   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:30.737690   73375 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0930 21:07:30.741714   73375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
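[editor's sketch] This check-then-rewrite gives the guest a stable host.minikube.internal name for the gateway IP; the same pattern is reused further down for control-plane.minikube.internal. Parameterized:

    NAME=host.minikube.internal        # or control-plane.minikube.internal
    IP=192.168.61.1                    # gateway (or API-server) address for the entry
    # drop any previous mapping for $NAME, then append the fresh one and swap the file in
    { grep -v "[[:space:]]$NAME\$" /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
    sudo cp "/tmp/hosts.$$" /etc/hosts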
	I0930 21:07:30.754767   73375 kubeadm.go:883] updating cluster {Name:no-preload-997816 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-997816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:07:30.754892   73375 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 21:07:30.754941   73375 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:07:30.794489   73375 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 21:07:30.794516   73375 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0930 21:07:30.794605   73375 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:30.794624   73375 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:30.794653   73375 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:30.794694   73375 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:30.794733   73375 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:30.794691   73375 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:30.794822   73375 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:30.794836   73375 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0930 21:07:30.796508   73375 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:30.796521   73375 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:30.796538   73375 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:30.796543   73375 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:30.796610   73375 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:30.796616   73375 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:30.796611   73375 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0930 21:07:30.796665   73375 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.018683   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0930 21:07:31.028097   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.117252   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.131998   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.136871   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.140418   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.170883   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.171059   73375 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0930 21:07:31.171098   73375 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.171142   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.172908   73375 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0930 21:07:31.172951   73375 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.172994   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.242489   73375 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0930 21:07:31.242541   73375 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.242609   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.246685   73375 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0930 21:07:31.246731   73375 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.246758   73375 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0930 21:07:31.246778   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.246794   73375 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.246837   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.270923   73375 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0930 21:07:31.270971   73375 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.271024   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.271030   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.271100   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.271109   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.271207   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.271269   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.387993   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.388011   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.388044   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.388091   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.388150   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.388230   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.523098   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.523156   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.523300   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.523344   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.523467   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.623696   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0930 21:07:31.623759   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.623778   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0930 21:07:31.623794   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0930 21:07:31.623869   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0930 21:07:31.632927   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0930 21:07:31.633014   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0930 21:07:31.633117   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.633206   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0930 21:07:31.633269   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0930 21:07:31.648925   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0930 21:07:31.648945   73375 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0930 21:07:31.648983   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0930 21:07:31.676886   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0930 21:07:31.676925   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0930 21:07:31.709210   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0930 21:07:31.709287   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0930 21:07:31.709331   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0930 21:07:31.709394   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0930 21:07:31.709330   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0930 21:07:32.112418   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:33.634620   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.985614953s)
	I0930 21:07:33.634656   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0930 21:07:33.634702   73375 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (1.925342294s)
	I0930 21:07:33.634716   73375 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0930 21:07:33.634731   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0930 21:07:33.634771   73375 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.925359685s)
	I0930 21:07:33.634779   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0930 21:07:33.634782   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0930 21:07:33.634853   73375 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.522405881s)
	I0930 21:07:33.634891   73375 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0930 21:07:33.634913   73375 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:33.634961   73375 ssh_runner.go:195] Run: which crictl
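[editor's sketch] With no preload tarball for this profile, each image above is handled individually: inspect it in the runtime, remove any wrong-hash copy with crictl, then stream the cached tarball in with podman load. One image's worth of that loop, with paths taken from the log:

    IMG=registry.k8s.io/kube-proxy:v1.31.1
    TAR=/var/lib/minikube/images/kube-proxy_v1.31.1
    # is the image already present, and with which ID?
    sudo podman image inspect --format '{{.Id}}' "$IMG" || true
    # drop a stale copy (ignore "not found") and load the cached tarball
    sudo crictl rmi "$IMG" 2>/dev/null || true
    sudo podman load -i "$TAR"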
	I0930 21:07:30.643828   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting to get IP...
	I0930 21:07:30.644936   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:30.645382   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:30.645484   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:30.645381   74769 retry.go:31] will retry after 216.832119ms: waiting for machine to come up
	I0930 21:07:30.863953   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:30.864583   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:30.864614   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:30.864518   74769 retry.go:31] will retry after 280.448443ms: waiting for machine to come up
	I0930 21:07:31.147184   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.147792   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.147826   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:31.147728   74769 retry.go:31] will retry after 345.517763ms: waiting for machine to come up
	I0930 21:07:31.495391   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.495819   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.495841   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:31.495786   74769 retry.go:31] will retry after 457.679924ms: waiting for machine to come up
	I0930 21:07:31.955479   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.955943   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.955974   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:31.955897   74769 retry.go:31] will retry after 562.95605ms: waiting for machine to come up
	I0930 21:07:32.520890   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:32.521339   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:32.521368   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:32.521285   74769 retry.go:31] will retry after 743.560182ms: waiting for machine to come up
	I0930 21:07:33.266407   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:33.266914   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:33.266941   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:33.266853   74769 retry.go:31] will retry after 947.444427ms: waiting for machine to come up
	I0930 21:07:34.216195   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:34.216705   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:34.216731   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:34.216659   74769 retry.go:31] will retry after 1.186059526s: waiting for machine to come up
	I0930 21:07:35.714633   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.079826486s)
	I0930 21:07:35.714667   73375 ssh_runner.go:235] Completed: which crictl: (2.079690884s)
	I0930 21:07:35.714721   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:35.714670   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0930 21:07:35.714786   73375 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0930 21:07:35.714821   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0930 21:07:35.753242   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:39.088354   73375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.335055656s)
	I0930 21:07:39.088395   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.373547177s)
	I0930 21:07:39.088422   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0930 21:07:39.088458   73375 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0930 21:07:39.088536   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0930 21:07:39.088459   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:35.404773   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:35.405334   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:35.405359   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:35.405225   74769 retry.go:31] will retry after 1.575803783s: waiting for machine to come up
	I0930 21:07:36.983196   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:36.983730   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:36.983759   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:36.983677   74769 retry.go:31] will retry after 2.020561586s: waiting for machine to come up
	I0930 21:07:39.006915   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:39.007304   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:39.007334   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:39.007269   74769 retry.go:31] will retry after 2.801421878s: waiting for machine to come up
	I0930 21:07:41.074012   73375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.985398095s)
	I0930 21:07:41.074061   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0930 21:07:41.074154   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.985588774s)
	I0930 21:07:41.074183   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0930 21:07:41.074202   73375 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0930 21:07:41.074244   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0930 21:07:41.074166   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0930 21:07:42.972016   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.897745882s)
	I0930 21:07:42.972055   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0930 21:07:42.972083   73375 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.8977868s)
	I0930 21:07:42.972110   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0930 21:07:42.972086   73375 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0930 21:07:42.972155   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0930 21:07:44.835190   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.863005436s)
	I0930 21:07:44.835237   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0930 21:07:44.835263   73375 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0930 21:07:44.835334   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0930 21:07:41.810719   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:41.811099   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:41.811117   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:41.811050   74769 retry.go:31] will retry after 2.703489988s: waiting for machine to come up
	I0930 21:07:44.515949   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:44.516329   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:44.516356   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:44.516276   74769 retry.go:31] will retry after 4.001267434s: waiting for machine to come up
	I0930 21:07:49.889033   73900 start.go:364] duration metric: took 4m7.028659379s to acquireMachinesLock for "old-k8s-version-621406"
	I0930 21:07:49.889104   73900 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:07:49.889111   73900 fix.go:54] fixHost starting: 
	I0930 21:07:49.889542   73900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:49.889600   73900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:49.906767   73900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43385
	I0930 21:07:49.907283   73900 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:49.907856   73900 main.go:141] libmachine: Using API Version  1
	I0930 21:07:49.907889   73900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:49.908203   73900 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:49.908397   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:07:49.908542   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetState
	I0930 21:07:49.910270   73900 fix.go:112] recreateIfNeeded on old-k8s-version-621406: state=Stopped err=<nil>
	I0930 21:07:49.910306   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	W0930 21:07:49.910441   73900 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:07:49.912646   73900 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-621406" ...
	I0930 21:07:45.483728   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0930 21:07:45.483778   73375 cache_images.go:123] Successfully loaded all cached images
	I0930 21:07:45.483785   73375 cache_images.go:92] duration metric: took 14.689240439s to LoadCachedImages
	I0930 21:07:45.483799   73375 kubeadm.go:934] updating node { 192.168.61.93 8443 v1.31.1 crio true true} ...
	I0930 21:07:45.483898   73375 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-997816 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.93
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-997816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
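[editor's sketch] The [Unit]/[Service] fragment above is what lands as the kubelet systemd drop-in (the log copies it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down). Writing and activating it by hand would look roughly like this:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    printf '%s\n' \
      '[Unit]' \
      'Wants=crio.service' \
      '' \
      '[Service]' \
      'ExecStart=' \
      'ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-997816 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.93' \
      '' \
      '[Install]' | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
    sudo systemctl daemon-reload && sudo systemctl start kubelet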
	I0930 21:07:45.483977   73375 ssh_runner.go:195] Run: crio config
	I0930 21:07:45.529537   73375 cni.go:84] Creating CNI manager for ""
	I0930 21:07:45.529558   73375 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:07:45.529567   73375 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:07:45.529591   73375 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.93 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-997816 NodeName:no-preload-997816 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.93"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.93 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 21:07:45.529713   73375 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.93
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-997816"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.93
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.93"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
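For reference: the config rendered above is uploaded as /var/tmp/minikube/kubeadm.yaml.new and diffed against the active /var/tmp/minikube/kubeadm.yaml a few steps later. A minimal sketch of running that comparison by hand on the node (paths as in the log; illustrative only, not part of the test run):

    # compare the freshly rendered kubeadm config with the copy already on the node;
    # diff exits 0 when they match, i.e. no control-plane reconfiguration is needed
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      && echo "configs match" || echo "config drift (or missing file)"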
	
	I0930 21:07:45.529775   73375 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 21:07:45.540251   73375 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:07:45.540323   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:07:45.549622   73375 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0930 21:07:45.565425   73375 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:07:45.580646   73375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0930 21:07:45.596216   73375 ssh_runner.go:195] Run: grep 192.168.61.93	control-plane.minikube.internal$ /etc/hosts
	I0930 21:07:45.604940   73375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.93	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:07:45.620809   73375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:45.751327   73375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:07:45.768664   73375 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816 for IP: 192.168.61.93
	I0930 21:07:45.768687   73375 certs.go:194] generating shared ca certs ...
	I0930 21:07:45.768702   73375 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:07:45.768896   73375 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:07:45.768953   73375 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:07:45.768967   73375 certs.go:256] generating profile certs ...
	I0930 21:07:45.769081   73375 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/client.key
	I0930 21:07:45.769188   73375 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/apiserver.key.c7192a03
	I0930 21:07:45.769251   73375 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/proxy-client.key
	I0930 21:07:45.769422   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:07:45.769468   73375 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:07:45.769483   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:07:45.769527   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:07:45.769569   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:07:45.769603   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:07:45.769672   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:07:45.770679   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:07:45.809391   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:07:45.837624   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:07:45.878472   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:07:45.909163   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0930 21:07:45.950655   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 21:07:45.974391   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:07:45.997258   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 21:07:46.019976   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:07:46.042828   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:07:46.066625   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:07:46.089639   73375 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:07:46.106202   73375 ssh_runner.go:195] Run: openssl version
	I0930 21:07:46.111810   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:07:46.122379   73375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:07:46.126659   73375 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:07:46.126699   73375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:07:46.132363   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 21:07:46.143074   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:07:46.154060   73375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:07:46.158542   73375 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:07:46.158602   73375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:07:46.164210   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:07:46.175160   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:07:46.186326   73375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:46.190782   73375 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:46.190856   73375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:46.196356   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 21:07:46.206957   73375 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:07:46.211650   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:07:46.217398   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:07:46.223566   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:07:46.230204   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:07:46.236404   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:07:46.242282   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
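The six openssl runs above all apply the same pattern: -checkend 86400 makes openssl exit non-zero if the certificate expires within the next 24 hours. A stand-alone sketch of the same freshness check as a loop (certificate paths copied from the log; illustrative only):

    # fail loudly if any control-plane certificate expires within 24h (86400s)
    for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
               /var/lib/minikube/certs/apiserver-etcd-client.crt \
               /var/lib/minikube/certs/etcd/server.crt \
               /var/lib/minikube/certs/etcd/healthcheck-client.crt \
               /var/lib/minikube/certs/etcd/peer.crt \
               /var/lib/minikube/certs/front-proxy-client.crt; do
      sudo openssl x509 -noout -in "$crt" -checkend 86400 || echo "expiring soon: $crt"
    done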
	I0930 21:07:46.248591   73375 kubeadm.go:392] StartCluster: {Name:no-preload-997816 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-997816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:07:46.248686   73375 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:07:46.248731   73375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:07:46.292355   73375 cri.go:89] found id: ""
	I0930 21:07:46.292435   73375 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:07:46.303578   73375 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:07:46.303598   73375 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:07:46.303668   73375 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:07:46.314544   73375 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:07:46.315643   73375 kubeconfig.go:125] found "no-preload-997816" server: "https://192.168.61.93:8443"
	I0930 21:07:46.318243   73375 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:07:46.329751   73375 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.93
	I0930 21:07:46.329781   73375 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:07:46.329791   73375 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:07:46.329837   73375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:07:46.364302   73375 cri.go:89] found id: ""
	I0930 21:07:46.364392   73375 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:07:46.384616   73375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:07:46.395855   73375 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:07:46.395875   73375 kubeadm.go:157] found existing configuration files:
	
	I0930 21:07:46.395915   73375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:07:46.405860   73375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:07:46.405918   73375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:07:46.416618   73375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:07:46.426654   73375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:07:46.426712   73375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:07:46.435880   73375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:07:46.446273   73375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:07:46.446346   73375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:07:46.457099   73375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:07:46.467322   73375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:07:46.467386   73375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:07:46.477809   73375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:07:46.489024   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:46.605127   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:47.509287   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:47.708716   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:47.780830   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:47.883843   73375 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:07:47.883940   73375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:48.384688   73375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:48.884008   73375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:48.925804   73375 api_server.go:72] duration metric: took 1.041960261s to wait for apiserver process to appear ...
	I0930 21:07:48.925833   73375 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:07:48.925857   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
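The healthz wait above polls https://192.168.61.93:8443/healthz anonymously; until the restarted apiserver has reconciled the bootstrap RBAC that allows unauthenticated /healthz reads (the system:public-info-viewer role), the probe comes back 403, as the responses further down in this log show. A sketch of the same probe from a shell (endpoint as in the log; -k skips TLS verification; illustrative only):

    # anonymous healthz probe; the second call prints just the HTTP status code
    curl -sk https://192.168.61.93:8443/healthz
    curl -sk -o /dev/null -w '%{http_code}\n' https://192.168.61.93:8443/healthz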
	I0930 21:07:48.521282   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.521838   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Found IP for machine: 192.168.50.2
	I0930 21:07:48.521864   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Reserving static IP address...
	I0930 21:07:48.521876   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has current primary IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.522306   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Reserved static IP address: 192.168.50.2
	I0930 21:07:48.522349   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-291511", mac: "52:54:00:27:46:45", ip: "192.168.50.2"} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.522361   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for SSH to be available...
	I0930 21:07:48.522401   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | skip adding static IP to network mk-default-k8s-diff-port-291511 - found existing host DHCP lease matching {name: "default-k8s-diff-port-291511", mac: "52:54:00:27:46:45", ip: "192.168.50.2"}
	I0930 21:07:48.522427   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Getting to WaitForSSH function...
	I0930 21:07:48.525211   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.525641   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.525667   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.525827   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Using SSH client type: external
	I0930 21:07:48.525854   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa (-rw-------)
	I0930 21:07:48.525883   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:07:48.525900   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | About to run SSH command:
	I0930 21:07:48.525913   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | exit 0
	I0930 21:07:48.655656   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | SSH cmd err, output: <nil>: 
	I0930 21:07:48.656045   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetConfigRaw
	I0930 21:07:48.656789   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetIP
	I0930 21:07:48.659902   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.660358   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.660395   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.660586   73707 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/config.json ...
	I0930 21:07:48.660842   73707 machine.go:93] provisionDockerMachine start ...
	I0930 21:07:48.660866   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:48.661063   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:48.663782   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.664138   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.664165   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.664318   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:48.664567   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.664733   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.664868   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:48.665036   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:48.665283   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:48.665315   73707 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:07:48.776382   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:07:48.776414   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetMachineName
	I0930 21:07:48.776676   73707 buildroot.go:166] provisioning hostname "default-k8s-diff-port-291511"
	I0930 21:07:48.776711   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetMachineName
	I0930 21:07:48.776913   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:48.779952   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.780470   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.780516   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.780594   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:48.780773   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.780925   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.781080   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:48.781253   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:48.781457   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:48.781473   73707 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-291511 && echo "default-k8s-diff-port-291511" | sudo tee /etc/hostname
	I0930 21:07:48.913633   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-291511
	
	I0930 21:07:48.913724   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:48.916869   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.917280   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.917319   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.917501   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:48.917715   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.917882   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.918117   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:48.918296   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:48.918533   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:48.918562   73707 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-291511' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-291511/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-291511' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:07:49.048106   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:07:49.048141   73707 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:07:49.048182   73707 buildroot.go:174] setting up certificates
	I0930 21:07:49.048198   73707 provision.go:84] configureAuth start
	I0930 21:07:49.048212   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetMachineName
	I0930 21:07:49.048498   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetIP
	I0930 21:07:49.051299   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.051665   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.051702   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.051837   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.054211   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.054512   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.054540   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.054691   73707 provision.go:143] copyHostCerts
	I0930 21:07:49.054774   73707 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:07:49.054789   73707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:07:49.054866   73707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:07:49.054982   73707 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:07:49.054994   73707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:07:49.055021   73707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:07:49.055097   73707 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:07:49.055106   73707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:07:49.055130   73707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:07:49.055189   73707 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-291511 san=[127.0.0.1 192.168.50.2 default-k8s-diff-port-291511 localhost minikube]
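The server certificate generated here embeds the names listed in san=[...]. A small sketch for verifying them afterwards with openssl (path taken from the log line above; illustrative only):

    # list the Subject Alternative Names baked into the generated server certificate
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem \
      | grep -A1 "Subject Alternative Name"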
	I0930 21:07:49.239713   73707 provision.go:177] copyRemoteCerts
	I0930 21:07:49.239771   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:07:49.239796   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.242146   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.242468   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.242500   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.242663   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.242834   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.242982   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.243200   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:07:49.329405   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:07:49.358036   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0930 21:07:49.385742   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 21:07:49.409436   73707 provision.go:87] duration metric: took 361.22398ms to configureAuth
	I0930 21:07:49.409493   73707 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:07:49.409696   73707 config.go:182] Loaded profile config "default-k8s-diff-port-291511": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:07:49.409798   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.412572   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.412935   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.412975   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.413266   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.413476   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.413680   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.413821   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.414009   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:49.414199   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:49.414223   73707 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:07:49.635490   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:07:49.635553   73707 machine.go:96] duration metric: took 974.696002ms to provisionDockerMachine
	I0930 21:07:49.635567   73707 start.go:293] postStartSetup for "default-k8s-diff-port-291511" (driver="kvm2")
	I0930 21:07:49.635580   73707 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:07:49.635603   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.635954   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:07:49.635989   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.638867   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.639304   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.639340   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.639413   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.639631   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.639837   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.639995   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:07:49.728224   73707 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:07:49.732558   73707 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:07:49.732590   73707 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:07:49.732679   73707 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:07:49.732769   73707 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:07:49.732869   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:07:49.742783   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:07:49.766585   73707 start.go:296] duration metric: took 131.002562ms for postStartSetup
	I0930 21:07:49.766629   73707 fix.go:56] duration metric: took 20.430290493s for fixHost
	I0930 21:07:49.766652   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.769724   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.770143   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.770172   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.770461   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.770708   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.770872   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.771099   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.771240   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:49.771616   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:49.771636   73707 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:07:49.888863   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730469.865719956
	
	I0930 21:07:49.888889   73707 fix.go:216] guest clock: 1727730469.865719956
	I0930 21:07:49.888900   73707 fix.go:229] Guest: 2024-09-30 21:07:49.865719956 +0000 UTC Remote: 2024-09-30 21:07:49.76663417 +0000 UTC m=+259.507652750 (delta=99.085786ms)
	I0930 21:07:49.888943   73707 fix.go:200] guest clock delta is within tolerance: 99.085786ms
	I0930 21:07:49.888950   73707 start.go:83] releasing machines lock for "default-k8s-diff-port-291511", held for 20.552679126s
	I0930 21:07:49.888982   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.889242   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetIP
	I0930 21:07:49.892424   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.892817   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.892854   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.893030   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.893601   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.893780   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.893852   73707 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:07:49.893932   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.893934   73707 ssh_runner.go:195] Run: cat /version.json
	I0930 21:07:49.893985   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.896733   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.896843   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.897130   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.897179   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.897216   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.897233   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.897471   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.897478   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.897679   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.897686   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.897825   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.897834   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.897954   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:07:49.898097   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:07:50.022951   73707 ssh_runner.go:195] Run: systemctl --version
	I0930 21:07:50.029177   73707 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:07:50.186430   73707 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:07:50.193205   73707 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:07:50.193277   73707 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:07:50.211330   73707 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 21:07:50.211365   73707 start.go:495] detecting cgroup driver to use...
	I0930 21:07:50.211430   73707 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:07:50.227255   73707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:07:50.241404   73707 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:07:50.241468   73707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:07:50.257879   73707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:07:50.274595   73707 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:07:50.394354   73707 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:07:50.567503   73707 docker.go:233] disabling docker service ...
	I0930 21:07:50.567582   73707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:07:50.584390   73707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:07:50.600920   73707 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:07:50.742682   73707 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:07:50.882835   73707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:07:50.898340   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:07:50.919395   73707 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 21:07:50.919464   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.930773   73707 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:07:50.930846   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.941870   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.952633   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.964281   73707 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:07:50.977410   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.988423   73707 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:51.016091   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
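The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place. A sketch for inspecting the keys they touch; the expected values in the comments are inferred from the sed expressions, not captured from the node:

    # show the CRI-O drop-in keys edited above
    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected (roughly): pause_image = "registry.k8s.io/pause:3.10"
    #                     cgroup_manager = "cgroupfs"
    #                     conmon_cgroup = "pod"
    #                     default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]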
	I0930 21:07:51.027473   73707 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:07:51.037470   73707 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:07:51.037537   73707 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:07:51.056841   73707 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
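The modprobe and echo above are the standard kernel prerequisites for the bridge CNI: br_netfilter makes bridged pod traffic visible to iptables, and ip_forward enables routing between pods and the outside. A sketch of the same setup done manually (generic commands, not captured from this run):

    # kernel prerequisites for the bridge CNI used with CRI-O
    sudo modprobe br_netfilter
    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"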
	I0930 21:07:51.068163   73707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:51.205357   73707 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 21:07:51.305327   73707 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:07:51.305410   73707 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:07:51.311384   73707 start.go:563] Will wait 60s for crictl version
	I0930 21:07:51.311448   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:07:51.315965   73707 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:07:51.369329   73707 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 21:07:51.369417   73707 ssh_runner.go:195] Run: crio --version
	I0930 21:07:51.399897   73707 ssh_runner.go:195] Run: crio --version
	I0930 21:07:51.431075   73707 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 21:07:49.914747   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .Start
	I0930 21:07:49.914948   73900 main.go:141] libmachine: (old-k8s-version-621406) Ensuring networks are active...
	I0930 21:07:49.915796   73900 main.go:141] libmachine: (old-k8s-version-621406) Ensuring network default is active
	I0930 21:07:49.916225   73900 main.go:141] libmachine: (old-k8s-version-621406) Ensuring network mk-old-k8s-version-621406 is active
	I0930 21:07:49.916890   73900 main.go:141] libmachine: (old-k8s-version-621406) Getting domain xml...
	I0930 21:07:49.917688   73900 main.go:141] libmachine: (old-k8s-version-621406) Creating domain...
	I0930 21:07:51.277867   73900 main.go:141] libmachine: (old-k8s-version-621406) Waiting to get IP...
	I0930 21:07:51.279001   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:51.279451   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:51.279552   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:51.279437   74917 retry.go:31] will retry after 307.582619ms: waiting for machine to come up
	I0930 21:07:51.589030   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:51.589414   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:51.589445   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:51.589368   74917 retry.go:31] will retry after 370.683214ms: waiting for machine to come up
	I0930 21:07:51.961914   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:51.962474   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:51.962511   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:51.962415   74917 retry.go:31] will retry after 428.703419ms: waiting for machine to come up
	I0930 21:07:52.393154   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:52.393682   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:52.393750   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:52.393673   74917 retry.go:31] will retry after 514.254023ms: waiting for machine to come up
	I0930 21:07:52.334804   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:07:52.334846   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:07:52.334863   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:52.377601   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:07:52.377632   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:07:52.426784   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:52.473771   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:07:52.473811   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:07:52.926391   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:52.945122   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:07:52.945154   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:07:53.426295   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:53.434429   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:07:53.434464   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:07:53.926642   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:53.931501   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 200:
	ok
	I0930 21:07:53.940069   73375 api_server.go:141] control plane version: v1.31.1
	I0930 21:07:53.940104   73375 api_server.go:131] duration metric: took 5.014262318s to wait for apiserver health ...
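
The healthz exchange above shows the apiserver first returning 403 for the anonymous probe, then 500 while individual poststarthooks are still failing, and finally 200 once bootstrap completes. A minimal Go sketch of that kind of health poll, assuming only that 200 means healthy and any other response means retry, is below; it is not minikube's code, and TLS verification is skipped purely to keep the sketch short (the real flow trusts the cluster CA).

    // Illustrative sketch: poll an apiserver /healthz endpoint until it
    // returns HTTP 200, treating 403/500 as "not ready yet".
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver reports healthy
                }
            }
            time.Sleep(500 * time.Millisecond) // 403/500 or network error: retry
        }
        return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.61.93:8443/healthz", 2*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("ok")
    }
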
	I0930 21:07:53.940115   73375 cni.go:84] Creating CNI manager for ""
	I0930 21:07:53.940123   73375 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:07:53.941879   73375 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 21:07:53.943335   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:07:53.959585   73375 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 21:07:53.996310   73375 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:07:54.010070   73375 system_pods.go:59] 8 kube-system pods found
	I0930 21:07:54.010129   73375 system_pods.go:61] "coredns-7c65d6cfc9-jg8ph" [46ba2867-485a-4b67-af4b-4de2c607d172] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:07:54.010142   73375 system_pods.go:61] "etcd-no-preload-997816" [1def50bb-1f1b-4d25-b797-38d5b782a674] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0930 21:07:54.010157   73375 system_pods.go:61] "kube-apiserver-no-preload-997816" [67313588-adcb-4d3f-ba8a-4e7a1ea5127b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0930 21:07:54.010174   73375 system_pods.go:61] "kube-controller-manager-no-preload-997816" [b471888b-d4e6-4768-a246-f234ffcbf1c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0930 21:07:54.010186   73375 system_pods.go:61] "kube-proxy-klcv8" [133bcd7f-667d-4969-b063-d33e2c8eed0f] Running
	I0930 21:07:54.010200   73375 system_pods.go:61] "kube-scheduler-no-preload-997816" [130a7a05-0889-4562-afc6-bee3ba4970a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0930 21:07:54.010212   73375 system_pods.go:61] "metrics-server-6867b74b74-c2wpn" [2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:07:54.010223   73375 system_pods.go:61] "storage-provisioner" [01617edf-b831-48d3-9002-279b64f6389c] Running
	I0930 21:07:54.010232   73375 system_pods.go:74] duration metric: took 13.897885ms to wait for pod list to return data ...
	I0930 21:07:54.010244   73375 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:07:54.019651   73375 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:07:54.019683   73375 node_conditions.go:123] node cpu capacity is 2
	I0930 21:07:54.019697   73375 node_conditions.go:105] duration metric: took 9.446744ms to run NodePressure ...
	I0930 21:07:54.019719   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:54.314348   73375 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0930 21:07:54.319583   73375 kubeadm.go:739] kubelet initialised
	I0930 21:07:54.319613   73375 kubeadm.go:740] duration metric: took 5.232567ms waiting for restarted kubelet to initialise ...
	I0930 21:07:54.319625   73375 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:07:54.326866   73375 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.333592   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.333628   73375 pod_ready.go:82] duration metric: took 6.72431ms for pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.333640   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.333651   73375 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.340155   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "etcd-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.340194   73375 pod_ready.go:82] duration metric: took 6.533127ms for pod "etcd-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.340208   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "etcd-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.340216   73375 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.346494   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "kube-apiserver-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.346530   73375 pod_ready.go:82] duration metric: took 6.304143ms for pod "kube-apiserver-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.346542   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "kube-apiserver-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.346551   73375 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.403699   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.403731   73375 pod_ready.go:82] duration metric: took 57.168471ms for pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.403743   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.403752   73375 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-klcv8" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.800372   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "kube-proxy-klcv8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.800410   73375 pod_ready.go:82] duration metric: took 396.646883ms for pod "kube-proxy-klcv8" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.800423   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "kube-proxy-klcv8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.800432   73375 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:51.432761   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetIP
	I0930 21:07:51.436278   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:51.436659   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:51.436700   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:51.436931   73707 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0930 21:07:51.441356   73707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:07:51.454358   73707 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-291511 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-291511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:07:51.454484   73707 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 21:07:51.454547   73707 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:07:51.502072   73707 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 21:07:51.502143   73707 ssh_runner.go:195] Run: which lz4
	I0930 21:07:51.506458   73707 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 21:07:51.510723   73707 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 21:07:51.510756   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 21:07:52.792488   73707 crio.go:462] duration metric: took 1.286075452s to copy over tarball
	I0930 21:07:52.792580   73707 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 21:07:55.207282   73707 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.414661305s)
	I0930 21:07:55.207314   73707 crio.go:469] duration metric: took 2.414793514s to extract the tarball
	I0930 21:07:55.207321   73707 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 21:07:55.244001   73707 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:07:55.287097   73707 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 21:07:55.287124   73707 cache_images.go:84] Images are preloaded, skipping loading
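
The preload step above stats /preloaded.tar.lz4, copies the cached tarball over when it is missing, unpacks it with tar -I lz4 into /var, and then confirms all images are present. A hedged Go sketch of that extraction, simply shelling out to the same tar invocation the log records (it assumes tar, lz4 and sudo are available on the guest), follows:

    // Illustrative sketch: unpack an lz4-compressed image preload into /var,
    // mirroring the command the log runs over SSH. Not minikube's code.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func extractPreload(tarball, dest string) error {
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", dest, "-xf", tarball)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("tar failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
            panic(err)
        }
        fmt.Println("preloaded images extracted")
    }
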
	I0930 21:07:55.287133   73707 kubeadm.go:934] updating node { 192.168.50.2 8444 v1.31.1 crio true true} ...
	I0930 21:07:55.287277   73707 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-291511 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-291511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 21:07:55.287384   73707 ssh_runner.go:195] Run: crio config
	I0930 21:07:55.200512   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "kube-scheduler-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:55.200559   73375 pod_ready.go:82] duration metric: took 400.11341ms for pod "kube-scheduler-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:55.200569   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "kube-scheduler-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:55.200577   73375 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:55.601008   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:55.601042   73375 pod_ready.go:82] duration metric: took 400.453601ms for pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:55.601055   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:55.601065   73375 pod_ready.go:39] duration metric: took 1.281429189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
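
The pod_ready loop above repeatedly skips system pods because the hosting node is not yet "Ready". As an illustration only (not minikube's implementation), a client-go based check of a pod's Ready condition could look like the sketch below; the kubeconfig path and pod name are reused from the log, and the 4m budget mirrors the "waiting up to 4m0s" messages above.

    // Illustrative sketch: poll a pod's Ready condition with client-go.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-997816", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to become Ready")
    }
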
	I0930 21:07:55.601086   73375 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 21:07:55.617767   73375 ops.go:34] apiserver oom_adj: -16
	I0930 21:07:55.617791   73375 kubeadm.go:597] duration metric: took 9.314187459s to restartPrimaryControlPlane
	I0930 21:07:55.617803   73375 kubeadm.go:394] duration metric: took 9.369220314s to StartCluster
	I0930 21:07:55.617824   73375 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:07:55.617913   73375 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:07:55.619455   73375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:07:55.619760   73375 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 21:07:55.619842   73375 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 21:07:55.619959   73375 addons.go:69] Setting storage-provisioner=true in profile "no-preload-997816"
	I0930 21:07:55.619984   73375 addons.go:234] Setting addon storage-provisioner=true in "no-preload-997816"
	I0930 21:07:55.619974   73375 addons.go:69] Setting default-storageclass=true in profile "no-preload-997816"
	I0930 21:07:55.620003   73375 addons.go:69] Setting metrics-server=true in profile "no-preload-997816"
	I0930 21:07:55.620009   73375 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-997816"
	I0930 21:07:55.620020   73375 addons.go:234] Setting addon metrics-server=true in "no-preload-997816"
	W0930 21:07:55.620031   73375 addons.go:243] addon metrics-server should already be in state true
	I0930 21:07:55.620050   73375 config.go:182] Loaded profile config "no-preload-997816": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:07:55.620061   73375 host.go:66] Checking if "no-preload-997816" exists ...
	W0930 21:07:55.619994   73375 addons.go:243] addon storage-provisioner should already be in state true
	I0930 21:07:55.620124   73375 host.go:66] Checking if "no-preload-997816" exists ...
	I0930 21:07:55.620420   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.620459   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.620494   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.620535   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.620593   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.620634   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.621682   73375 out.go:177] * Verifying Kubernetes components...
	I0930 21:07:55.623102   73375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:55.643690   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35581
	I0930 21:07:55.643895   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35545
	I0930 21:07:55.644411   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.644553   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.644968   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.644981   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.645072   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.645078   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.645314   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.645502   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.645732   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:55.645777   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.645812   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.649244   73375 addons.go:234] Setting addon default-storageclass=true in "no-preload-997816"
	W0930 21:07:55.649262   73375 addons.go:243] addon default-storageclass should already be in state true
	I0930 21:07:55.649283   73375 host.go:66] Checking if "no-preload-997816" exists ...
	I0930 21:07:55.649524   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.649548   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.671077   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42635
	I0930 21:07:55.671558   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.672193   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.672212   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.672505   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45163
	I0930 21:07:55.672736   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.672808   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44481
	I0930 21:07:55.673354   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.673396   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.673920   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.673926   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.674528   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.674545   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.674974   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.675624   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.675658   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.676078   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.676095   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.676547   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.676724   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:55.679115   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:55.681410   73375 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:55.688953   73375 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:07:55.688981   73375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 21:07:55.689015   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:55.693338   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.693996   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:55.694023   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.694212   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:55.694344   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:55.694444   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:55.694545   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:55.696037   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46075
	I0930 21:07:55.696535   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.697185   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.697207   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.697567   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.697772   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:55.699797   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:55.700998   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I0930 21:07:55.701429   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.702094   73375 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0930 21:07:52.909622   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:52.910169   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:52.910202   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:52.910132   74917 retry.go:31] will retry after 605.019848ms: waiting for machine to come up
	I0930 21:07:53.517276   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:53.517911   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:53.517943   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:53.517858   74917 retry.go:31] will retry after 856.018614ms: waiting for machine to come up
	I0930 21:07:54.376343   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:54.376838   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:54.376862   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:54.376794   74917 retry.go:31] will retry after 740.749778ms: waiting for machine to come up
	I0930 21:07:55.119090   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:55.119631   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:55.119660   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:55.119583   74917 retry.go:31] will retry after 1.444139076s: waiting for machine to come up
	I0930 21:07:56.566261   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:56.566744   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:56.566771   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:56.566695   74917 retry.go:31] will retry after 1.681362023s: waiting for machine to come up
	I0930 21:07:55.703687   73375 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 21:07:55.703709   73375 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 21:07:55.703736   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:55.703788   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.703816   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.704295   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.704553   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:55.707029   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:55.707365   73375 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 21:07:55.707385   73375 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 21:07:55.707408   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:55.708091   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.708606   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:55.708629   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.709024   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:55.709237   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:55.709388   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:55.709573   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:55.711123   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.711607   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:55.711631   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.711987   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:55.712178   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:55.712318   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:55.712469   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:55.888447   73375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:07:55.912060   73375 node_ready.go:35] waiting up to 6m0s for node "no-preload-997816" to be "Ready" ...
	I0930 21:07:56.010903   73375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 21:07:56.012576   73375 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 21:07:56.012601   73375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0930 21:07:56.038592   73375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:07:56.055481   73375 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 21:07:56.055513   73375 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 21:07:56.131820   73375 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:07:56.131844   73375 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 21:07:56.213605   73375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:07:57.078385   73375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.067447636s)
	I0930 21:07:57.078439   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:57.078451   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:57.078770   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:57.078823   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:57.078836   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:57.078845   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:57.078793   73375 main.go:141] libmachine: (no-preload-997816) DBG | Closing plugin on server side
	I0930 21:07:57.079118   73375 main.go:141] libmachine: (no-preload-997816) DBG | Closing plugin on server side
	I0930 21:07:57.079149   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:57.079157   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:57.672706   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:57.672737   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:57.673053   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:57.673072   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:58.301165   73375 node_ready.go:53] node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:59.072488   73375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.858837368s)
	I0930 21:07:59.072565   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:59.072582   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:59.072921   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:59.072986   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:59.073029   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:59.073038   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:59.073221   73375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.034599023s)
	I0930 21:07:59.073271   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:59.073344   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:59.073383   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:59.073397   73375 addons.go:475] Verifying addon metrics-server=true in "no-preload-997816"
	I0930 21:07:59.073347   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:59.073754   73375 main.go:141] libmachine: (no-preload-997816) DBG | Closing plugin on server side
	I0930 21:07:59.073804   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:59.073819   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:59.073834   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:59.073846   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:59.075323   73375 main.go:141] libmachine: (no-preload-997816) DBG | Closing plugin on server side
	I0930 21:07:59.075329   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:59.075353   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:59.077687   73375 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0930 21:07:59.079278   73375 addons.go:510] duration metric: took 3.459453938s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
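
The addon enablement above boils down to copying manifests into /etc/kubernetes/addons and running kubectl apply against them with KUBECONFIG pointed at the cluster. A minimal Go sketch of that apply step, using only os/exec and paths taken from the log, is shown here for illustration; it is not the minikube addons code itself.

    // Illustrative sketch: apply addon manifests with a specific kubectl
    // binary and kubeconfig, mirroring the commands logged above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // applyManifests runs "kubectl apply -f ..." for each manifest,
    // pointing kubectl at the given kubeconfig via the environment.
    func applyManifests(kubectl, kubeconfig string, manifests []string) error {
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command(kubectl, args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl apply failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/storage-provisioner.yaml",
            "/etc/kubernetes/addons/storageclass.yaml",
        }
        err := applyManifests("/var/lib/minikube/binaries/v1.31.1/kubectl",
            "/var/lib/minikube/kubeconfig", manifests)
        if err != nil {
            panic(err)
        }
        fmt.Println("addon manifests applied")
    }
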
	I0930 21:07:55.346656   73707 cni.go:84] Creating CNI manager for ""
	I0930 21:07:55.346679   73707 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:07:55.346688   73707 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:07:55.346718   73707 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.2 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-291511 NodeName:default-k8s-diff-port-291511 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 21:07:55.346847   73707 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-291511"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 21:07:55.346903   73707 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 21:07:55.356645   73707 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:07:55.356708   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:07:55.366457   73707 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0930 21:07:55.384639   73707 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:07:55.403208   73707 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
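	The scp above drops the generated multi-document config at /var/tmp/minikube/kubeadm.yaml.new on the node. For readers reproducing this step by hand, a minimal Go sketch that splits that stream and reports the kubelet settings shown in the dump above; it is illustrative only (not minikube's implementation) and assumes gopkg.in/yaml.v3 is available in the module:

	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"

		"gopkg.in/yaml.v3" // assumption: yaml.v3 is available
	)

	// kubeletDoc models only the fields we want to check from the
	// KubeletConfiguration document generated above.
	type kubeletDoc struct {
		Kind                     string `yaml:"kind"`
		CgroupDriver             string `yaml:"cgroupDriver"`
		ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	}

	func main() {
		// Path used by the scp above; the file is a multi-document YAML stream.
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
		for _, doc := range strings.Split(string(data), "\n---\n") {
			var k kubeletDoc
			if err := yaml.Unmarshal([]byte(doc), &k); err != nil {
				log.Fatal(err)
			}
			if k.Kind == "KubeletConfiguration" {
				fmt.Printf("cgroupDriver=%s runtime=%s\n", k.CgroupDriver, k.ContainerRuntimeEndpoint)
			}
		}
	}

	Run against the file above it should print the cgroupfs driver and the crio socket endpoint from the config dump.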
	I0930 21:07:55.421878   73707 ssh_runner.go:195] Run: grep 192.168.50.2	control-plane.minikube.internal$ /etc/hosts
	I0930 21:07:55.425803   73707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:07:55.439370   73707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:55.553575   73707 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:07:55.570754   73707 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511 for IP: 192.168.50.2
	I0930 21:07:55.570787   73707 certs.go:194] generating shared ca certs ...
	I0930 21:07:55.570808   73707 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:07:55.571011   73707 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:07:55.571067   73707 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:07:55.571083   73707 certs.go:256] generating profile certs ...
	I0930 21:07:55.571178   73707 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/client.key
	I0930 21:07:55.571270   73707 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/apiserver.key.2e3224d9
	I0930 21:07:55.571326   73707 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/proxy-client.key
	I0930 21:07:55.571464   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:07:55.571510   73707 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:07:55.571522   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:07:55.571587   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:07:55.571627   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:07:55.571655   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:07:55.571719   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:07:55.572367   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:07:55.606278   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:07:55.645629   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:07:55.690514   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:07:55.737445   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0930 21:07:55.773656   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 21:07:55.804015   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:07:55.830210   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 21:07:55.857601   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:07:55.887765   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:07:55.922053   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:07:55.951040   73707 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:07:55.969579   73707 ssh_runner.go:195] Run: openssl version
	I0930 21:07:55.975576   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:07:55.987255   73707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:07:55.993657   73707 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:07:55.993723   73707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:07:56.001878   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:07:56.017528   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:07:56.030398   73707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:56.035552   73707 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:56.035625   73707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:56.043878   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 21:07:56.055384   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:07:56.066808   73707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:07:56.073099   73707 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:07:56.073164   73707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:07:56.081343   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 21:07:56.096669   73707 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:07:56.102635   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:07:56.110805   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:07:56.118533   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:07:56.125800   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:07:56.133985   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:07:56.142109   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
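	The openssl -checkend 86400 runs above ask whether each certificate is still valid for at least another 24 hours. A standard-library Go equivalent for one of those files (a sketch; only the file path comes from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		// Same check as `openssl x509 -noout -in <crt> -checkend 86400`:
		// fail if the certificate is already expired or expires within 24h.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		deadline := time.Now().Add(24 * time.Hour)
		if cert.NotAfter.Before(deadline) {
			fmt.Printf("certificate expires too soon: NotAfter=%s\n", cert.NotAfter)
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least another 24h")
	}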
	I0930 21:07:56.150433   73707 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-291511 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-291511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:07:56.150538   73707 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:07:56.150608   73707 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:07:56.197936   73707 cri.go:89] found id: ""
	I0930 21:07:56.198016   73707 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:07:56.208133   73707 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:07:56.208155   73707 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:07:56.208204   73707 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:07:56.218880   73707 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:07:56.220322   73707 kubeconfig.go:125] found "default-k8s-diff-port-291511" server: "https://192.168.50.2:8444"
	I0930 21:07:56.223557   73707 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:07:56.233844   73707 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.2
	I0930 21:07:56.233876   73707 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:07:56.233889   73707 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:07:56.233970   73707 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:07:56.280042   73707 cri.go:89] found id: ""
	I0930 21:07:56.280129   73707 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:07:56.304291   73707 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:07:56.317987   73707 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:07:56.318012   73707 kubeadm.go:157] found existing configuration files:
	
	I0930 21:07:56.318076   73707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0930 21:07:56.331377   73707 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:07:56.331448   73707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:07:56.342380   73707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0930 21:07:56.354949   73707 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:07:56.355030   73707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:07:56.368385   73707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0930 21:07:56.378798   73707 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:07:56.378883   73707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:07:56.390167   73707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0930 21:07:56.400338   73707 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:07:56.400413   73707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:07:56.410735   73707 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:07:56.426910   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:56.557126   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:57.682738   73707 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.125574645s)
	I0930 21:07:57.682777   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:57.908684   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:57.983925   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:58.088822   73707 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:07:58.088930   73707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:58.589565   73707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:59.089483   73707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:59.110240   73707 api_server.go:72] duration metric: took 1.021416929s to wait for apiserver process to appear ...
	I0930 21:07:59.110279   73707 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:07:59.110328   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:07:59.110843   73707 api_server.go:269] stopped: https://192.168.50.2:8444/healthz: Get "https://192.168.50.2:8444/healthz": dial tcp 192.168.50.2:8444: connect: connection refused
	I0930 21:07:59.611045   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:07:58.250468   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:58.251041   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:58.251062   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:58.250979   74917 retry.go:31] will retry after 2.260492343s: waiting for machine to come up
	I0930 21:08:00.513613   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:00.514129   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:08:00.514194   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:08:00.514117   74917 retry.go:31] will retry after 2.449694064s: waiting for machine to come up
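	The retry.go lines above ("will retry after 2.26s", "will retry after 2.45s") come from a wait loop with a growing, jittered delay while the old-k8s-version VM acquires an IP. A generic Go sketch of that pattern (not minikube's retry package; the growth factor, jitter, and cap are assumptions):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry calls fn until it succeeds or attempts run out, sleeping an
	// increasing, jittered delay between tries -- the same shape as the
	// "will retry after ..." messages in the log above.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Exponential growth with up to +20% jitter, capped at 30s (assumed values).
			d := base * time.Duration(1<<i)
			if d > 30*time.Second {
				d = 30 * time.Second
			}
			d += time.Duration(rand.Int63n(int64(d) / 5))
			fmt.Printf("will retry after %s: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		err := retry(5, 500*time.Millisecond, func() error {
			return errors.New("waiting for machine to come up")
		})
		fmt.Println("gave up:", err)
	}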
	I0930 21:08:02.200888   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:08:02.200918   73707 api_server.go:103] status: https://192.168.50.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:08:02.200930   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:08:02.240477   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:08:02.240513   73707 api_server.go:103] status: https://192.168.50.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:08:02.611111   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:08:02.615548   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:02.615578   73707 api_server.go:103] status: https://192.168.50.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:03.111216   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:08:03.118078   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:03.118102   73707 api_server.go:103] status: https://192.168.50.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:03.610614   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:08:03.615203   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 200:
	ok
	I0930 21:08:03.621652   73707 api_server.go:141] control plane version: v1.31.1
	I0930 21:08:03.621680   73707 api_server.go:131] duration metric: took 4.511393989s to wait for apiserver health ...
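	The wait above polls https://192.168.50.2:8444/healthz until the 403 (anonymous access still forbidden) and 500 (rbac/bootstrap-roles and priority-class post-start hooks still running) responses turn into a 200. A small Go poller that reproduces the same loop; the endpoint comes from the log, while the TLS skip-verify, timeout, and interval are simplifications for the sketch:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// The test harness authenticates with the cluster CA; skipping
		// verification here only keeps the sketch self-contained.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.50.2:8444/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("%d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return
				}
			} else {
				fmt.Println("not reachable yet:", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver never became healthy")
	}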
	I0930 21:08:03.621689   73707 cni.go:84] Creating CNI manager for ""
	I0930 21:08:03.621694   73707 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:03.624026   73707 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 21:08:00.416356   73375 node_ready.go:53] node "no-preload-997816" has status "Ready":"False"
	I0930 21:08:02.416469   73375 node_ready.go:53] node "no-preload-997816" has status "Ready":"False"
	I0930 21:08:02.916643   73375 node_ready.go:49] node "no-preload-997816" has status "Ready":"True"
	I0930 21:08:02.916668   73375 node_ready.go:38] duration metric: took 7.004576501s for node "no-preload-997816" to be "Ready" ...
	I0930 21:08:02.916679   73375 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:02.922833   73375 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:02.928873   73375 pod_ready.go:93] pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:02.928895   73375 pod_ready.go:82] duration metric: took 6.034388ms for pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:02.928904   73375 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.934668   73375 pod_ready.go:103] pod "etcd-no-preload-997816" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:03.625416   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:08:03.640241   73707 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
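	The log only records that 496 bytes were written to /etc/cni/net.d/1-k8s.conflist; the payload itself is not shown. The sketch below writes a generic bridge+portmap conflist for the 10.244.0.0/16 pod CIDR used above; the JSON content and filename are assumptions for illustration, not the file minikube actually generated:

	package main

	import (
		"log"
		"os"
	)

	// conflist is an assumed example of a bridge + portmap CNI chain for the
	// 10.244.0.0/16 pod CIDR from the log, not minikube's template.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge-example",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge0",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	`

	func main() {
		// Writing to /etc/cni/net.d requires root; adjust the path when testing.
		if err := os.WriteFile("/etc/cni/net.d/99-bridge-example.conflist", []byte(conflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}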
	I0930 21:08:03.664231   73707 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:08:03.679372   73707 system_pods.go:59] 8 kube-system pods found
	I0930 21:08:03.679409   73707 system_pods.go:61] "coredns-7c65d6cfc9-hdjjq" [5672cd58-4d3f-409e-b279-f4027fe09aea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:08:03.679425   73707 system_pods.go:61] "etcd-default-k8s-diff-port-291511" [228b61a2-a110-4029-96e5-950e44f5290f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0930 21:08:03.679435   73707 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-291511" [a6991ee1-6c61-49b5-adb5-fb6175386bfe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0930 21:08:03.679447   73707 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-291511" [4ba3f2a2-ac38-4483-bbd0-f21d934d97d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0930 21:08:03.679456   73707 system_pods.go:61] "kube-proxy-kwp22" [87e5295f-3aaa-4222-a61a-942354f79f9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0930 21:08:03.679466   73707 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-291511" [b03fc09c-ddee-4593-9be5-8117892932f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0930 21:08:03.679472   73707 system_pods.go:61] "metrics-server-6867b74b74-txb2j" [6f0ec8d2-5528-4f70-807c-42cbabae23bb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:08:03.679482   73707 system_pods.go:61] "storage-provisioner" [32053345-1ff9-45b1-aa70-e746926b305d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0930 21:08:03.679490   73707 system_pods.go:74] duration metric: took 15.234407ms to wait for pod list to return data ...
	I0930 21:08:03.679509   73707 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:08:03.698332   73707 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:08:03.698363   73707 node_conditions.go:123] node cpu capacity is 2
	I0930 21:08:03.698374   73707 node_conditions.go:105] duration metric: took 18.857709ms to run NodePressure ...
	I0930 21:08:03.698394   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:03.968643   73707 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0930 21:08:03.974075   73707 kubeadm.go:739] kubelet initialised
	I0930 21:08:03.974098   73707 kubeadm.go:740] duration metric: took 5.424573ms waiting for restarted kubelet to initialise ...
	I0930 21:08:03.974105   73707 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:03.982157   73707 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:03.989298   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:03.989329   73707 pod_ready.go:82] duration metric: took 7.140381ms for pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:03.989338   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:03.989345   73707 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:03.995739   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:03.995773   73707 pod_ready.go:82] duration metric: took 6.418854ms for pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:03.995787   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:03.995797   73707 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.002071   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.002093   73707 pod_ready.go:82] duration metric: took 6.287919ms for pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:04.002104   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.002110   73707 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.071732   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.071760   73707 pod_ready.go:82] duration metric: took 69.643681ms for pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:04.071771   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.071777   73707 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kwp22" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.468580   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "kube-proxy-kwp22" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.468605   73707 pod_ready.go:82] duration metric: took 396.820558ms for pod "kube-proxy-kwp22" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:04.468614   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "kube-proxy-kwp22" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.468620   73707 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.868042   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.868067   73707 pod_ready.go:82] duration metric: took 399.438278ms for pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:04.868078   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.868085   73707 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:05.267893   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:05.267925   73707 pod_ready.go:82] duration metric: took 399.831615ms for pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:05.267937   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:05.267945   73707 pod_ready.go:39] duration metric: took 1.293832472s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
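	Each pod_ready wait above is a poll of the pod's Ready condition through the API server. A minimal client-go sketch of the same check (illustrative; the pod name and kubeconfig path are taken from this log, the poll interval is an assumption):

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the named pod has condition Ready=True.
	func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19736-7672/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx := context.Background()
		for {
			ok, err := podReady(ctx, cs, "kube-system", "etcd-default-k8s-diff-port-291511")
			if err == nil && ok {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}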
	I0930 21:08:05.267960   73707 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 21:08:05.282162   73707 ops.go:34] apiserver oom_adj: -16
	I0930 21:08:05.282188   73707 kubeadm.go:597] duration metric: took 9.074027172s to restartPrimaryControlPlane
	I0930 21:08:05.282199   73707 kubeadm.go:394] duration metric: took 9.131777336s to StartCluster
	I0930 21:08:05.282216   73707 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:05.282338   73707 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:08:05.283862   73707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:05.284135   73707 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 21:08:05.284201   73707 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 21:08:05.284287   73707 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-291511"
	I0930 21:08:05.284305   73707 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-291511"
	W0930 21:08:05.284313   73707 addons.go:243] addon storage-provisioner should already be in state true
	I0930 21:08:05.284340   73707 host.go:66] Checking if "default-k8s-diff-port-291511" exists ...
	I0930 21:08:05.284339   73707 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-291511"
	I0930 21:08:05.284385   73707 config.go:182] Loaded profile config "default-k8s-diff-port-291511": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:08:05.284399   73707 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-291511"
	I0930 21:08:05.284359   73707 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-291511"
	I0930 21:08:05.284432   73707 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-291511"
	W0930 21:08:05.284448   73707 addons.go:243] addon metrics-server should already be in state true
	I0930 21:08:05.284486   73707 host.go:66] Checking if "default-k8s-diff-port-291511" exists ...
	I0930 21:08:05.284739   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.284760   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.284784   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.284794   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.284890   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.284931   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.286020   73707 out.go:177] * Verifying Kubernetes components...
	I0930 21:08:05.287268   73707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:05.302045   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39289
	I0930 21:08:05.302587   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.303190   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.303219   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.303631   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.304213   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.304258   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.304484   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41129
	I0930 21:08:05.304676   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39211
	I0930 21:08:05.304884   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.305175   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.305353   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.305377   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.305642   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.305660   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.305724   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.305933   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:08:05.306016   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.306580   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.306623   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.309757   73707 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-291511"
	W0930 21:08:05.309778   73707 addons.go:243] addon default-storageclass should already be in state true
	I0930 21:08:05.309805   73707 host.go:66] Checking if "default-k8s-diff-port-291511" exists ...
	I0930 21:08:05.310163   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.310208   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.320335   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43189
	I0930 21:08:05.320928   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.321496   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.321520   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.321922   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.322082   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:08:05.324111   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:08:05.325867   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42389
	I0930 21:08:05.325879   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37397
	I0930 21:08:05.326252   73707 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0930 21:08:05.326337   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.326280   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.326847   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.326862   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.326982   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.326999   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.327239   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.327313   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.327467   73707 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 21:08:05.327485   73707 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 21:08:05.327507   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:08:05.327597   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:08:05.327778   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.327806   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.329862   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:08:05.331454   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.331654   73707 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:05.331959   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:08:05.331996   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.332184   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:08:05.332355   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:08:05.332577   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:08:05.332699   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:08:05.332956   73707 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:08:05.332972   73707 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 21:08:05.332990   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:08:05.336234   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.336634   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:08:05.336661   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.336885   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:08:05.337134   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:08:05.337271   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:08:05.337447   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:08:05.345334   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34613
	I0930 21:08:05.345908   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.346393   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.346424   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.346749   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.346887   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:08:05.348836   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:08:05.349033   73707 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 21:08:05.349048   73707 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 21:08:05.349067   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:08:05.351835   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.352222   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:08:05.352277   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.352401   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:08:05.352644   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:08:05.352786   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:08:05.352886   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:08:05.475274   73707 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:08:05.496035   73707 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-291511" to be "Ready" ...
	I0930 21:08:05.564715   73707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:08:05.574981   73707 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 21:08:05.575006   73707 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0930 21:08:05.613799   73707 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 21:08:05.613822   73707 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 21:08:05.618503   73707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 21:08:05.689563   73707 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:08:05.689588   73707 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 21:08:05.769327   73707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:08:06.831657   73707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.266911261s)
	I0930 21:08:06.831717   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.831727   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.831735   73707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.213199657s)
	I0930 21:08:06.831780   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.831797   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.832054   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.832071   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.832079   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.832086   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.832146   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.832164   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.832182   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.832195   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.832203   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.832291   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.832305   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.832316   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.832477   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.832483   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.832512   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.838509   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.838534   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.838786   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.838801   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.838806   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.956747   73707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.187373699s)
	I0930 21:08:06.956803   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.956819   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.957097   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.958516   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.958531   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.958542   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.958548   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.958842   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.958863   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.958873   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.958875   73707 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-291511"
	I0930 21:08:06.961299   73707 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
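(Aside, not part of the test log: the addon flow above is just manifests copied into /etc/kubernetes/addons/ and applied with the bundled kubectl. Assuming the usual minikube object names (metrics-server, storage-provisioner), a rough manual check of the result would be:)

    # deployment created by the metrics-server manifests
    kubectl --context default-k8s-diff-port-291511 -n kube-system get deployment metrics-server
    # default StorageClass created by storageclass.yaml
    kubectl --context default-k8s-diff-port-291511 get storageclass
    # storage-provisioner pod installed by storage-provisioner.yaml
    kubectl --context default-k8s-diff-port-291511 -n kube-system get pod storage-provisioner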
	I0930 21:08:02.965767   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:02.966135   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:08:02.966157   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:08:02.966086   74917 retry.go:31] will retry after 2.951226221s: waiting for machine to come up
	I0930 21:08:05.919389   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:05.919894   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:08:05.919937   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:08:05.919827   74917 retry.go:31] will retry after 2.747969391s: waiting for machine to come up
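(Aside: the retries above are the kvm2 driver waiting for the freshly started domain to pick up a DHCP lease on its dedicated libvirt network. The same information can be inspected by hand with virsh; illustrative commands only, using the domain and network names from this log:)

    # MAC addresses attached to the domain
    sudo virsh domiflist old-k8s-version-621406
    # DHCP leases handed out on the minikube-created network
    sudo virsh net-dhcp-leases mk-old-k8s-version-621406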
	I0930 21:08:09.916514   73256 start.go:364] duration metric: took 52.875691449s to acquireMachinesLock for "embed-certs-256103"
	I0930 21:08:09.916583   73256 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:08:09.916592   73256 fix.go:54] fixHost starting: 
	I0930 21:08:09.916972   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:09.917000   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:09.935009   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42043
	I0930 21:08:09.935493   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:09.936052   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:08:09.936073   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:09.936443   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:09.936617   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:09.936762   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:08:09.938608   73256 fix.go:112] recreateIfNeeded on embed-certs-256103: state=Stopped err=<nil>
	I0930 21:08:09.938639   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	W0930 21:08:09.938811   73256 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:08:09.940789   73256 out.go:177] * Restarting existing kvm2 VM for "embed-certs-256103" ...
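(Aside: fixHost found the existing domain in state Stopped, so it restarts it. A virsh-level sketch of the same operation, for illustration only, the driver talks to the libvirt API rather than shelling out to virsh:)

    sudo virsh domstate embed-certs-256103   # "shut off" for a stopped VM
    sudo virsh start embed-certs-256103      # roughly what .Start asks libvirt to do
    sudo virsh domstate embed-certs-256103   # should now report "running"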
	I0930 21:08:05.936626   73375 pod_ready.go:93] pod "etcd-no-preload-997816" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:05.936660   73375 pod_ready.go:82] duration metric: took 3.007747597s for pod "etcd-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:05.936674   73375 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:05.942154   73375 pod_ready.go:93] pod "kube-apiserver-no-preload-997816" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:05.942196   73375 pod_ready.go:82] duration metric: took 5.502965ms for pod "kube-apiserver-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:05.942209   73375 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.949366   73375 pod_ready.go:93] pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:06.949402   73375 pod_ready.go:82] duration metric: took 1.007183809s for pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.949413   73375 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-klcv8" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.955060   73375 pod_ready.go:93] pod "kube-proxy-klcv8" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:06.955088   73375 pod_ready.go:82] duration metric: took 5.667172ms for pod "kube-proxy-klcv8" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.955100   73375 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.961684   73375 pod_ready.go:93] pod "kube-scheduler-no-preload-997816" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:06.961706   73375 pod_ready.go:82] duration metric: took 6.597856ms for pod "kube-scheduler-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.961718   73375 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:08.967525   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
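(Aside: pod_ready is polling each pod's Ready condition. The equivalent manual check against this profile, shown only as an illustration, would be:)

    kubectl --context no-preload-997816 -n kube-system get pod metrics-server-6867b74b74-c2wpn \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints "False" until the pod's containers pass their readiness checks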
	I0930 21:08:06.962594   73707 addons.go:510] duration metric: took 1.678396512s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0930 21:08:07.499805   73707 node_ready.go:53] node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:09.500771   73707 node_ready.go:53] node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:08.671179   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.671686   73900 main.go:141] libmachine: (old-k8s-version-621406) Found IP for machine: 192.168.72.159
	I0930 21:08:08.671711   73900 main.go:141] libmachine: (old-k8s-version-621406) Reserving static IP address...
	I0930 21:08:08.671729   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has current primary IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.672178   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "old-k8s-version-621406", mac: "52:54:00:9b:e3:ab", ip: "192.168.72.159"} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.672220   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | skip adding static IP to network mk-old-k8s-version-621406 - found existing host DHCP lease matching {name: "old-k8s-version-621406", mac: "52:54:00:9b:e3:ab", ip: "192.168.72.159"}
	I0930 21:08:08.672231   73900 main.go:141] libmachine: (old-k8s-version-621406) Reserved static IP address: 192.168.72.159
	I0930 21:08:08.672246   73900 main.go:141] libmachine: (old-k8s-version-621406) Waiting for SSH to be available...
	I0930 21:08:08.672254   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | Getting to WaitForSSH function...
	I0930 21:08:08.674566   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.674931   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.674969   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.675128   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | Using SSH client type: external
	I0930 21:08:08.675170   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa (-rw-------)
	I0930 21:08:08.675212   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:08:08.675229   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | About to run SSH command:
	I0930 21:08:08.675244   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | exit 0
	I0930 21:08:08.799368   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | SSH cmd err, output: <nil>: 
	I0930 21:08:08.799751   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetConfigRaw
	I0930 21:08:08.800421   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:08.803151   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.803596   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.803620   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.803922   73900 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/config.json ...
	I0930 21:08:08.804195   73900 machine.go:93] provisionDockerMachine start ...
	I0930 21:08:08.804246   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:08.804502   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:08.806822   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.807240   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.807284   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.807521   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:08.807735   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.807890   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.808077   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:08.808239   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:08.808480   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:08.808493   73900 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:08:08.912058   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:08:08.912135   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 21:08:08.912407   73900 buildroot.go:166] provisioning hostname "old-k8s-version-621406"
	I0930 21:08:08.912432   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 21:08:08.912662   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:08.915366   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.915722   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.915750   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.915892   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:08.916107   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.916330   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.916492   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:08.916673   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:08.916932   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:08.916957   73900 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-621406 && echo "old-k8s-version-621406" | sudo tee /etc/hostname
	I0930 21:08:09.034260   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-621406
	
	I0930 21:08:09.034296   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.037149   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.037509   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.037538   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.037799   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.037986   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.038163   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.038327   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.038473   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:09.038695   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:09.038714   73900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-621406' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-621406/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-621406' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:08:09.152190   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:08:09.152228   73900 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:08:09.152255   73900 buildroot.go:174] setting up certificates
	I0930 21:08:09.152275   73900 provision.go:84] configureAuth start
	I0930 21:08:09.152288   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 21:08:09.152577   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:09.155203   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.155589   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.155620   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.155783   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.157964   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.158362   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.158392   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.158520   73900 provision.go:143] copyHostCerts
	I0930 21:08:09.158592   73900 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:08:09.158605   73900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:08:09.158704   73900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:08:09.158851   73900 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:08:09.158864   73900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:08:09.158895   73900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:08:09.158970   73900 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:08:09.158977   73900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:08:09.158996   73900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:08:09.159054   73900 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-621406 san=[127.0.0.1 192.168.72.159 localhost minikube old-k8s-version-621406]
	I0930 21:08:09.301267   73900 provision.go:177] copyRemoteCerts
	I0930 21:08:09.301322   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:08:09.301349   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.304344   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.304766   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.304796   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.304998   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.305187   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.305321   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.305439   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:09.390851   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0930 21:08:09.415712   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 21:08:09.439567   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:08:09.463427   73900 provision.go:87] duration metric: took 311.139024ms to configureAuth
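(Aside: configureAuth regenerates the machine's server certificate with the SANs listed above and copies it to /etc/docker on the guest. A quick manual confirmation, illustrative only, run from inside the guest, e.g. via "minikube -p old-k8s-version-621406 ssh":)

    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
    # expected to list 127.0.0.1, 192.168.72.159, localhost, minikube, old-k8s-version-621406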
	I0930 21:08:09.463459   73900 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:08:09.463713   73900 config.go:182] Loaded profile config "old-k8s-version-621406": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0930 21:08:09.463809   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.466757   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.467129   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.467160   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.467326   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.467513   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.467694   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.467843   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.468004   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:09.468175   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:09.468190   73900 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:08:09.684657   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:08:09.684684   73900 machine.go:96] duration metric: took 880.473418ms to provisionDockerMachine
	I0930 21:08:09.684698   73900 start.go:293] postStartSetup for "old-k8s-version-621406" (driver="kvm2")
	I0930 21:08:09.684709   73900 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:08:09.684730   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.685075   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:08:09.685114   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.688051   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.688517   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.688542   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.688725   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.688928   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.689070   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.689265   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:09.770572   73900 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:08:09.775149   73900 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:08:09.775181   73900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:08:09.775268   73900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:08:09.775364   73900 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:08:09.775453   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:08:09.784753   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:09.807989   73900 start.go:296] duration metric: took 123.276522ms for postStartSetup
	I0930 21:08:09.808033   73900 fix.go:56] duration metric: took 19.918922935s for fixHost
	I0930 21:08:09.808053   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.811242   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.811656   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.811692   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.811852   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.812064   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.812239   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.812380   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.812522   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:09.812704   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:09.812719   73900 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:08:09.916349   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730489.889323893
	
	I0930 21:08:09.916376   73900 fix.go:216] guest clock: 1727730489.889323893
	I0930 21:08:09.916384   73900 fix.go:229] Guest: 2024-09-30 21:08:09.889323893 +0000 UTC Remote: 2024-09-30 21:08:09.808037625 +0000 UTC m=+267.093327666 (delta=81.286268ms)
	I0930 21:08:09.916403   73900 fix.go:200] guest clock delta is within tolerance: 81.286268ms
	I0930 21:08:09.916408   73900 start.go:83] releasing machines lock for "old-k8s-version-621406", held for 20.027328296s
	I0930 21:08:09.916440   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.916766   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:09.919729   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.920070   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.920105   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.920238   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.920831   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.921050   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.921182   73900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:08:09.921235   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.921328   73900 ssh_runner.go:195] Run: cat /version.json
	I0930 21:08:09.921351   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.924258   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.924650   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.924695   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.924722   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.924805   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.924986   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.925170   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.925176   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.925206   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.925341   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:09.925405   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.925534   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.925698   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.925829   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:10.043500   73900 ssh_runner.go:195] Run: systemctl --version
	I0930 21:08:10.051029   73900 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:08:10.199844   73900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:08:10.206433   73900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:08:10.206519   73900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:08:10.223346   73900 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 21:08:10.223375   73900 start.go:495] detecting cgroup driver to use...
	I0930 21:08:10.223449   73900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:08:10.241056   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:08:10.257197   73900 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:08:10.257261   73900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:08:10.271847   73900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:08:10.287465   73900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:08:10.419248   73900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:08:10.583440   73900 docker.go:233] disabling docker service ...
	I0930 21:08:10.583518   73900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:08:10.599561   73900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:08:10.613321   73900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:08:10.763071   73900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:08:10.891222   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:08:10.906985   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:08:10.927838   73900 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0930 21:08:10.927911   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.940002   73900 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:08:10.940084   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.953143   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.965922   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.985782   73900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:08:11.001825   73900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:08:11.015777   73900 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:08:11.015835   73900 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:08:11.034821   73900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 21:08:11.049855   73900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:11.203755   73900 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 21:08:11.312949   73900 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:08:11.313060   73900 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:08:11.319280   73900 start.go:563] Will wait 60s for crictl version
	I0930 21:08:11.319355   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:11.323826   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:08:11.374934   73900 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 21:08:11.375023   73900 ssh_runner.go:195] Run: crio --version
	I0930 21:08:11.415466   73900 ssh_runner.go:195] Run: crio --version
	I0930 21:08:11.449622   73900 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
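(Aside: pieced together from the tee and sed commands above, reconstructed rather than captured from the node, the container-runtime configuration ends up approximately as:)

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # relevant lines in /etc/crio/crio.conf.d/02-crio.conf after the sed edits
    pause_image = "registry.k8s.io/pause:3.2"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"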
	I0930 21:08:11.450773   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:11.454019   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:11.454504   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:11.454534   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:11.454807   73900 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0930 21:08:11.459034   73900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:08:11.473162   73900 kubeadm.go:883] updating cluster {Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:08:11.473294   73900 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 21:08:11.473367   73900 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:11.518200   73900 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0930 21:08:11.518275   73900 ssh_runner.go:195] Run: which lz4
	I0930 21:08:11.522442   73900 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 21:08:11.526704   73900 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 21:08:11.526752   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0930 21:08:09.942356   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Start
	I0930 21:08:09.942591   73256 main.go:141] libmachine: (embed-certs-256103) Ensuring networks are active...
	I0930 21:08:09.943619   73256 main.go:141] libmachine: (embed-certs-256103) Ensuring network default is active
	I0930 21:08:09.944145   73256 main.go:141] libmachine: (embed-certs-256103) Ensuring network mk-embed-certs-256103 is active
	I0930 21:08:09.944659   73256 main.go:141] libmachine: (embed-certs-256103) Getting domain xml...
	I0930 21:08:09.945567   73256 main.go:141] libmachine: (embed-certs-256103) Creating domain...
	I0930 21:08:11.376075   73256 main.go:141] libmachine: (embed-certs-256103) Waiting to get IP...
	I0930 21:08:11.377049   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:11.377588   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:11.377687   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:11.377579   75193 retry.go:31] will retry after 219.057799ms: waiting for machine to come up
	I0930 21:08:11.598062   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:11.598531   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:11.598568   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:11.598491   75193 retry.go:31] will retry after 288.150233ms: waiting for machine to come up
	I0930 21:08:11.887894   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:11.888719   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:11.888749   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:11.888678   75193 retry.go:31] will retry after 422.70153ms: waiting for machine to come up
	I0930 21:08:12.313280   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:12.313761   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:12.313790   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:12.313728   75193 retry.go:31] will retry after 403.507934ms: waiting for machine to come up
	I0930 21:08:12.719305   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:12.719705   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:12.719740   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:12.719683   75193 retry.go:31] will retry after 616.261723ms: waiting for machine to come up
	I0930 21:08:13.337223   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:13.337759   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:13.337809   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:13.337727   75193 retry.go:31] will retry after 715.496762ms: waiting for machine to come up
	I0930 21:08:14.054455   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:14.055118   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:14.055155   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:14.055041   75193 retry.go:31] will retry after 1.12512788s: waiting for machine to come up
	I0930 21:08:10.970621   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:13.468795   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:11.501276   73707 node_ready.go:53] node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:12.501748   73707 node_ready.go:49] node "default-k8s-diff-port-291511" has status "Ready":"True"
	I0930 21:08:12.501784   73707 node_ready.go:38] duration metric: took 7.005705696s for node "default-k8s-diff-port-291511" to be "Ready" ...
	I0930 21:08:12.501797   73707 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:12.510080   73707 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:12.518496   73707 pod_ready.go:93] pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:12.518522   73707 pod_ready.go:82] duration metric: took 8.414761ms for pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:12.518535   73707 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:14.526615   73707 pod_ready.go:93] pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:14.526653   73707 pod_ready.go:82] duration metric: took 2.00810944s for pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:14.526666   73707 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:14.533536   73707 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:14.533574   73707 pod_ready.go:82] duration metric: took 6.898769ms for pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:14.533596   73707 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.043003   73707 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:15.043034   73707 pod_ready.go:82] duration metric: took 509.429109ms for pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.043048   73707 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kwp22" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.049645   73707 pod_ready.go:93] pod "kube-proxy-kwp22" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:15.049676   73707 pod_ready.go:82] duration metric: took 6.618441ms for pod "kube-proxy-kwp22" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.049688   73707 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:13.134916   73900 crio.go:462] duration metric: took 1.612498859s to copy over tarball
	I0930 21:08:13.135038   73900 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 21:08:16.170053   73900 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.034985922s)
	I0930 21:08:16.170080   73900 crio.go:469] duration metric: took 3.035125251s to extract the tarball
	I0930 21:08:16.170088   73900 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 21:08:16.213559   73900 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:16.249853   73900 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0930 21:08:16.249876   73900 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0930 21:08:16.249943   73900 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:16.249970   73900 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.249987   73900 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.250030   73900 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0930 21:08:16.250031   73900 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.250047   73900 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.250049   73900 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.250083   73900 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.251750   73900 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0930 21:08:16.251771   73900 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.251768   73900 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:16.251750   73900 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.251832   73900 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.251854   73900 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.251891   73900 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.252031   73900 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.456847   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.468006   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0930 21:08:16.516253   73900 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0930 21:08:16.516294   73900 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.516336   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.524699   73900 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0930 21:08:16.524743   73900 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0930 21:08:16.524787   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.525738   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.529669   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 21:08:16.561946   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.569090   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.570589   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.571007   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.581971   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.587609   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.630323   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 21:08:16.711058   73900 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0930 21:08:16.711124   73900 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.711190   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.749473   73900 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0930 21:08:16.749521   73900 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.749585   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.769974   73900 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0930 21:08:16.770016   73900 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.770050   73900 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0930 21:08:16.770075   73900 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0930 21:08:16.770087   73900 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.770104   73900 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.770142   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.770160   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.770064   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.770144   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.788241   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.788292   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 21:08:16.788294   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.788339   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.847727   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0930 21:08:16.847798   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.847894   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.938964   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.939000   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.939053   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0930 21:08:16.939090   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.965556   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.965620   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 21:08:17.020497   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:17.074893   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:17.074950   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:17.090437   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 21:08:17.090489   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0930 21:08:17.090437   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:17.174117   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0930 21:08:17.174183   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0930 21:08:17.185553   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0930 21:08:17.185619   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0930 21:08:17.506064   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:17.650598   73900 cache_images.go:92] duration metric: took 1.400704992s to LoadCachedImages
	W0930 21:08:17.650695   73900 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0930 21:08:17.650710   73900 kubeadm.go:934] updating node { 192.168.72.159 8443 v1.20.0 crio true true} ...
	I0930 21:08:17.650834   73900 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-621406 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 21:08:17.650922   73900 ssh_runner.go:195] Run: crio config
	I0930 21:08:17.710096   73900 cni.go:84] Creating CNI manager for ""
	I0930 21:08:17.710124   73900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:17.710139   73900 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:08:17.710164   73900 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.159 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-621406 NodeName:old-k8s-version-621406 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0930 21:08:17.710349   73900 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-621406"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 21:08:17.710425   73900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0930 21:08:17.721028   73900 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:08:17.721111   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:08:17.731462   73900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0930 21:08:17.749715   73900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:08:15.182186   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:15.182722   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:15.182751   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:15.182673   75193 retry.go:31] will retry after 1.385891549s: waiting for machine to come up
	I0930 21:08:16.569882   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:16.570365   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:16.570386   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:16.570309   75193 retry.go:31] will retry after 1.417579481s: waiting for machine to come up
	I0930 21:08:17.989161   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:17.989876   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:17.989905   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:17.989818   75193 retry.go:31] will retry after 1.981651916s: waiting for machine to come up
	I0930 21:08:15.471221   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:17.969140   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:19.969688   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:15.300639   73707 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:15.300666   73707 pod_ready.go:82] duration metric: took 250.968899ms for pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.300679   73707 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:17.349449   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:19.809813   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:17.767565   73900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0930 21:08:17.786411   73900 ssh_runner.go:195] Run: grep 192.168.72.159	control-plane.minikube.internal$ /etc/hosts
	I0930 21:08:17.790338   73900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:08:17.803957   73900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:17.948898   73900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:08:17.969102   73900 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406 for IP: 192.168.72.159
	I0930 21:08:17.969133   73900 certs.go:194] generating shared ca certs ...
	I0930 21:08:17.969150   73900 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:17.969338   73900 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:08:17.969387   73900 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:08:17.969400   73900 certs.go:256] generating profile certs ...
	I0930 21:08:17.969543   73900 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/client.key
	I0930 21:08:17.969621   73900 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.key.f3dc5056
	I0930 21:08:17.969674   73900 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.key
	I0930 21:08:17.969833   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:08:17.969875   73900 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:08:17.969886   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:08:17.969926   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:08:17.969961   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:08:17.969999   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:08:17.970055   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:17.970794   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:08:18.007954   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:08:18.041538   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:08:18.077886   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:08:18.118644   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0930 21:08:18.151418   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 21:08:18.199572   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:08:18.235795   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 21:08:18.272729   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:08:18.298727   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:08:18.324074   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:08:18.351209   73900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:08:18.372245   73900 ssh_runner.go:195] Run: openssl version
	I0930 21:08:18.380047   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:08:18.395332   73900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:08:18.401407   73900 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:08:18.401479   73900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:08:18.407744   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 21:08:18.422801   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:08:18.437946   73900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:08:18.443864   73900 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:08:18.443938   73900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:08:18.451554   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:08:18.466856   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:08:18.479324   73900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:18.484321   73900 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:18.484383   73900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:18.490341   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 21:08:18.503117   73900 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:08:18.507986   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:08:18.514974   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:08:18.522140   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:08:18.529366   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:08:18.536056   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:08:18.542787   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0930 21:08:18.550311   73900 kubeadm.go:392] StartCluster: {Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:08:18.550431   73900 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:08:18.550498   73900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:18.593041   73900 cri.go:89] found id: ""
	I0930 21:08:18.593116   73900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:08:18.603410   73900 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:08:18.603432   73900 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:08:18.603479   73900 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:08:18.614635   73900 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:08:18.615758   73900 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-621406" does not appear in /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:08:18.616488   73900 kubeconfig.go:62] /home/jenkins/minikube-integration/19736-7672/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-621406" cluster setting kubeconfig missing "old-k8s-version-621406" context setting]
	I0930 21:08:18.617394   73900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:18.644144   73900 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:08:18.655764   73900 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.159
	I0930 21:08:18.655806   73900 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:08:18.655819   73900 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:08:18.655877   73900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:18.699283   73900 cri.go:89] found id: ""
	I0930 21:08:18.699376   73900 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:08:18.715248   73900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:08:18.724905   73900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:08:18.724945   73900 kubeadm.go:157] found existing configuration files:
	
	I0930 21:08:18.724990   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:08:18.735611   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:08:18.735682   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:08:18.745604   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:08:18.755199   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:08:18.755261   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:08:18.765450   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:08:18.775187   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:08:18.775268   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:08:18.788080   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:08:18.800668   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:08:18.800727   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:08:18.814084   73900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:08:18.823785   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:18.961698   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.495418   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.713653   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.812667   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.921314   73900 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:08:19.921414   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:20.422349   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:20.922222   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:21.422364   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:21.921493   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:22.421640   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:19.973478   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:19.973916   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:19.973946   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:19.973868   75193 retry.go:31] will retry after 2.33355272s: waiting for machine to come up
	I0930 21:08:22.308828   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:22.309471   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:22.309498   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:22.309367   75193 retry.go:31] will retry after 3.484225075s: waiting for machine to come up
	I0930 21:08:21.970954   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:24.467778   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:22.310464   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:24.806425   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:22.922418   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:23.421851   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:23.921502   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:24.422346   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:24.922000   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:25.422290   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:25.922213   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:26.422100   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:26.922239   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:27.421729   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:25.795265   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:25.795755   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:25.795781   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:25.795707   75193 retry.go:31] will retry after 2.983975719s: waiting for machine to come up
	I0930 21:08:28.780767   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.781201   73256 main.go:141] libmachine: (embed-certs-256103) Found IP for machine: 192.168.39.90
	I0930 21:08:28.781223   73256 main.go:141] libmachine: (embed-certs-256103) Reserving static IP address...
	I0930 21:08:28.781237   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has current primary IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.781655   73256 main.go:141] libmachine: (embed-certs-256103) Reserved static IP address: 192.168.39.90
	I0930 21:08:28.781679   73256 main.go:141] libmachine: (embed-certs-256103) Waiting for SSH to be available...
	I0930 21:08:28.781697   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "embed-certs-256103", mac: "52:54:00:7a:01:01", ip: "192.168.39.90"} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:28.781724   73256 main.go:141] libmachine: (embed-certs-256103) DBG | skip adding static IP to network mk-embed-certs-256103 - found existing host DHCP lease matching {name: "embed-certs-256103", mac: "52:54:00:7a:01:01", ip: "192.168.39.90"}
	I0930 21:08:28.781735   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Getting to WaitForSSH function...
	I0930 21:08:28.784310   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.784703   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:28.784737   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.784861   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Using SSH client type: external
	I0930 21:08:28.784899   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa (-rw-------)
	I0930 21:08:28.784933   73256 main.go:141] libmachine: (embed-certs-256103) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:08:28.784953   73256 main.go:141] libmachine: (embed-certs-256103) DBG | About to run SSH command:
	I0930 21:08:28.784970   73256 main.go:141] libmachine: (embed-certs-256103) DBG | exit 0
	I0930 21:08:28.911300   73256 main.go:141] libmachine: (embed-certs-256103) DBG | SSH cmd err, output: <nil>: 
	I0930 21:08:28.911716   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetConfigRaw
	I0930 21:08:28.912335   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetIP
	I0930 21:08:28.914861   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.915283   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:28.915304   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.915620   73256 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/config.json ...
	I0930 21:08:28.915874   73256 machine.go:93] provisionDockerMachine start ...
	I0930 21:08:28.915902   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:28.916117   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:28.918357   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.918661   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:28.918696   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.918813   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:28.918992   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:28.919143   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:28.919296   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:28.919472   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:28.919680   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:28.919691   73256 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:08:29.032537   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:08:29.032579   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:08:29.032830   73256 buildroot.go:166] provisioning hostname "embed-certs-256103"
	I0930 21:08:29.032857   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:08:29.033039   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.035951   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.036403   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.036435   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.036598   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:29.036795   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.037002   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.037175   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:29.037339   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:29.037538   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:29.037556   73256 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-256103 && echo "embed-certs-256103" | sudo tee /etc/hostname
	I0930 21:08:29.163250   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-256103
	
	I0930 21:08:29.163278   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.165937   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.166260   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.166296   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.166529   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:29.166722   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.166913   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.167055   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:29.167223   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:29.167454   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:29.167477   73256 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-256103' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-256103/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-256103' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:08:29.288197   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:08:29.288236   73256 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:08:29.288292   73256 buildroot.go:174] setting up certificates
	I0930 21:08:29.288307   73256 provision.go:84] configureAuth start
	I0930 21:08:29.288322   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:08:29.288589   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetIP
	I0930 21:08:29.291598   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.292026   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.292059   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.292247   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.294760   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.295144   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.295169   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.295421   73256 provision.go:143] copyHostCerts
	I0930 21:08:29.295497   73256 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:08:29.295510   73256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:08:29.295614   73256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:08:29.295743   73256 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:08:29.295754   73256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:08:29.295782   73256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:08:29.295855   73256 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:08:29.295864   73256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:08:29.295886   73256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:08:29.295948   73256 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.embed-certs-256103 san=[127.0.0.1 192.168.39.90 embed-certs-256103 localhost minikube]
	I0930 21:08:26.468058   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:28.468510   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:26.808360   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:29.307500   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:29.742069   73256 provision.go:177] copyRemoteCerts
	I0930 21:08:29.742134   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:08:29.742156   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.745411   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.745805   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.745835   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.746023   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:29.746215   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.746351   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:29.746557   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:08:29.833888   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:08:29.857756   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0930 21:08:29.883087   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 21:08:29.905795   73256 provision.go:87] duration metric: took 617.470984ms to configureAuth
	I0930 21:08:29.905831   73256 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:08:29.906028   73256 config.go:182] Loaded profile config "embed-certs-256103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:08:29.906098   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.908911   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.909307   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.909335   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.909524   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:29.909711   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.909876   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.909996   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:29.910157   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:29.910429   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:29.910454   73256 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:08:30.140191   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:08:30.140217   73256 machine.go:96] duration metric: took 1.224326296s to provisionDockerMachine
	I0930 21:08:30.140227   73256 start.go:293] postStartSetup for "embed-certs-256103" (driver="kvm2")
	I0930 21:08:30.140237   73256 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:08:30.140252   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.140624   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:08:30.140648   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:30.143906   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.144300   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.144339   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.144498   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:30.144695   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.144846   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:30.145052   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:08:30.230069   73256 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:08:30.233845   73256 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:08:30.233868   73256 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:08:30.233948   73256 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:08:30.234050   73256 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:08:30.234168   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:08:30.243066   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:30.266197   73256 start.go:296] duration metric: took 125.955153ms for postStartSetup
	I0930 21:08:30.266234   73256 fix.go:56] duration metric: took 20.349643145s for fixHost
	I0930 21:08:30.266252   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:30.269025   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.269405   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.269433   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.269576   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:30.269784   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.269910   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.270042   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:30.270176   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:30.270380   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:30.270392   73256 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:08:30.380023   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730510.354607586
	
	I0930 21:08:30.380057   73256 fix.go:216] guest clock: 1727730510.354607586
	I0930 21:08:30.380067   73256 fix.go:229] Guest: 2024-09-30 21:08:30.354607586 +0000 UTC Remote: 2024-09-30 21:08:30.266237543 +0000 UTC m=+355.815232104 (delta=88.370043ms)
	I0930 21:08:30.380085   73256 fix.go:200] guest clock delta is within tolerance: 88.370043ms
	I0930 21:08:30.380091   73256 start.go:83] releasing machines lock for "embed-certs-256103", held for 20.463544222s
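(The fix.go lines just above read the guest clock over SSH with "date +%s.%N", compare it to the host-side timestamp, and accept the machine when the skew is small. A minimal Go sketch of that comparison; the 2s tolerance below is an assumption for illustration, not the value minikube uses.)

	package main

	import (
		"fmt"
		"time"
	)

	// clockDelta returns the absolute skew between the guest clock and the
	// host-side reference time, and whether it falls inside tol.
	func clockDelta(guest, reference time.Time, tol time.Duration) (time.Duration, bool) {
		d := guest.Sub(reference)
		if d < 0 {
			d = -d
		}
		return d, d <= tol
	}

	func main() {
		guest := time.Unix(1727730510, 354607586)           // parsed from "date +%s.%N"
		reference := guest.Add(-88370043 * time.Nanosecond) // host-side timestamp
		if d, ok := clockDelta(guest, reference, 2*time.Second); ok { // tolerance assumed
			fmt.Printf("guest clock delta is within tolerance: %v\n", d)
		}
	}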
	I0930 21:08:30.380113   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.380429   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetIP
	I0930 21:08:30.382992   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.383349   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.383369   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.383518   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.384071   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.384245   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.384310   73256 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:08:30.384374   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:30.384442   73256 ssh_runner.go:195] Run: cat /version.json
	I0930 21:08:30.384464   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:30.387098   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.387342   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.387413   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.387435   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.387633   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:30.387762   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.387783   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.387828   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.387931   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:30.388003   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:30.388058   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.388159   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:30.388208   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:08:30.388347   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:08:30.510981   73256 ssh_runner.go:195] Run: systemctl --version
	I0930 21:08:30.517215   73256 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:08:30.663491   73256 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:08:30.669568   73256 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:08:30.669652   73256 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:08:30.686640   73256 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 21:08:30.686663   73256 start.go:495] detecting cgroup driver to use...
	I0930 21:08:30.686737   73256 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:08:30.703718   73256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:08:30.718743   73256 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:08:30.718807   73256 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:08:30.733695   73256 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:08:30.748690   73256 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:08:30.878084   73256 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:08:31.040955   73256 docker.go:233] disabling docker service ...
	I0930 21:08:31.041030   73256 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:08:31.055212   73256 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:08:31.067968   73256 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:08:31.185043   73256 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:08:31.300909   73256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:08:31.315167   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:08:31.333483   73256 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 21:08:31.333537   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.343599   73256 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:08:31.343694   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.353739   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.363993   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.375183   73256 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:08:31.385478   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.395632   73256 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.412995   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.423277   73256 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:08:31.433183   73256 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:08:31.433253   73256 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:08:31.446796   73256 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 21:08:31.456912   73256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:31.571729   73256 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 21:08:31.663944   73256 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:08:31.664019   73256 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:08:31.669128   73256 start.go:563] Will wait 60s for crictl version
	I0930 21:08:31.669191   73256 ssh_runner.go:195] Run: which crictl
	I0930 21:08:31.672922   73256 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:08:31.709488   73256 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 21:08:31.709596   73256 ssh_runner.go:195] Run: crio --version
	I0930 21:08:31.738743   73256 ssh_runner.go:195] Run: crio --version
	I0930 21:08:31.771638   73256 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 21:08:27.922374   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:28.421993   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:28.921870   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:29.421786   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:29.921804   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:30.421482   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:30.921969   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:31.422241   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:31.922148   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:32.421504   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:31.773186   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetIP
	I0930 21:08:31.776392   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:31.776770   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:31.776810   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:31.777016   73256 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 21:08:31.781212   73256 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:08:31.793839   73256 kubeadm.go:883] updating cluster {Name:embed-certs-256103 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-256103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:08:31.793957   73256 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 21:08:31.794015   73256 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:31.834036   73256 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 21:08:31.834094   73256 ssh_runner.go:195] Run: which lz4
	I0930 21:08:31.837877   73256 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 21:08:31.842038   73256 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 21:08:31.842073   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 21:08:33.150975   73256 crio.go:462] duration metric: took 1.313131374s to copy over tarball
	I0930 21:08:33.151080   73256 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 21:08:30.469523   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:32.469562   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:34.969818   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:31.307560   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:33.308130   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:32.921516   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:33.421576   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:33.922082   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:34.421599   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:34.922178   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:35.422199   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:35.922061   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:36.421860   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:36.921513   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:37.422162   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:35.294750   73256 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.143629494s)
	I0930 21:08:35.294785   73256 crio.go:469] duration metric: took 2.143777794s to extract the tarball
	I0930 21:08:35.294794   73256 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 21:08:35.340151   73256 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:35.385329   73256 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 21:08:35.385359   73256 cache_images.go:84] Images are preloaded, skipping loading
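(The preload step above runs "sudo crictl images --output json", concludes that registry.k8s.io/kube-apiserver:v1.31.1 is missing, copies the preloaded tarball over scp, unpacks it with lz4 into /var, and then re-checks. A rough Go sketch of the image-check half, assuming the crictl JSON layout rather than minikube's internal types.)

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// hasImage reports whether the node's CRI runtime already holds the wanted tag,
	// e.g. "registry.k8s.io/kube-apiserver:v1.31.1".
	func hasImage(wanted string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var imgs crictlImages
		if err := json.Unmarshal(out, &imgs); err != nil {
			return false, err
		}
		for _, img := range imgs.Images {
			for _, tag := range img.RepoTags {
				if tag == wanted {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
		fmt.Println(ok, err) // false triggers the tarball copy-and-extract path
	}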
	I0930 21:08:35.385366   73256 kubeadm.go:934] updating node { 192.168.39.90 8443 v1.31.1 crio true true} ...
	I0930 21:08:35.385463   73256 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-256103 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-256103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 21:08:35.385536   73256 ssh_runner.go:195] Run: crio config
	I0930 21:08:35.433043   73256 cni.go:84] Creating CNI manager for ""
	I0930 21:08:35.433072   73256 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:35.433084   73256 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:08:35.433113   73256 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-256103 NodeName:embed-certs-256103 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 21:08:35.433277   73256 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-256103"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 21:08:35.433348   73256 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 21:08:35.443627   73256 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:08:35.443713   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:08:35.453095   73256 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0930 21:08:35.469517   73256 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:08:35.486869   73256 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0930 21:08:35.504871   73256 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I0930 21:08:35.508507   73256 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
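(Both /etc/hosts rewrites above, host.minikube.internal earlier and control-plane.minikube.internal here, use the same idempotent pattern: filter out any existing line for the name, append the desired mapping, and copy the temp file back over /etc/hosts. The same idea written directly in Go; the path is a placeholder so the sketch does not touch the real hosts file.)

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry rewrites path so exactly one line maps ip to host, mirroring
	// the grep -v / echo / sudo cp pipeline minikube runs over SSH.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			fields := strings.Fields(line)
			if len(fields) >= 2 && fields[len(fields)-1] == host {
				continue // drop any stale mapping for this name
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		err := ensureHostsEntry("/tmp/hosts.example", "192.168.39.90", "control-plane.minikube.internal")
		fmt.Println(err)
	}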
	I0930 21:08:35.521994   73256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:35.641971   73256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:08:35.657660   73256 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103 for IP: 192.168.39.90
	I0930 21:08:35.657686   73256 certs.go:194] generating shared ca certs ...
	I0930 21:08:35.657705   73256 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:35.657878   73256 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:08:35.657941   73256 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:08:35.657954   73256 certs.go:256] generating profile certs ...
	I0930 21:08:35.658095   73256 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/client.key
	I0930 21:08:35.658177   73256 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/apiserver.key.52e83f0c
	I0930 21:08:35.658230   73256 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/proxy-client.key
	I0930 21:08:35.658391   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:08:35.658431   73256 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:08:35.658443   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:08:35.658476   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:08:35.658509   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:08:35.658539   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:08:35.658586   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:35.659279   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:08:35.695254   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:08:35.718948   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:08:35.742442   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:08:35.765859   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0930 21:08:35.792019   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 21:08:35.822081   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:08:35.845840   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 21:08:35.871635   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:08:35.896069   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:08:35.921595   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:08:35.946620   73256 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:08:35.963340   73256 ssh_runner.go:195] Run: openssl version
	I0930 21:08:35.970540   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:08:35.982269   73256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:08:35.987494   73256 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:08:35.987646   73256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:08:35.994312   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:08:36.006173   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:08:36.017605   73256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:36.022126   73256 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:36.022190   73256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:36.027806   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 21:08:36.038388   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:08:36.048818   73256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:08:36.053230   73256 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:08:36.053296   73256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:08:36.058713   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 21:08:36.070806   73256 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:08:36.075521   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:08:36.081310   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:08:36.086935   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:08:36.092990   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:08:36.098783   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:08:36.104354   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
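(Each "openssl x509 -noout -in <cert> -checkend 86400" call above asks one question: does this certificate expire within the next 86400 seconds, i.e. 24 hours? The equivalent test using only Go's standard library; the path below is just a placeholder.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// which is what "openssl x509 -checkend 86400" checks for d = 24h.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(soon, err) // "true" would force certificate regeneration
	}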
	I0930 21:08:36.110289   73256 kubeadm.go:392] StartCluster: {Name:embed-certs-256103 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-256103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:08:36.110411   73256 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:08:36.110495   73256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:36.153770   73256 cri.go:89] found id: ""
	I0930 21:08:36.153852   73256 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:08:36.164301   73256 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:08:36.164320   73256 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:08:36.164363   73256 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:08:36.173860   73256 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:08:36.174950   73256 kubeconfig.go:125] found "embed-certs-256103" server: "https://192.168.39.90:8443"
	I0930 21:08:36.177584   73256 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:08:36.186946   73256 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.90
	I0930 21:08:36.186984   73256 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:08:36.186998   73256 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:08:36.187045   73256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:36.223259   73256 cri.go:89] found id: ""
	I0930 21:08:36.223328   73256 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:08:36.239321   73256 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:08:36.248508   73256 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:08:36.248528   73256 kubeadm.go:157] found existing configuration files:
	
	I0930 21:08:36.248571   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:08:36.257483   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:08:36.257537   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:08:36.266792   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:08:36.275626   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:08:36.275697   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:08:36.285000   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:08:36.293923   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:08:36.293977   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:08:36.303990   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:08:36.313104   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:08:36.313158   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:08:36.322423   73256 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:08:36.332005   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:36.457666   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:37.309316   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:37.533114   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:37.602999   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:37.692027   73256 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:08:37.692117   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.192813   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.692777   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.192862   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:37.469941   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:39.506753   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:35.311295   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:37.806923   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:39.808338   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:37.921497   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.422360   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.922305   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.422480   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.922279   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.422089   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.922021   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:41.421727   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:41.921519   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:42.422193   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.692193   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.192178   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.209649   73256 api_server.go:72] duration metric: took 2.517618424s to wait for apiserver process to appear ...
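(The blocks of pgrep lines above, for this profile and for the parallel one running the same loop, are a fixed-interval poll: re-run "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly every 500ms until the process exists or a deadline passes. A stripped-down Go version of that loop; the interval and timeout are illustrative, not minikube's exact values.)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess polls pgrep until a kube-apiserver process appears
	// or the timeout elapses; pgrep exits non-zero while no process matches.
	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
	}

	func main() {
		fmt.Println(waitForAPIServerProcess(4 * time.Minute))
	}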
	I0930 21:08:40.209676   73256 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:08:40.209699   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:43.034828   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:08:43.034857   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:08:43.034871   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:43.080073   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:08:43.080107   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:08:43.210448   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:43.217768   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:43.217799   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:43.710066   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:43.722379   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:43.722428   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:44.209939   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:44.219468   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:44.219500   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:44.709767   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:44.714130   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 200:
	ok
	I0930 21:08:44.720194   73256 api_server.go:141] control plane version: v1.31.1
	I0930 21:08:44.720221   73256 api_server.go:131] duration metric: took 4.510539442s to wait for apiserver health ...
	I0930 21:08:44.720230   73256 cni.go:84] Creating CNI manager for ""
	I0930 21:08:44.720236   73256 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:44.721740   73256 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
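The healthz sequence above is the apiserver coming up for process 73256: anonymous requests get 403 until the bootstrap RBAC rules that grant unauthenticated access to /healthz are in place, the endpoint then returns 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still pending, and finally a plain "ok" (200) once they complete. The same probe can be reproduced by hand; only the IP and port are taken from the log, the curl flags are an illustrative choice:

    # Status code only (-k because the apiserver serves a cluster-internal CA):
    curl -sk -o /dev/null -w '%{http_code}\n' https://192.168.39.90:8443/healthz
    # Per-check breakdown, matching the [+]/[-] lines printed in the log:
    curl -sk 'https://192.168.39.90:8443/healthz?verbose'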
	I0930 21:08:41.968377   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:44.469477   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:41.808473   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:43.808575   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:42.922495   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:43.422250   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:43.922413   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:44.421962   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:44.921682   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:45.422144   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:45.922206   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:46.422020   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:46.921960   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:47.422296   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:44.722947   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:08:44.733426   73256 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 21:08:44.750426   73256 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:08:44.761259   73256 system_pods.go:59] 8 kube-system pods found
	I0930 21:08:44.761303   73256 system_pods.go:61] "coredns-7c65d6cfc9-h6cl2" [548e3751-edc9-4232-87c2-2e64769ba332] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:08:44.761314   73256 system_pods.go:61] "etcd-embed-certs-256103" [6eef2e96-d4bf-4dd6-bd5c-bfb05c306182] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0930 21:08:44.761326   73256 system_pods.go:61] "kube-apiserver-embed-certs-256103" [81c02a52-aca7-4b9c-b7b1-680d27f48d40] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0930 21:08:44.761335   73256 system_pods.go:61] "kube-controller-manager-embed-certs-256103" [752f0966-7718-4523-8ba6-affd41bc956e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0930 21:08:44.761346   73256 system_pods.go:61] "kube-proxy-fqvg2" [284a63a1-d624-4bf3-8509-14ff0845f3a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0930 21:08:44.761354   73256 system_pods.go:61] "kube-scheduler-embed-certs-256103" [6158a51d-82ae-490a-96d3-c0e61a3485f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0930 21:08:44.761363   73256 system_pods.go:61] "metrics-server-6867b74b74-hkp9m" [8774a772-bb72-4419-96fd-50ca5f48a5b6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:08:44.761374   73256 system_pods.go:61] "storage-provisioner" [9649e71d-cd21-4846-bf66-1c5b469500ba] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0930 21:08:44.761385   73256 system_pods.go:74] duration metric: took 10.935916ms to wait for pod list to return data ...
	I0930 21:08:44.761397   73256 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:08:44.771745   73256 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:08:44.771777   73256 node_conditions.go:123] node cpu capacity is 2
	I0930 21:08:44.771789   73256 node_conditions.go:105] duration metric: took 10.386814ms to run NodePressure ...
	I0930 21:08:44.771810   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:45.064019   73256 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0930 21:08:45.070479   73256 kubeadm.go:739] kubelet initialised
	I0930 21:08:45.070508   73256 kubeadm.go:740] duration metric: took 6.461143ms waiting for restarted kubelet to initialise ...
	I0930 21:08:45.070517   73256 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:45.074627   73256 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-h6cl2" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.080873   73256 pod_ready.go:98] node "embed-certs-256103" hosting pod "coredns-7c65d6cfc9-h6cl2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.080897   73256 pod_ready.go:82] duration metric: took 6.244301ms for pod "coredns-7c65d6cfc9-h6cl2" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:45.080906   73256 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-256103" hosting pod "coredns-7c65d6cfc9-h6cl2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.080912   73256 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.086787   73256 pod_ready.go:98] node "embed-certs-256103" hosting pod "etcd-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.086818   73256 pod_ready.go:82] duration metric: took 5.898265ms for pod "etcd-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:45.086829   73256 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-256103" hosting pod "etcd-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.086837   73256 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.092860   73256 pod_ready.go:98] node "embed-certs-256103" hosting pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.092892   73256 pod_ready.go:82] duration metric: took 6.044766ms for pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:45.092904   73256 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-256103" hosting pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.092912   73256 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.154246   73256 pod_ready.go:98] node "embed-certs-256103" hosting pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.154271   73256 pod_ready.go:82] duration metric: took 61.348653ms for pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:45.154281   73256 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-256103" hosting pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.154287   73256 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fqvg2" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.554606   73256 pod_ready.go:93] pod "kube-proxy-fqvg2" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:45.554630   73256 pod_ready.go:82] duration metric: took 400.335084ms for pod "kube-proxy-fqvg2" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.554639   73256 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:47.559998   73256 pod_ready.go:103] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
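Once the apiserver reports healthy, this profile writes the bridge CNI configuration, re-runs the kubeadm addon phase, and then waits for each system pod to report Ready; the commands appear verbatim in the Run lines above. To inspect the result on the node, something like the following would work (paths and version string copied from the log, not newly chosen):

    # CNI config written by the scp step above:
    sudo cat /etc/cni/net.d/1-k8s.conflist
    # Addon phase re-run, exactly as logged:
    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml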
	I0930 21:08:46.968101   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:48.968649   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:46.307946   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:48.806624   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:47.921903   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:48.422535   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:48.921484   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:49.421909   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:49.922117   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:50.421606   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:50.921728   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:51.421600   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:51.921716   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:52.421873   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:49.561176   73256 pod_ready.go:103] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:51.562227   73256 pod_ready.go:103] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:54.060692   73256 pod_ready.go:103] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:51.467375   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:53.473247   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:50.807821   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:53.307163   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:52.922106   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:53.421968   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:53.921496   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:54.421866   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:54.921995   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:55.421476   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:55.922106   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:56.421660   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:56.922489   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:57.422291   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:54.562740   73256 pod_ready.go:93] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:54.562765   73256 pod_ready.go:82] duration metric: took 9.008120147s for pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:54.562775   73256 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:56.570517   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:59.070065   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:55.969724   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:58.467585   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:55.807669   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:58.305837   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:57.921737   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:58.421968   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:58.922007   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:59.422173   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:59.921803   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:00.421596   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:00.922123   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:01.422186   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:01.921898   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:02.421894   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:01.070940   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:03.569053   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:00.469160   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:02.968692   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:00.308195   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:02.807474   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:04.808710   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
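Three of the interleaved profiles here (processes 73256, 73375 and 73707) are in the same state: their other control-plane pods come up, but the metrics-server pod stays not "Ready" throughout this stretch of the log. An illustrative way to look at that pod directly, assuming the stock metrics-server label k8s-app=metrics-server and the matching profile's kubeconfig being active:

    kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl -n kube-system describe pod -l k8s-app=metrics-server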
	I0930 21:09:02.922329   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:03.421922   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:03.922360   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:04.421875   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:04.922544   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:05.421939   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:05.921693   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:06.422056   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:06.921627   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:07.422125   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:06.070166   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:08.568945   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:05.467300   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:07.469409   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:09.968053   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:07.306237   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:09.306644   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:07.921687   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:08.421694   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:08.922234   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:09.421817   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:09.921704   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:10.422030   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:10.921597   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:11.421700   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:11.922301   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:12.421567   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:10.569444   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:13.069582   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:11.970180   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:14.469440   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:11.307287   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:13.307376   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:12.922171   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:13.422423   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:13.921941   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:14.422494   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:14.922454   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:15.421776   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:15.922567   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:16.421713   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:16.922449   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:17.421644   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:15.569398   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:18.069177   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:16.968663   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:19.468171   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:15.808689   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:18.307774   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:17.922098   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:18.421993   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:18.922084   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:19.421717   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:19.922095   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:19.922178   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:19.962975   73900 cri.go:89] found id: ""
	I0930 21:09:19.963002   73900 logs.go:276] 0 containers: []
	W0930 21:09:19.963014   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:19.963020   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:19.963073   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:19.999741   73900 cri.go:89] found id: ""
	I0930 21:09:19.999769   73900 logs.go:276] 0 containers: []
	W0930 21:09:19.999777   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:19.999782   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:19.999840   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:20.035818   73900 cri.go:89] found id: ""
	I0930 21:09:20.035844   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.035856   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:20.035863   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:20.035924   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:20.072005   73900 cri.go:89] found id: ""
	I0930 21:09:20.072032   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.072042   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:20.072048   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:20.072110   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:20.108229   73900 cri.go:89] found id: ""
	I0930 21:09:20.108258   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.108314   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:20.108325   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:20.108383   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:20.141331   73900 cri.go:89] found id: ""
	I0930 21:09:20.141388   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.141398   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:20.141406   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:20.141466   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:20.175133   73900 cri.go:89] found id: ""
	I0930 21:09:20.175161   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.175169   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:20.175175   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:20.175223   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:20.210529   73900 cri.go:89] found id: ""
	I0930 21:09:20.210566   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.210578   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:20.210594   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:20.210608   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:20.261055   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:20.261095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:20.274212   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:20.274239   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:20.406215   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:20.406246   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:20.406282   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:20.481758   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:20.481794   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
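For process 73900 no apiserver ever appears, so the pgrep polling gives way to the diagnostics pass above: every crictl query for a control-plane container comes back empty, and "kubectl describe nodes" fails because nothing is listening on localhost:8443. The gathering commands are plain shell and can be replayed by hand on the node; they are copied verbatim from the Run lines:

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo crictl ps -a --quiet --name=kube-apiserver    # empty: no apiserver container yet
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig      # refused: nothing serving on localhost:8443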
	I0930 21:09:20.069672   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:22.569421   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:21.468616   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:23.468820   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:20.309317   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:22.807149   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:24.807293   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:23.019687   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:23.033394   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:23.033450   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:23.078558   73900 cri.go:89] found id: ""
	I0930 21:09:23.078592   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.078604   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:23.078611   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:23.078673   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:23.117833   73900 cri.go:89] found id: ""
	I0930 21:09:23.117860   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.117868   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:23.117875   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:23.117931   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:23.157299   73900 cri.go:89] found id: ""
	I0930 21:09:23.157337   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.157359   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:23.157367   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:23.157438   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:23.196545   73900 cri.go:89] found id: ""
	I0930 21:09:23.196570   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.196579   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:23.196586   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:23.196644   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:23.229359   73900 cri.go:89] found id: ""
	I0930 21:09:23.229390   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.229401   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:23.229409   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:23.229471   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:23.264847   73900 cri.go:89] found id: ""
	I0930 21:09:23.264881   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.264893   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:23.264900   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:23.264962   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:23.298657   73900 cri.go:89] found id: ""
	I0930 21:09:23.298687   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.298695   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:23.298701   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:23.298750   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:23.333787   73900 cri.go:89] found id: ""
	I0930 21:09:23.333816   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.333826   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:23.333836   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:23.333851   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:23.386311   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:23.386347   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:23.400096   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:23.400129   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:23.481724   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:23.481748   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:23.481780   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:23.561080   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:23.561119   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:26.122460   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:26.136409   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:26.136495   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:26.170785   73900 cri.go:89] found id: ""
	I0930 21:09:26.170818   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.170832   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:26.170866   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:26.170945   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:26.205211   73900 cri.go:89] found id: ""
	I0930 21:09:26.205265   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.205275   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:26.205281   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:26.205335   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:26.239242   73900 cri.go:89] found id: ""
	I0930 21:09:26.239276   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.239285   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:26.239291   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:26.239337   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:26.272908   73900 cri.go:89] found id: ""
	I0930 21:09:26.272932   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.272940   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:26.272946   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:26.272993   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:26.311599   73900 cri.go:89] found id: ""
	I0930 21:09:26.311625   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.311632   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:26.311639   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:26.311684   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:26.345719   73900 cri.go:89] found id: ""
	I0930 21:09:26.345746   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.345754   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:26.345760   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:26.345816   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:26.383513   73900 cri.go:89] found id: ""
	I0930 21:09:26.383562   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.383572   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:26.383578   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:26.383637   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:26.418533   73900 cri.go:89] found id: ""
	I0930 21:09:26.418565   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.418574   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:26.418584   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:26.418594   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:26.456635   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:26.456660   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:26.507639   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:26.507686   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:26.521069   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:26.521095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:26.594745   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:26.594768   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:26.594781   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:24.569626   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:26.570133   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:29.069071   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:25.968851   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:27.974091   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:26.808336   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:29.308328   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:29.180142   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:29.194730   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:29.194785   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:29.234054   73900 cri.go:89] found id: ""
	I0930 21:09:29.234094   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.234103   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:29.234109   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:29.234156   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:29.280869   73900 cri.go:89] found id: ""
	I0930 21:09:29.280896   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.280907   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:29.280914   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:29.280988   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:29.348376   73900 cri.go:89] found id: ""
	I0930 21:09:29.348406   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.348417   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:29.348424   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:29.348491   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:29.404218   73900 cri.go:89] found id: ""
	I0930 21:09:29.404251   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.404261   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:29.404268   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:29.404344   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:29.449029   73900 cri.go:89] found id: ""
	I0930 21:09:29.449053   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.449061   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:29.449066   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:29.449127   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:29.484917   73900 cri.go:89] found id: ""
	I0930 21:09:29.484939   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.484948   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:29.484954   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:29.485002   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:29.517150   73900 cri.go:89] found id: ""
	I0930 21:09:29.517177   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.517185   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:29.517191   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:29.517259   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:29.550410   73900 cri.go:89] found id: ""
	I0930 21:09:29.550443   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.550452   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:29.550461   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:29.550472   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:29.601757   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:29.601803   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:29.616266   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:29.616299   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:29.686206   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:29.686228   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:29.686240   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:29.761765   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:29.761810   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:32.299199   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:32.315047   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:32.315125   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:32.349784   73900 cri.go:89] found id: ""
	I0930 21:09:32.349810   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.349819   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:32.349824   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:32.349871   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:32.385887   73900 cri.go:89] found id: ""
	I0930 21:09:32.385916   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.385927   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:32.385935   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:32.385994   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:32.421746   73900 cri.go:89] found id: ""
	I0930 21:09:32.421776   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.421789   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:32.421796   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:32.421856   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:32.459361   73900 cri.go:89] found id: ""
	I0930 21:09:32.459391   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.459404   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:32.459411   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:32.459470   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:32.495919   73900 cri.go:89] found id: ""
	I0930 21:09:32.495947   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.495960   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:32.495966   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:32.496025   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:32.533626   73900 cri.go:89] found id: ""
	I0930 21:09:32.533652   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.533663   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:32.533670   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:32.533729   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:32.567577   73900 cri.go:89] found id: ""
	I0930 21:09:32.567610   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.567623   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:32.567630   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:32.567687   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:32.604949   73900 cri.go:89] found id: ""
	I0930 21:09:32.604981   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.604991   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:32.605001   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:32.605014   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:32.656781   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:32.656822   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:32.670116   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:32.670144   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:32.736712   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:32.736736   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:32.736751   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:31.070228   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:33.569488   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:30.469162   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:32.469874   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:34.967596   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:31.807682   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:33.807723   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:32.813502   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:32.813556   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:35.354372   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:35.369226   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:35.369303   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:35.408374   73900 cri.go:89] found id: ""
	I0930 21:09:35.408402   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.408414   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:35.408421   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:35.408481   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:35.442390   73900 cri.go:89] found id: ""
	I0930 21:09:35.442432   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.442440   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:35.442445   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:35.442524   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:35.479624   73900 cri.go:89] found id: ""
	I0930 21:09:35.479651   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.479659   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:35.479664   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:35.479711   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:35.518580   73900 cri.go:89] found id: ""
	I0930 21:09:35.518609   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.518617   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:35.518623   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:35.518675   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:35.553547   73900 cri.go:89] found id: ""
	I0930 21:09:35.553582   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.553590   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:35.553604   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:35.553669   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:35.596444   73900 cri.go:89] found id: ""
	I0930 21:09:35.596476   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.596487   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:35.596495   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:35.596583   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:35.634232   73900 cri.go:89] found id: ""
	I0930 21:09:35.634259   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.634268   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:35.634274   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:35.634322   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:35.669637   73900 cri.go:89] found id: ""
	I0930 21:09:35.669672   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.669683   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:35.669694   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:35.669706   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:35.719433   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:35.719469   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:35.733383   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:35.733415   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:35.811860   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:35.811887   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:35.811913   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:35.896206   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:35.896272   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:35.569694   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:37.570548   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:36.968789   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:38.968959   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:35.814006   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:38.306676   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:38.435999   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:38.450091   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:38.450152   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:38.489127   73900 cri.go:89] found id: ""
	I0930 21:09:38.489153   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.489161   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:38.489166   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:38.489221   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:38.520760   73900 cri.go:89] found id: ""
	I0930 21:09:38.520783   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.520792   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:38.520798   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:38.520847   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:38.556279   73900 cri.go:89] found id: ""
	I0930 21:09:38.556306   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.556315   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:38.556319   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:38.556379   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:38.590804   73900 cri.go:89] found id: ""
	I0930 21:09:38.590827   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.590834   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:38.590840   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:38.590906   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:38.624765   73900 cri.go:89] found id: ""
	I0930 21:09:38.624792   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.624800   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:38.624805   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:38.624857   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:38.660587   73900 cri.go:89] found id: ""
	I0930 21:09:38.660614   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.660625   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:38.660635   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:38.660702   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:38.693314   73900 cri.go:89] found id: ""
	I0930 21:09:38.693352   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.693362   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:38.693371   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:38.693441   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:38.729163   73900 cri.go:89] found id: ""
	I0930 21:09:38.729197   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.729212   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:38.729223   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:38.729235   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:38.780787   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:38.780828   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:38.794983   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:38.795009   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:38.861886   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:38.861911   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:38.861926   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:38.936958   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:38.936994   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:41.479891   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:41.493041   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:41.493106   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:41.528855   73900 cri.go:89] found id: ""
	I0930 21:09:41.528889   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.528900   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:41.528906   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:41.528967   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:41.565193   73900 cri.go:89] found id: ""
	I0930 21:09:41.565216   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.565224   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:41.565230   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:41.565289   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:41.599503   73900 cri.go:89] found id: ""
	I0930 21:09:41.599538   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.599547   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:41.599553   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:41.599611   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:41.636623   73900 cri.go:89] found id: ""
	I0930 21:09:41.636651   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.636663   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:41.636671   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:41.636728   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:41.671727   73900 cri.go:89] found id: ""
	I0930 21:09:41.671753   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.671760   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:41.671765   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:41.671819   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:41.705499   73900 cri.go:89] found id: ""
	I0930 21:09:41.705533   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.705543   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:41.705549   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:41.705602   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:41.738262   73900 cri.go:89] found id: ""
	I0930 21:09:41.738285   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.738292   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:41.738297   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:41.738351   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:41.774232   73900 cri.go:89] found id: ""
	I0930 21:09:41.774261   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.774269   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:41.774277   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:41.774288   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:41.826060   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:41.826093   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:41.839308   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:41.839335   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:41.908599   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:41.908626   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:41.908640   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:41.986337   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:41.986375   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:40.069900   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:42.070035   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:41.469908   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:43.968111   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:40.307200   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:42.308356   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:44.807663   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:44.527015   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:44.539973   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:44.540036   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:44.575985   73900 cri.go:89] found id: ""
	I0930 21:09:44.576012   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.576021   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:44.576027   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:44.576076   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:44.612693   73900 cri.go:89] found id: ""
	I0930 21:09:44.612724   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.612736   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:44.612743   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:44.612809   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:44.646515   73900 cri.go:89] found id: ""
	I0930 21:09:44.646544   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.646555   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:44.646562   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:44.646623   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:44.679980   73900 cri.go:89] found id: ""
	I0930 21:09:44.680011   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.680022   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:44.680030   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:44.680089   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:44.714078   73900 cri.go:89] found id: ""
	I0930 21:09:44.714117   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.714128   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:44.714135   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:44.714193   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:44.748491   73900 cri.go:89] found id: ""
	I0930 21:09:44.748521   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.748531   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:44.748539   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:44.748618   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:44.780902   73900 cri.go:89] found id: ""
	I0930 21:09:44.780936   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.780947   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:44.780955   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:44.781013   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:44.817944   73900 cri.go:89] found id: ""
	I0930 21:09:44.817999   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.818011   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:44.818022   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:44.818038   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:44.873896   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:44.873926   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:44.887829   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:44.887858   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:44.957562   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:44.957584   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:44.957598   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:45.037892   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:45.037934   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:47.583013   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:47.595799   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:47.595870   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:47.630348   73900 cri.go:89] found id: ""
	I0930 21:09:47.630377   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.630385   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:47.630391   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:47.630444   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:47.663416   73900 cri.go:89] found id: ""
	I0930 21:09:47.663440   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.663448   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:47.663454   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:47.663500   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:47.700145   73900 cri.go:89] found id: ""
	I0930 21:09:47.700174   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.700184   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:47.700192   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:47.700253   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:47.732539   73900 cri.go:89] found id: ""
	I0930 21:09:47.732567   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.732577   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:47.732583   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:47.732637   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:44.569951   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:46.570501   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:48.574018   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:45.971063   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:48.468661   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:47.307709   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:49.806843   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:47.764470   73900 cri.go:89] found id: ""
	I0930 21:09:47.764493   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.764501   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:47.764507   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:47.764553   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:47.802365   73900 cri.go:89] found id: ""
	I0930 21:09:47.802393   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.802403   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:47.802411   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:47.802468   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:47.836504   73900 cri.go:89] found id: ""
	I0930 21:09:47.836531   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.836542   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:47.836549   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:47.836611   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:47.870315   73900 cri.go:89] found id: ""
	I0930 21:09:47.870338   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.870351   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:47.870359   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:47.870370   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:47.919974   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:47.920011   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:47.934157   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:47.934190   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:48.003046   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:48.003072   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:48.003085   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:48.084947   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:48.084985   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:50.624791   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:50.638118   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:50.638196   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:50.672448   73900 cri.go:89] found id: ""
	I0930 21:09:50.672479   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.672488   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:50.672503   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:50.672557   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:50.706057   73900 cri.go:89] found id: ""
	I0930 21:09:50.706080   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.706088   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:50.706093   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:50.706142   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:50.738101   73900 cri.go:89] found id: ""
	I0930 21:09:50.738126   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.738134   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:50.738140   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:50.738207   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:50.772483   73900 cri.go:89] found id: ""
	I0930 21:09:50.772508   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.772516   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:50.772522   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:50.772581   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:50.805169   73900 cri.go:89] found id: ""
	I0930 21:09:50.805200   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.805211   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:50.805220   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:50.805276   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:50.842144   73900 cri.go:89] found id: ""
	I0930 21:09:50.842168   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.842176   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:50.842182   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:50.842236   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:50.875512   73900 cri.go:89] found id: ""
	I0930 21:09:50.875563   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.875575   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:50.875582   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:50.875643   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:50.909549   73900 cri.go:89] found id: ""
	I0930 21:09:50.909580   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.909591   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:50.909599   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:50.909610   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:50.962064   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:50.962098   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:50.976979   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:50.977012   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:51.053784   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:51.053815   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:51.053833   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:51.130939   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:51.130975   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:51.069919   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:53.568708   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:50.468737   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:52.968935   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:52.306733   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:54.306875   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:53.667675   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:53.680381   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:53.680449   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:53.712759   73900 cri.go:89] found id: ""
	I0930 21:09:53.712791   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.712800   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:53.712807   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:53.712871   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:53.748958   73900 cri.go:89] found id: ""
	I0930 21:09:53.748990   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.749002   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:53.749009   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:53.749078   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:53.783243   73900 cri.go:89] found id: ""
	I0930 21:09:53.783272   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.783282   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:53.783289   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:53.783382   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:53.823848   73900 cri.go:89] found id: ""
	I0930 21:09:53.823875   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.823883   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:53.823890   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:53.823941   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:53.865607   73900 cri.go:89] found id: ""
	I0930 21:09:53.865635   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.865643   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:53.865648   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:53.865693   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:53.900888   73900 cri.go:89] found id: ""
	I0930 21:09:53.900912   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.900920   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:53.900926   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:53.900985   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:53.933688   73900 cri.go:89] found id: ""
	I0930 21:09:53.933717   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.933728   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:53.933736   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:53.933798   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:53.968702   73900 cri.go:89] found id: ""
	I0930 21:09:53.968731   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.968740   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:53.968749   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:53.968760   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:54.021588   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:54.021626   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:54.036681   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:54.036719   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:54.112189   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:54.112209   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:54.112223   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:54.185028   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:54.185085   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:56.725146   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:56.739358   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:56.739421   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:56.779278   73900 cri.go:89] found id: ""
	I0930 21:09:56.779313   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.779322   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:56.779329   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:56.779377   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:56.815972   73900 cri.go:89] found id: ""
	I0930 21:09:56.816000   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.816011   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:56.816018   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:56.816084   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:56.849425   73900 cri.go:89] found id: ""
	I0930 21:09:56.849458   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.849471   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:56.849478   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:56.849542   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:56.885483   73900 cri.go:89] found id: ""
	I0930 21:09:56.885510   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.885520   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:56.885527   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:56.885586   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:56.917832   73900 cri.go:89] found id: ""
	I0930 21:09:56.917862   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.917872   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:56.917879   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:56.917932   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:56.951613   73900 cri.go:89] found id: ""
	I0930 21:09:56.951643   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.951654   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:56.951664   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:56.951726   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:56.987577   73900 cri.go:89] found id: ""
	I0930 21:09:56.987608   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.987620   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:56.987628   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:56.987691   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:57.024871   73900 cri.go:89] found id: ""
	I0930 21:09:57.024903   73900 logs.go:276] 0 containers: []
	W0930 21:09:57.024912   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:57.024920   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:57.024935   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:57.038279   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:57.038309   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:57.111955   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:57.111985   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:57.111998   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:57.193719   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:57.193755   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:57.230058   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:57.230085   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:55.568928   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:58.069462   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:55.467583   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:57.968380   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:59.969131   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:56.807753   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:58.808055   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:59.780762   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:59.794210   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:59.794277   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:59.828258   73900 cri.go:89] found id: ""
	I0930 21:09:59.828287   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.828298   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:59.828306   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:59.828369   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:59.868295   73900 cri.go:89] found id: ""
	I0930 21:09:59.868331   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.868353   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:59.868363   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:59.868437   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:59.900298   73900 cri.go:89] found id: ""
	I0930 21:09:59.900326   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.900337   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:59.900343   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:59.900403   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:59.934081   73900 cri.go:89] found id: ""
	I0930 21:09:59.934108   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.934120   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:59.934127   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:59.934183   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:59.970564   73900 cri.go:89] found id: ""
	I0930 21:09:59.970592   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.970600   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:59.970605   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:59.970652   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:00.006215   73900 cri.go:89] found id: ""
	I0930 21:10:00.006249   73900 logs.go:276] 0 containers: []
	W0930 21:10:00.006259   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:00.006270   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:00.006348   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:00.040106   73900 cri.go:89] found id: ""
	I0930 21:10:00.040135   73900 logs.go:276] 0 containers: []
	W0930 21:10:00.040144   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:00.040150   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:00.040202   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:00.079310   73900 cri.go:89] found id: ""
	I0930 21:10:00.079345   73900 logs.go:276] 0 containers: []
	W0930 21:10:00.079354   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:00.079365   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:00.079378   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:00.161243   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:00.161284   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:00.198911   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:00.198941   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:00.247697   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:00.247735   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:00.260905   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:00.260933   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:00.332502   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:00.569218   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:02.569371   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:02.468439   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:04.968585   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:00.808753   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:03.306574   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:02.833204   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:02.846807   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:02.846893   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:02.882386   73900 cri.go:89] found id: ""
	I0930 21:10:02.882420   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.882431   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:02.882439   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:02.882504   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:02.918589   73900 cri.go:89] found id: ""
	I0930 21:10:02.918617   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.918633   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:02.918642   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:02.918722   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:02.952758   73900 cri.go:89] found id: ""
	I0930 21:10:02.952789   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.952799   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:02.952806   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:02.952871   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:02.991406   73900 cri.go:89] found id: ""
	I0930 21:10:02.991439   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.991448   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:02.991454   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:02.991511   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:03.030075   73900 cri.go:89] found id: ""
	I0930 21:10:03.030104   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.030112   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:03.030121   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:03.030172   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:03.063630   73900 cri.go:89] found id: ""
	I0930 21:10:03.063654   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.063662   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:03.063668   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:03.063718   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:03.098607   73900 cri.go:89] found id: ""
	I0930 21:10:03.098636   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.098644   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:03.098649   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:03.098702   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:03.133161   73900 cri.go:89] found id: ""
	I0930 21:10:03.133189   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.133198   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:03.133206   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:03.133217   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:03.211046   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:03.211083   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:03.252585   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:03.252615   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:03.307019   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:03.307049   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:03.320781   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:03.320811   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:03.408645   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:05.909638   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:05.922674   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:05.922744   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:05.955264   73900 cri.go:89] found id: ""
	I0930 21:10:05.955305   73900 logs.go:276] 0 containers: []
	W0930 21:10:05.955318   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:05.955326   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:05.955378   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:05.991055   73900 cri.go:89] found id: ""
	I0930 21:10:05.991100   73900 logs.go:276] 0 containers: []
	W0930 21:10:05.991122   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:05.991130   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:05.991194   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:06.025725   73900 cri.go:89] found id: ""
	I0930 21:10:06.025755   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.025766   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:06.025773   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:06.025832   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:06.067700   73900 cri.go:89] found id: ""
	I0930 21:10:06.067726   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.067736   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:06.067743   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:06.067801   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:06.102729   73900 cri.go:89] found id: ""
	I0930 21:10:06.102760   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.102771   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:06.102784   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:06.102845   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:06.137120   73900 cri.go:89] found id: ""
	I0930 21:10:06.137148   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.137159   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:06.137164   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:06.137215   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:06.169985   73900 cri.go:89] found id: ""
	I0930 21:10:06.170014   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.170023   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:06.170029   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:06.170082   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:06.206928   73900 cri.go:89] found id: ""
	I0930 21:10:06.206951   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.206959   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:06.206967   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:06.206977   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:06.258835   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:06.258870   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:06.273527   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:06.273556   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:06.351335   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:06.351359   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:06.351373   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:06.423412   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:06.423450   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:04.569756   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:07.069437   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:09.074024   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:06.969500   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:09.471298   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:05.807932   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:08.306749   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:08.968986   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:08.984075   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:08.984139   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:09.016815   73900 cri.go:89] found id: ""
	I0930 21:10:09.016847   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.016858   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:09.016864   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:09.016928   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:09.051603   73900 cri.go:89] found id: ""
	I0930 21:10:09.051626   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.051633   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:09.051639   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:09.051693   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:09.088820   73900 cri.go:89] found id: ""
	I0930 21:10:09.088856   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.088870   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:09.088884   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:09.088949   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:09.124032   73900 cri.go:89] found id: ""
	I0930 21:10:09.124064   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.124076   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:09.124083   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:09.124140   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:09.177129   73900 cri.go:89] found id: ""
	I0930 21:10:09.177161   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.177172   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:09.177178   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:09.177228   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:09.211490   73900 cri.go:89] found id: ""
	I0930 21:10:09.211513   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.211521   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:09.211540   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:09.211605   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:09.252187   73900 cri.go:89] found id: ""
	I0930 21:10:09.252211   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.252221   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:09.252229   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:09.252289   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:09.286970   73900 cri.go:89] found id: ""
	I0930 21:10:09.287004   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.287012   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:09.287020   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:09.287031   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:09.369387   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:09.369410   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:09.369422   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:09.450685   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:09.450733   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:09.491302   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:09.491331   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:09.540183   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:09.540219   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:12.054793   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:12.068635   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:12.068717   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:12.103118   73900 cri.go:89] found id: ""
	I0930 21:10:12.103140   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.103149   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:12.103154   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:12.103219   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:12.137992   73900 cri.go:89] found id: ""
	I0930 21:10:12.138020   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.138031   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:12.138040   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:12.138103   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:12.175559   73900 cri.go:89] found id: ""
	I0930 21:10:12.175591   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.175609   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:12.175616   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:12.175678   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:12.209630   73900 cri.go:89] found id: ""
	I0930 21:10:12.209655   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.209666   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:12.209672   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:12.209735   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:12.245844   73900 cri.go:89] found id: ""
	I0930 21:10:12.245879   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.245891   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:12.245901   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:12.245961   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:12.280385   73900 cri.go:89] found id: ""
	I0930 21:10:12.280412   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.280420   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:12.280426   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:12.280484   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:12.315424   73900 cri.go:89] found id: ""
	I0930 21:10:12.315453   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.315463   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:12.315473   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:12.315566   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:12.349223   73900 cri.go:89] found id: ""
	I0930 21:10:12.349251   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.349270   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:12.349279   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:12.349291   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:12.362360   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:12.362397   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:12.432060   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:12.432084   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:12.432101   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:12.506059   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:12.506096   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:12.541319   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:12.541348   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:11.568740   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:13.569690   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:11.968234   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:13.968634   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:10.306903   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:12.307072   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:14.807562   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:15.098852   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:15.111919   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:15.112001   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:15.149174   73900 cri.go:89] found id: ""
	I0930 21:10:15.149206   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.149216   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:15.149223   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:15.149286   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:15.187283   73900 cri.go:89] found id: ""
	I0930 21:10:15.187316   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.187326   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:15.187333   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:15.187392   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:15.223896   73900 cri.go:89] found id: ""
	I0930 21:10:15.223922   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.223933   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:15.223940   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:15.224000   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:15.260530   73900 cri.go:89] found id: ""
	I0930 21:10:15.260559   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.260567   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:15.260573   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:15.260634   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:15.296319   73900 cri.go:89] found id: ""
	I0930 21:10:15.296346   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.296357   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:15.296363   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:15.296425   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:15.333785   73900 cri.go:89] found id: ""
	I0930 21:10:15.333830   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.333843   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:15.333856   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:15.333932   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:15.368235   73900 cri.go:89] found id: ""
	I0930 21:10:15.368268   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.368280   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:15.368288   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:15.368354   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:15.408155   73900 cri.go:89] found id: ""
	I0930 21:10:15.408184   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.408192   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:15.408200   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:15.408210   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:15.462018   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:15.462058   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:15.477345   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:15.477376   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:15.558398   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:15.558423   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:15.558442   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:15.662269   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:15.662311   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:15.569988   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:18.069056   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:16.467859   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:18.468764   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:17.307469   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:19.809316   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:18.199477   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:18.213235   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:18.213320   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:18.250379   73900 cri.go:89] found id: ""
	I0930 21:10:18.250409   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.250418   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:18.250424   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:18.250515   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:18.283381   73900 cri.go:89] found id: ""
	I0930 21:10:18.283407   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.283416   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:18.283422   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:18.283482   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:18.321601   73900 cri.go:89] found id: ""
	I0930 21:10:18.321635   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.321646   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:18.321659   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:18.321720   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:18.354210   73900 cri.go:89] found id: ""
	I0930 21:10:18.354242   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.354254   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:18.354262   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:18.354330   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:18.391982   73900 cri.go:89] found id: ""
	I0930 21:10:18.392019   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.392029   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:18.392035   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:18.392150   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:18.428826   73900 cri.go:89] found id: ""
	I0930 21:10:18.428851   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.428862   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:18.428870   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:18.428927   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:18.465841   73900 cri.go:89] found id: ""
	I0930 21:10:18.465868   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.465878   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:18.465887   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:18.465934   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:18.502747   73900 cri.go:89] found id: ""
	I0930 21:10:18.502775   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.502783   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:18.502793   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:18.502807   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:18.558025   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:18.558064   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:18.572356   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:18.572383   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:18.642994   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:18.643020   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:18.643033   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:18.722804   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:18.722845   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:21.262790   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:21.276427   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:21.276510   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:21.323245   73900 cri.go:89] found id: ""
	I0930 21:10:21.323274   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.323284   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:21.323291   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:21.323377   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:21.381684   73900 cri.go:89] found id: ""
	I0930 21:10:21.381725   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.381736   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:21.381744   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:21.381813   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:21.428818   73900 cri.go:89] found id: ""
	I0930 21:10:21.428841   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.428849   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:21.428854   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:21.428901   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:21.462906   73900 cri.go:89] found id: ""
	I0930 21:10:21.462935   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.462944   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:21.462949   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:21.462995   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:21.502417   73900 cri.go:89] found id: ""
	I0930 21:10:21.502452   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.502464   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:21.502471   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:21.502535   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:21.540004   73900 cri.go:89] found id: ""
	I0930 21:10:21.540037   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.540048   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:21.540056   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:21.540105   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:21.574898   73900 cri.go:89] found id: ""
	I0930 21:10:21.574929   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.574937   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:21.574942   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:21.574999   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:21.609438   73900 cri.go:89] found id: ""
	I0930 21:10:21.609465   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.609473   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:21.609496   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:21.609524   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:21.646651   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:21.646679   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:21.702406   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:21.702451   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:21.716226   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:21.716260   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:21.790089   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:21.790115   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:21.790128   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:20.070823   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:22.568856   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:20.968069   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:22.968208   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:22.307376   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:24.808780   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:24.368291   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:24.381517   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:24.381588   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:24.416535   73900 cri.go:89] found id: ""
	I0930 21:10:24.416559   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.416570   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:24.416577   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:24.416635   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:24.454444   73900 cri.go:89] found id: ""
	I0930 21:10:24.454472   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.454480   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:24.454485   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:24.454537   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:24.492334   73900 cri.go:89] found id: ""
	I0930 21:10:24.492359   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.492367   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:24.492373   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:24.492419   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:24.527590   73900 cri.go:89] found id: ""
	I0930 21:10:24.527622   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.527633   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:24.527642   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:24.527708   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:24.564819   73900 cri.go:89] found id: ""
	I0930 21:10:24.564844   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.564853   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:24.564858   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:24.564915   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:24.599367   73900 cri.go:89] found id: ""
	I0930 21:10:24.599390   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.599398   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:24.599403   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:24.599450   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:24.636738   73900 cri.go:89] found id: ""
	I0930 21:10:24.636767   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.636778   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:24.636785   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:24.636845   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:24.669607   73900 cri.go:89] found id: ""
	I0930 21:10:24.669640   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.669651   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:24.669663   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:24.669680   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:24.722662   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:24.722696   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:24.736150   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:24.736179   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:24.812022   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:24.812053   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:24.812069   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:24.891291   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:24.891330   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:27.430595   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:27.443990   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:27.444054   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:27.480204   73900 cri.go:89] found id: ""
	I0930 21:10:27.480230   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.480237   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:27.480243   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:27.480297   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:27.516959   73900 cri.go:89] found id: ""
	I0930 21:10:27.516982   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.516989   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:27.516995   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:27.517041   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:27.549717   73900 cri.go:89] found id: ""
	I0930 21:10:27.549745   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.549758   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:27.549769   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:27.549821   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:27.584512   73900 cri.go:89] found id: ""
	I0930 21:10:27.584539   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.584549   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:27.584560   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:27.584619   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:27.623551   73900 cri.go:89] found id: ""
	I0930 21:10:27.623586   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.623603   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:27.623612   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:27.623679   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:27.662453   73900 cri.go:89] found id: ""
	I0930 21:10:27.662478   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.662486   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:27.662493   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:27.662554   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:27.695665   73900 cri.go:89] found id: ""
	I0930 21:10:27.695693   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.695701   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:27.695707   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:27.695765   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:27.729090   73900 cri.go:89] found id: ""
	I0930 21:10:27.729129   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.729137   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:27.729146   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:27.729155   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:24.570129   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:26.572751   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:29.069340   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:25.468598   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:27.469443   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:29.970417   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:27.307766   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:29.806538   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:27.816186   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:27.816230   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:27.854451   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:27.854485   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:27.905674   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:27.905709   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:27.918889   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:27.918917   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:27.989739   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:30.490514   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:30.502735   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:30.502810   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:30.535874   73900 cri.go:89] found id: ""
	I0930 21:10:30.535902   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.535914   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:30.535922   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:30.535989   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:30.570603   73900 cri.go:89] found id: ""
	I0930 21:10:30.570627   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.570634   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:30.570643   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:30.570689   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:30.605225   73900 cri.go:89] found id: ""
	I0930 21:10:30.605255   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.605266   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:30.605273   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:30.605333   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:30.640810   73900 cri.go:89] found id: ""
	I0930 21:10:30.640839   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.640849   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:30.640857   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:30.640914   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:30.673101   73900 cri.go:89] found id: ""
	I0930 21:10:30.673129   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.673137   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:30.673142   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:30.673189   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:30.704332   73900 cri.go:89] found id: ""
	I0930 21:10:30.704356   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.704366   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:30.704373   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:30.704440   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:30.738463   73900 cri.go:89] found id: ""
	I0930 21:10:30.738494   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.738506   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:30.738516   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:30.738579   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:30.772115   73900 cri.go:89] found id: ""
	I0930 21:10:30.772153   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.772164   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:30.772175   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:30.772193   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:30.850683   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:30.850707   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:30.850720   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:30.930674   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:30.930718   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:30.975781   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:30.975819   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:31.030566   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:31.030613   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:31.070216   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:33.568935   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:32.468224   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:34.968557   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:31.807408   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:33.807669   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:33.544354   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:33.557613   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:33.557692   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:33.594372   73900 cri.go:89] found id: ""
	I0930 21:10:33.594394   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.594401   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:33.594406   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:33.594455   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:33.632026   73900 cri.go:89] found id: ""
	I0930 21:10:33.632048   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.632056   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:33.632061   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:33.632113   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:33.666168   73900 cri.go:89] found id: ""
	I0930 21:10:33.666201   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.666213   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:33.666219   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:33.666269   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:33.697772   73900 cri.go:89] found id: ""
	I0930 21:10:33.697801   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.697810   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:33.697816   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:33.697864   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:33.732821   73900 cri.go:89] found id: ""
	I0930 21:10:33.732851   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.732862   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:33.732869   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:33.732952   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:33.770646   73900 cri.go:89] found id: ""
	I0930 21:10:33.770682   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.770693   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:33.770701   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:33.770756   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:33.804803   73900 cri.go:89] found id: ""
	I0930 21:10:33.804831   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.804842   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:33.804848   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:33.804921   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:33.838455   73900 cri.go:89] found id: ""
	I0930 21:10:33.838484   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.838495   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:33.838505   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:33.838523   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:33.879785   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:33.879812   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:33.934586   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:33.934623   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:33.948250   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:33.948293   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:34.023021   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:34.023054   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:34.023069   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:36.604173   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:36.616668   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:36.616735   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:36.650716   73900 cri.go:89] found id: ""
	I0930 21:10:36.650748   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.650757   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:36.650767   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:36.650833   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:36.685705   73900 cri.go:89] found id: ""
	I0930 21:10:36.685739   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.685751   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:36.685758   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:36.685819   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:36.719895   73900 cri.go:89] found id: ""
	I0930 21:10:36.719922   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.719932   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:36.719939   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:36.720006   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:36.753123   73900 cri.go:89] found id: ""
	I0930 21:10:36.753148   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.753159   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:36.753166   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:36.753231   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:36.790023   73900 cri.go:89] found id: ""
	I0930 21:10:36.790054   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.790066   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:36.790073   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:36.790135   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:36.825280   73900 cri.go:89] found id: ""
	I0930 21:10:36.825314   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.825324   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:36.825343   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:36.825411   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:36.859028   73900 cri.go:89] found id: ""
	I0930 21:10:36.859053   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.859060   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:36.859066   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:36.859125   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:36.894952   73900 cri.go:89] found id: ""
	I0930 21:10:36.894980   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.894988   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:36.894996   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:36.895010   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:36.968214   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:36.968241   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:36.968256   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:37.047866   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:37.047903   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:37.088671   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:37.088705   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:37.144014   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:37.144058   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
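
[Editor's sketch] The interleaved pod_ready.go lines come from three other test processes (73256, 73375, 73707) polling metrics-server pods that never report Ready. A hypothetical version of such a poll, shelling out to kubectl instead of using minikube's internal helpers (the pod name, interval, and iteration count are assumptions; only the namespace and pod prefix come from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // podReady reports whether the pod's Ready condition is "True",
    // using a kubectl JSONPath query.
    func podReady(namespace, pod string) (bool, error) {
        out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        // Hypothetical pod name; the log polls metrics-server-6867b74b74-* pods in kube-system.
        const ns, pod = "kube-system", "metrics-server-6867b74b74-hkp9m"
        for i := 0; i < 10; i++ {
            ready, err := podReady(ns, pod)
            if err != nil {
                fmt.Println("poll error:", err)
            } else {
                fmt.Printf("pod %q Ready=%v\n", pod, ready)
            }
            time.Sleep(2 * time.Second) // the log shows roughly 2-3s between polls
        }
    }
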
	I0930 21:10:36.068920   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:38.069544   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:36.969475   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:39.469207   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:35.808654   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:38.306701   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:39.657874   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:39.671042   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:39.671100   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:39.706210   73900 cri.go:89] found id: ""
	I0930 21:10:39.706235   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.706243   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:39.706248   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:39.706295   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:39.743194   73900 cri.go:89] found id: ""
	I0930 21:10:39.743218   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.743226   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:39.743232   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:39.743280   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:39.780681   73900 cri.go:89] found id: ""
	I0930 21:10:39.780707   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.780715   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:39.780720   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:39.780774   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:39.815841   73900 cri.go:89] found id: ""
	I0930 21:10:39.815865   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.815874   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:39.815879   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:39.815933   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:39.849497   73900 cri.go:89] found id: ""
	I0930 21:10:39.849523   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.849534   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:39.849541   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:39.849603   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:39.883476   73900 cri.go:89] found id: ""
	I0930 21:10:39.883507   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.883519   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:39.883562   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:39.883633   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:39.918300   73900 cri.go:89] found id: ""
	I0930 21:10:39.918329   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.918338   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:39.918343   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:39.918392   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:39.955751   73900 cri.go:89] found id: ""
	I0930 21:10:39.955780   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.955788   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:39.955795   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:39.955807   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:40.010994   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:40.011035   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:40.025992   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:40.026022   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:40.097709   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:40.097731   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:40.097748   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:40.176790   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:40.176824   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:42.713838   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:42.729806   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:42.729885   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:40.070503   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:42.568444   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:41.968357   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:44.469223   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:40.308072   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:42.807489   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:42.765449   73900 cri.go:89] found id: ""
	I0930 21:10:42.765483   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.765491   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:42.765498   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:42.765555   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:42.802556   73900 cri.go:89] found id: ""
	I0930 21:10:42.802584   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.802604   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:42.802612   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:42.802693   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:42.836537   73900 cri.go:89] found id: ""
	I0930 21:10:42.836568   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.836585   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:42.836598   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:42.836662   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:42.870475   73900 cri.go:89] found id: ""
	I0930 21:10:42.870503   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.870511   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:42.870526   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:42.870589   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:42.907061   73900 cri.go:89] found id: ""
	I0930 21:10:42.907090   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.907098   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:42.907103   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:42.907153   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:42.941607   73900 cri.go:89] found id: ""
	I0930 21:10:42.941632   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.941640   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:42.941646   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:42.941701   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:42.977073   73900 cri.go:89] found id: ""
	I0930 21:10:42.977097   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.977105   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:42.977111   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:42.977159   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:43.010838   73900 cri.go:89] found id: ""
	I0930 21:10:43.010859   73900 logs.go:276] 0 containers: []
	W0930 21:10:43.010867   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:43.010875   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:43.010886   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:43.061264   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:43.061299   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:43.075917   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:43.075950   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:43.137088   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:43.137111   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:43.137126   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:43.219393   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:43.219440   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
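
[Editor's sketch] Every "describe nodes" attempt above fails with "The connection to the server localhost:8443 was refused", i.e. nothing is listening on the apiserver port inside the guest. A quick, hypothetical way to confirm that from Go (not part of minikube; the address comes from the kubectl error in the log):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The kubeconfig used in the log points the client at localhost:8443.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // "connection refused" here matches the kubectl error above:
            // no kube-apiserver is accepting connections on that port.
            fmt.Println("apiserver port not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }
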
	I0930 21:10:45.761752   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:45.775864   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:45.775942   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:45.810693   73900 cri.go:89] found id: ""
	I0930 21:10:45.810724   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.810734   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:45.810740   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:45.810797   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:45.848360   73900 cri.go:89] found id: ""
	I0930 21:10:45.848399   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.848410   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:45.848418   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:45.848475   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:45.885504   73900 cri.go:89] found id: ""
	I0930 21:10:45.885550   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.885560   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:45.885565   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:45.885616   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:45.919747   73900 cri.go:89] found id: ""
	I0930 21:10:45.919776   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.919784   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:45.919789   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:45.919843   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:45.953787   73900 cri.go:89] found id: ""
	I0930 21:10:45.953820   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.953831   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:45.953839   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:45.953893   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:45.990145   73900 cri.go:89] found id: ""
	I0930 21:10:45.990174   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.990184   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:45.990192   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:45.990253   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:46.023359   73900 cri.go:89] found id: ""
	I0930 21:10:46.023383   73900 logs.go:276] 0 containers: []
	W0930 21:10:46.023391   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:46.023396   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:46.023447   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:46.057460   73900 cri.go:89] found id: ""
	I0930 21:10:46.057493   73900 logs.go:276] 0 containers: []
	W0930 21:10:46.057504   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:46.057514   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:46.057533   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:46.097082   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:46.097109   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:46.147921   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:46.147960   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:46.161204   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:46.161232   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:46.224308   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:46.224336   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:46.224351   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:44.568918   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:46.569353   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:48.569656   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:46.967674   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:48.967998   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:45.306917   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:47.806333   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:49.807846   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:48.805668   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:48.818569   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:48.818663   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:48.856783   73900 cri.go:89] found id: ""
	I0930 21:10:48.856815   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.856827   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:48.856834   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:48.856896   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:48.889185   73900 cri.go:89] found id: ""
	I0930 21:10:48.889217   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.889229   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:48.889236   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:48.889306   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:48.922013   73900 cri.go:89] found id: ""
	I0930 21:10:48.922041   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.922050   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:48.922055   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:48.922107   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:48.956818   73900 cri.go:89] found id: ""
	I0930 21:10:48.956848   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.956858   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:48.956866   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:48.956929   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:48.994942   73900 cri.go:89] found id: ""
	I0930 21:10:48.994975   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.994985   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:48.994991   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:48.995052   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:49.031448   73900 cri.go:89] found id: ""
	I0930 21:10:49.031479   73900 logs.go:276] 0 containers: []
	W0930 21:10:49.031491   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:49.031500   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:49.031583   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:49.066570   73900 cri.go:89] found id: ""
	I0930 21:10:49.066600   73900 logs.go:276] 0 containers: []
	W0930 21:10:49.066608   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:49.066613   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:49.066658   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:49.100952   73900 cri.go:89] found id: ""
	I0930 21:10:49.100981   73900 logs.go:276] 0 containers: []
	W0930 21:10:49.100992   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:49.101000   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:49.101010   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:49.176423   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:49.176458   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:49.212358   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:49.212387   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:49.263177   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:49.263227   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:49.275940   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:49.275969   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:49.346915   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
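
[Editor's sketch] Because no containers are found and the apiserver is unreachable, each cycle falls back to gathering host-level logs: kubelet and CRI-O via journalctl, the kernel ring buffer via dmesg, and container status via crictl (or docker). A stand-alone, hypothetical sketch that reuses the exact command strings from the ssh_runner lines above; running them through /bin/bash -c locally is an assumption, minikube runs them over SSH inside the VM:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Command strings copied from the ssh_runner lines in the log above.
        cmds := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"CRI-O", "sudo journalctl -u crio -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, c := range cmds {
            fmt.Println(">>> gathering logs for", c.name)
            out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("%s failed: %v\n", c.name, err)
            }
            fmt.Println(string(out))
        }
    }
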
	I0930 21:10:51.847761   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:51.860571   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:51.860646   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:51.894863   73900 cri.go:89] found id: ""
	I0930 21:10:51.894896   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.894906   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:51.894914   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:51.894978   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:51.927977   73900 cri.go:89] found id: ""
	I0930 21:10:51.928007   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.928018   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:51.928025   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:51.928083   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:51.962894   73900 cri.go:89] found id: ""
	I0930 21:10:51.962924   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.962933   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:51.962940   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:51.962999   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:51.998453   73900 cri.go:89] found id: ""
	I0930 21:10:51.998482   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.998493   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:51.998500   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:51.998562   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:52.033039   73900 cri.go:89] found id: ""
	I0930 21:10:52.033066   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.033075   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:52.033080   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:52.033139   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:52.067222   73900 cri.go:89] found id: ""
	I0930 21:10:52.067254   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.067267   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:52.067274   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:52.067341   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:52.102414   73900 cri.go:89] found id: ""
	I0930 21:10:52.102439   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.102448   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:52.102453   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:52.102498   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:52.135175   73900 cri.go:89] found id: ""
	I0930 21:10:52.135204   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.135214   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:52.135225   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:52.135239   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:52.185736   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:52.185779   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:52.198756   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:52.198792   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:52.264816   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:52.264847   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:52.264859   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:52.347189   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:52.347229   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:50.569765   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:53.068745   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:50.968885   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:52.970855   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:52.307245   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:54.308516   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:54.887502   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:54.900067   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:54.900153   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:54.939214   73900 cri.go:89] found id: ""
	I0930 21:10:54.939241   73900 logs.go:276] 0 containers: []
	W0930 21:10:54.939249   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:54.939259   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:54.939313   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:54.973451   73900 cri.go:89] found id: ""
	I0930 21:10:54.973475   73900 logs.go:276] 0 containers: []
	W0930 21:10:54.973483   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:54.973488   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:54.973541   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:55.007815   73900 cri.go:89] found id: ""
	I0930 21:10:55.007841   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.007850   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:55.007855   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:55.007914   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:55.040861   73900 cri.go:89] found id: ""
	I0930 21:10:55.040891   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.040899   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:55.040905   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:55.040957   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:55.076053   73900 cri.go:89] found id: ""
	I0930 21:10:55.076086   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.076098   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:55.076111   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:55.076172   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:55.108768   73900 cri.go:89] found id: ""
	I0930 21:10:55.108797   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.108807   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:55.108814   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:55.108879   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:55.155283   73900 cri.go:89] found id: ""
	I0930 21:10:55.155316   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.155331   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:55.155338   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:55.155398   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:55.189370   73900 cri.go:89] found id: ""
	I0930 21:10:55.189399   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.189408   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:55.189416   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:55.189432   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:55.243067   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:55.243101   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:55.257021   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:55.257051   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:55.329381   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:55.329408   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:55.329423   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:55.405691   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:55.405762   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
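
[Editor's sketch] Each diagnostic cycle above is gated on "sudo pgrep -xnf kube-apiserver.*minikube.*": only when no apiserver process is found does the crictl/log-gathering pass run, and the probe repeats a few seconds later. A hypothetical retry wrapper around that same check (the one-minute deadline and the 3-second interval are assumptions; the real test waits much longer):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning returns true when pgrep finds a kube-apiserver process,
    // using the same pattern as the ssh_runner lines in the log.
    func apiserverRunning() bool {
        // pgrep exits non-zero when nothing matches, which surfaces as an error here.
        err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
        return err == nil
    }

    func main() {
        deadline := time.Now().Add(1 * time.Minute) // assumed timeout for the sketch
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("kube-apiserver process found")
                return
            }
            fmt.Println("kube-apiserver not running yet, retrying ...")
            time.Sleep(3 * time.Second) // the log shows roughly 3s between probes
        }
        fmt.Println("gave up waiting for kube-apiserver")
    }
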
	I0930 21:10:55.069901   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:57.568914   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:55.468489   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:57.977733   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:56.806381   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:58.806880   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:57.957380   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:57.971160   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:57.971245   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:58.004401   73900 cri.go:89] found id: ""
	I0930 21:10:58.004446   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.004457   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:58.004465   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:58.004524   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:58.038954   73900 cri.go:89] found id: ""
	I0930 21:10:58.038978   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.038986   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:58.038991   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:58.039036   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:58.072801   73900 cri.go:89] found id: ""
	I0930 21:10:58.072830   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.072842   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:58.072849   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:58.072909   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:58.104908   73900 cri.go:89] found id: ""
	I0930 21:10:58.104936   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.104946   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:58.104953   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:58.105014   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:58.139693   73900 cri.go:89] found id: ""
	I0930 21:10:58.139725   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.139735   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:58.139741   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:58.139795   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:58.174149   73900 cri.go:89] found id: ""
	I0930 21:10:58.174180   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.174192   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:58.174199   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:58.174275   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:58.206067   73900 cri.go:89] found id: ""
	I0930 21:10:58.206094   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.206105   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:58.206112   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:58.206167   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:58.240613   73900 cri.go:89] found id: ""
	I0930 21:10:58.240645   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.240653   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:58.240661   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:58.240674   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:58.306061   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:58.306086   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:58.306100   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:58.386030   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:58.386073   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:58.425526   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:58.425562   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:58.483364   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:58.483409   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:00.998086   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:01.011934   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:01.012015   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:01.047923   73900 cri.go:89] found id: ""
	I0930 21:11:01.047951   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.047960   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:01.047966   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:01.048024   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:01.082126   73900 cri.go:89] found id: ""
	I0930 21:11:01.082159   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.082170   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:01.082176   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:01.082224   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:01.117746   73900 cri.go:89] found id: ""
	I0930 21:11:01.117775   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.117787   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:01.117794   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:01.117853   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:01.153034   73900 cri.go:89] found id: ""
	I0930 21:11:01.153059   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.153067   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:01.153072   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:01.153128   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:01.188102   73900 cri.go:89] found id: ""
	I0930 21:11:01.188125   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.188133   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:01.188139   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:01.188193   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:01.222120   73900 cri.go:89] found id: ""
	I0930 21:11:01.222147   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.222155   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:01.222161   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:01.222215   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:01.258899   73900 cri.go:89] found id: ""
	I0930 21:11:01.258929   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.258941   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:01.258949   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:01.259008   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:01.295473   73900 cri.go:89] found id: ""
	I0930 21:11:01.295504   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.295512   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:01.295521   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:01.295551   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:01.349134   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:01.349181   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:01.363113   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:01.363147   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:01.436589   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:01.436609   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:01.436622   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:01.516384   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:01.516420   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:00.069406   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:02.568203   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:00.468104   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:02.968911   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:00.807318   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:03.307184   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:04.075114   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:04.089300   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:04.089375   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:04.124385   73900 cri.go:89] found id: ""
	I0930 21:11:04.124411   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.124419   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:04.124425   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:04.124491   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:04.158326   73900 cri.go:89] found id: ""
	I0930 21:11:04.158359   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.158367   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:04.158372   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:04.158419   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:04.193477   73900 cri.go:89] found id: ""
	I0930 21:11:04.193507   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.193516   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:04.193521   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:04.193577   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:04.231697   73900 cri.go:89] found id: ""
	I0930 21:11:04.231723   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.231731   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:04.231737   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:04.231805   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:04.265879   73900 cri.go:89] found id: ""
	I0930 21:11:04.265903   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.265910   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:04.265915   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:04.265960   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:04.301382   73900 cri.go:89] found id: ""
	I0930 21:11:04.301421   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.301432   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:04.301440   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:04.301505   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:04.337496   73900 cri.go:89] found id: ""
	I0930 21:11:04.337521   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.337529   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:04.337534   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:04.337584   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:04.372631   73900 cri.go:89] found id: ""
	I0930 21:11:04.372665   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.372677   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:04.372700   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:04.372715   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:04.385279   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:04.385311   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:04.456700   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:04.456721   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:04.456732   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:04.537892   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:04.537933   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:04.574919   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:04.574947   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:07.128733   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:07.142625   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:07.142687   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:07.177450   73900 cri.go:89] found id: ""
	I0930 21:11:07.177475   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.177483   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:07.177488   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:07.177536   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:07.210158   73900 cri.go:89] found id: ""
	I0930 21:11:07.210184   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.210192   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:07.210197   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:07.210256   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:07.242623   73900 cri.go:89] found id: ""
	I0930 21:11:07.242648   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.242656   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:07.242661   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:07.242705   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:07.277779   73900 cri.go:89] found id: ""
	I0930 21:11:07.277810   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.277821   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:07.277827   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:07.277881   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:07.316232   73900 cri.go:89] found id: ""
	I0930 21:11:07.316257   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.316263   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:07.316269   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:07.316326   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:07.360277   73900 cri.go:89] found id: ""
	I0930 21:11:07.360311   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.360322   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:07.360329   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:07.360391   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:07.412146   73900 cri.go:89] found id: ""
	I0930 21:11:07.412171   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.412181   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:07.412187   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:07.412247   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:07.447179   73900 cri.go:89] found id: ""
	I0930 21:11:07.447209   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.447217   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:07.447225   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:07.447235   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:07.496304   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:07.496340   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:07.510332   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:07.510373   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:07.581335   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:07.581375   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:07.581393   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:07.664522   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:07.664558   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:04.568787   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:07.069201   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:09.070583   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:05.468251   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:07.970913   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:05.308084   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:07.807712   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:10.201145   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:10.213605   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:10.213663   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:10.247875   73900 cri.go:89] found id: ""
	I0930 21:11:10.247904   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.247913   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:10.247918   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:10.247966   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:10.280855   73900 cri.go:89] found id: ""
	I0930 21:11:10.280889   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.280900   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:10.280907   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:10.280967   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:10.315638   73900 cri.go:89] found id: ""
	I0930 21:11:10.315661   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.315669   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:10.315675   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:10.315722   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:10.357059   73900 cri.go:89] found id: ""
	I0930 21:11:10.357086   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.357094   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:10.357100   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:10.357154   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:10.389969   73900 cri.go:89] found id: ""
	I0930 21:11:10.389997   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.390004   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:10.390009   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:10.390060   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:10.424424   73900 cri.go:89] found id: ""
	I0930 21:11:10.424454   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.424463   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:10.424469   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:10.424533   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:10.457608   73900 cri.go:89] found id: ""
	I0930 21:11:10.457638   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.457650   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:10.457657   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:10.457712   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:10.490215   73900 cri.go:89] found id: ""
	I0930 21:11:10.490244   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.490253   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:10.490263   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:10.490278   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:10.554787   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:10.554814   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:10.554829   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:10.632428   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:10.632464   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:10.671018   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:10.671054   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:10.721187   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:10.721228   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:11.568643   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:13.568765   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:10.469296   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:12.968274   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:10.307487   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:12.307960   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:14.808087   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:13.234687   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:13.250680   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:13.250778   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:13.312468   73900 cri.go:89] found id: ""
	I0930 21:11:13.312499   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.312509   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:13.312516   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:13.312578   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:13.367051   73900 cri.go:89] found id: ""
	I0930 21:11:13.367073   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.367084   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:13.367091   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:13.367149   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:13.403019   73900 cri.go:89] found id: ""
	I0930 21:11:13.403055   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.403066   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:13.403074   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:13.403135   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:13.436942   73900 cri.go:89] found id: ""
	I0930 21:11:13.436967   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.436975   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:13.436981   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:13.437047   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:13.470491   73900 cri.go:89] found id: ""
	I0930 21:11:13.470515   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.470523   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:13.470528   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:13.470619   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:13.504078   73900 cri.go:89] found id: ""
	I0930 21:11:13.504112   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.504121   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:13.504127   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:13.504201   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:13.536245   73900 cri.go:89] found id: ""
	I0930 21:11:13.536271   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.536292   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:13.536297   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:13.536357   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:13.570794   73900 cri.go:89] found id: ""
	I0930 21:11:13.570817   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.570827   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:13.570836   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:13.570850   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:13.647919   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:13.647941   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:13.647956   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:13.726113   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:13.726150   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:13.767916   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:13.767942   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:13.826362   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:13.826402   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:16.341252   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:16.354259   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:16.354344   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:16.388627   73900 cri.go:89] found id: ""
	I0930 21:11:16.388650   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.388658   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:16.388663   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:16.388714   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:16.424848   73900 cri.go:89] found id: ""
	I0930 21:11:16.424871   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.424878   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:16.424883   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:16.424941   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:16.460604   73900 cri.go:89] found id: ""
	I0930 21:11:16.460626   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.460635   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:16.460640   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:16.460688   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:16.495908   73900 cri.go:89] found id: ""
	I0930 21:11:16.495932   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.495940   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:16.495946   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:16.496000   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:16.531758   73900 cri.go:89] found id: ""
	I0930 21:11:16.531782   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.531790   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:16.531796   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:16.531853   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:16.566756   73900 cri.go:89] found id: ""
	I0930 21:11:16.566782   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.566792   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:16.566799   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:16.566864   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:16.601978   73900 cri.go:89] found id: ""
	I0930 21:11:16.602005   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.602012   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:16.602022   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:16.602081   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:16.636009   73900 cri.go:89] found id: ""
	I0930 21:11:16.636044   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.636056   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:16.636066   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:16.636079   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:16.688750   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:16.688786   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:16.702364   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:16.702404   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:16.767119   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:16.767175   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:16.767188   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:16.842052   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:16.842095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:15.571440   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:18.068441   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:15.469030   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:17.970779   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:17.307424   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:19.807193   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:19.380570   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:19.394687   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:19.394816   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:19.427087   73900 cri.go:89] found id: ""
	I0930 21:11:19.427116   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.427124   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:19.427129   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:19.427178   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:19.461074   73900 cri.go:89] found id: ""
	I0930 21:11:19.461098   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.461108   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:19.461122   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:19.461183   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:19.494850   73900 cri.go:89] found id: ""
	I0930 21:11:19.494872   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.494880   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:19.494885   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:19.494943   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:19.533448   73900 cri.go:89] found id: ""
	I0930 21:11:19.533480   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.533493   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:19.533500   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:19.533562   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:19.569250   73900 cri.go:89] found id: ""
	I0930 21:11:19.569280   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.569291   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:19.569298   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:19.569383   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:19.603182   73900 cri.go:89] found id: ""
	I0930 21:11:19.603206   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.603213   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:19.603219   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:19.603268   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:19.637411   73900 cri.go:89] found id: ""
	I0930 21:11:19.637433   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.637441   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:19.637447   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:19.637500   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:19.672789   73900 cri.go:89] found id: ""
	I0930 21:11:19.672821   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.672831   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:19.672841   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:19.672854   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:19.755002   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:19.755039   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:19.796499   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:19.796536   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:19.847235   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:19.847272   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:19.861007   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:19.861032   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:19.931214   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:22.431506   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:22.446129   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:22.446199   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:22.484093   73900 cri.go:89] found id: ""
	I0930 21:11:22.484119   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.484126   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:22.484132   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:22.484183   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:22.516949   73900 cri.go:89] found id: ""
	I0930 21:11:22.516986   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.516994   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:22.517001   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:22.517056   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:22.550848   73900 cri.go:89] found id: ""
	I0930 21:11:22.550883   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.550898   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:22.550906   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:22.550966   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:22.586459   73900 cri.go:89] found id: ""
	I0930 21:11:22.586490   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.586498   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:22.586505   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:22.586627   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:22.620538   73900 cri.go:89] found id: ""
	I0930 21:11:22.620566   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.620578   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:22.620586   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:22.620651   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:22.658256   73900 cri.go:89] found id: ""
	I0930 21:11:22.658279   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.658287   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:22.658292   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:22.658352   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:22.690316   73900 cri.go:89] found id: ""
	I0930 21:11:22.690349   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.690365   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:22.690371   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:22.690431   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:22.724234   73900 cri.go:89] found id: ""
	I0930 21:11:22.724264   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.724275   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:22.724285   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:22.724299   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:20.570198   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:23.072974   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:20.468122   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:22.968686   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:22.307398   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:24.806972   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:22.777460   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:22.777503   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:22.790850   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:22.790879   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:22.866058   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:22.866079   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:22.866095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:22.947447   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:22.947488   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:25.486733   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:25.499906   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:25.499976   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:25.533819   73900 cri.go:89] found id: ""
	I0930 21:11:25.533842   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.533850   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:25.533857   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:25.533906   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:25.568037   73900 cri.go:89] found id: ""
	I0930 21:11:25.568059   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.568066   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:25.568071   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:25.568129   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:25.601784   73900 cri.go:89] found id: ""
	I0930 21:11:25.601811   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.601819   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:25.601824   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:25.601876   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:25.638048   73900 cri.go:89] found id: ""
	I0930 21:11:25.638070   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.638078   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:25.638084   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:25.638140   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:25.669946   73900 cri.go:89] found id: ""
	I0930 21:11:25.669968   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.669976   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:25.669981   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:25.670028   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:25.701928   73900 cri.go:89] found id: ""
	I0930 21:11:25.701953   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.701961   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:25.701967   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:25.702025   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:25.744295   73900 cri.go:89] found id: ""
	I0930 21:11:25.744327   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.744335   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:25.744341   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:25.744398   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:25.780175   73900 cri.go:89] found id: ""
	I0930 21:11:25.780205   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.780213   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:25.780221   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:25.780232   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:25.828774   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:25.828812   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:25.842624   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:25.842649   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:25.916408   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:25.916451   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:25.916469   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:25.997896   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:25.997932   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:25.570148   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:28.068628   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:25.467356   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:27.467782   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:29.467936   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:27.306939   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:29.807156   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:28.540994   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:28.553841   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:28.553904   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:28.588718   73900 cri.go:89] found id: ""
	I0930 21:11:28.588745   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.588754   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:28.588763   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:28.588809   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:28.636210   73900 cri.go:89] found id: ""
	I0930 21:11:28.636237   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.636245   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:28.636250   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:28.636312   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:28.668714   73900 cri.go:89] found id: ""
	I0930 21:11:28.668743   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.668751   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:28.668757   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:28.668804   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:28.700413   73900 cri.go:89] found id: ""
	I0930 21:11:28.700449   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.700462   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:28.700469   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:28.700522   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:28.733409   73900 cri.go:89] found id: ""
	I0930 21:11:28.733433   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.733441   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:28.733446   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:28.733494   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:28.766917   73900 cri.go:89] found id: ""
	I0930 21:11:28.766957   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.766970   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:28.766979   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:28.767046   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:28.801759   73900 cri.go:89] found id: ""
	I0930 21:11:28.801788   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.801798   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:28.801805   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:28.801851   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:28.840724   73900 cri.go:89] found id: ""
	I0930 21:11:28.840761   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.840770   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:28.840790   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:28.840805   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:28.854426   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:28.854465   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:28.926650   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:28.926675   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:28.926690   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:29.005513   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:29.005569   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:29.047077   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:29.047102   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:31.603193   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:31.615563   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:31.615631   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:31.647656   73900 cri.go:89] found id: ""
	I0930 21:11:31.647685   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.647693   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:31.647699   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:31.647748   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:31.680004   73900 cri.go:89] found id: ""
	I0930 21:11:31.680037   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.680048   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:31.680056   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:31.680120   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:31.712562   73900 cri.go:89] found id: ""
	I0930 21:11:31.712588   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.712596   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:31.712602   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:31.712650   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:31.747692   73900 cri.go:89] found id: ""
	I0930 21:11:31.747724   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.747732   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:31.747738   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:31.747803   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:31.781441   73900 cri.go:89] found id: ""
	I0930 21:11:31.781464   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.781472   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:31.781478   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:31.781532   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:31.822227   73900 cri.go:89] found id: ""
	I0930 21:11:31.822252   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.822259   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:31.822265   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:31.822322   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:31.856531   73900 cri.go:89] found id: ""
	I0930 21:11:31.856555   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.856563   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:31.856568   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:31.856631   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:31.894562   73900 cri.go:89] found id: ""
	I0930 21:11:31.894585   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.894593   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:31.894602   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:31.894618   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:31.946233   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:31.946271   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:31.960713   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:31.960744   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:32.036479   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:32.036497   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:32.036509   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:32.111442   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:32.111477   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:30.068975   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:32.069794   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:31.468374   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:33.468986   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:31.809169   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:34.307372   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
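The interleaved pod_ready lines belong to three other clusters being exercised in parallel (processes 73256, 73375, and 73707), each polling a metrics-server pod that never reports Ready. A hedged sketch of inspecting the same condition by hand, reusing a pod name from the log and a placeholder kubeconfig context:

    # context name is a placeholder; the pod name is taken from the log above
    kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-hkp9m
    kubectl --context <profile> -n kube-system describe pod metrics-server-6867b74b74-hkp9m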
	I0930 21:11:34.651545   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:34.664058   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:34.664121   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:34.697506   73900 cri.go:89] found id: ""
	I0930 21:11:34.697530   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.697539   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:34.697545   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:34.697599   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:34.730297   73900 cri.go:89] found id: ""
	I0930 21:11:34.730326   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.730334   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:34.730339   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:34.730390   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:34.762251   73900 cri.go:89] found id: ""
	I0930 21:11:34.762278   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.762286   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:34.762291   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:34.762358   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:34.803028   73900 cri.go:89] found id: ""
	I0930 21:11:34.803058   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.803068   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:34.803074   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:34.803122   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:34.840063   73900 cri.go:89] found id: ""
	I0930 21:11:34.840097   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.840110   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:34.840118   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:34.840192   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:34.878641   73900 cri.go:89] found id: ""
	I0930 21:11:34.878675   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.878686   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:34.878693   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:34.878745   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:34.910799   73900 cri.go:89] found id: ""
	I0930 21:11:34.910823   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.910830   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:34.910837   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:34.910899   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:34.947748   73900 cri.go:89] found id: ""
	I0930 21:11:34.947782   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.947795   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:34.947806   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:34.947821   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:35.026490   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:35.026514   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:35.026529   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:35.115504   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:35.115559   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:35.158629   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:35.158659   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:35.211011   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:35.211052   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:37.726260   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:37.739137   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:37.739222   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:34.568166   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:36.569720   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:39.069371   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:35.968574   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:38.467872   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:36.807057   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:38.807376   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:37.779980   73900 cri.go:89] found id: ""
	I0930 21:11:37.780009   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.780018   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:37.780024   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:37.780076   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:37.813936   73900 cri.go:89] found id: ""
	I0930 21:11:37.813961   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.813969   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:37.813975   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:37.814021   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:37.851150   73900 cri.go:89] found id: ""
	I0930 21:11:37.851176   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.851186   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:37.851193   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:37.851256   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:37.891855   73900 cri.go:89] found id: ""
	I0930 21:11:37.891881   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.891889   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:37.891894   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:37.891943   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:37.929234   73900 cri.go:89] found id: ""
	I0930 21:11:37.929269   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.929281   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:37.929288   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:37.929359   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:37.962350   73900 cri.go:89] found id: ""
	I0930 21:11:37.962378   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.962386   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:37.962391   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:37.962441   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:37.996727   73900 cri.go:89] found id: ""
	I0930 21:11:37.996752   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.996760   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:37.996765   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:37.996819   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:38.029959   73900 cri.go:89] found id: ""
	I0930 21:11:38.029991   73900 logs.go:276] 0 containers: []
	W0930 21:11:38.029999   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:38.030008   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:38.030019   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:38.079836   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:38.079875   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:38.093208   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:38.093236   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:38.168839   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:38.168862   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:38.168873   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:38.244747   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:38.244783   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:40.788841   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:40.802419   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:40.802491   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:40.837138   73900 cri.go:89] found id: ""
	I0930 21:11:40.837175   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.837186   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:40.837193   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:40.837255   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:40.870947   73900 cri.go:89] found id: ""
	I0930 21:11:40.870977   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.870987   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:40.870993   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:40.871040   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:40.905004   73900 cri.go:89] found id: ""
	I0930 21:11:40.905033   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.905046   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:40.905053   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:40.905104   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:40.936909   73900 cri.go:89] found id: ""
	I0930 21:11:40.936937   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.936945   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:40.936952   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:40.937015   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:40.972601   73900 cri.go:89] found id: ""
	I0930 21:11:40.972630   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.972641   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:40.972646   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:40.972704   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:41.007539   73900 cri.go:89] found id: ""
	I0930 21:11:41.007583   73900 logs.go:276] 0 containers: []
	W0930 21:11:41.007594   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:41.007602   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:41.007661   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:41.042049   73900 cri.go:89] found id: ""
	I0930 21:11:41.042075   73900 logs.go:276] 0 containers: []
	W0930 21:11:41.042084   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:41.042091   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:41.042153   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:41.075313   73900 cri.go:89] found id: ""
	I0930 21:11:41.075398   73900 logs.go:276] 0 containers: []
	W0930 21:11:41.075414   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:41.075424   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:41.075440   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:41.128683   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:41.128726   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:41.142533   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:41.142560   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:41.210149   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:41.210176   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:41.210191   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:41.286547   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:41.286590   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:41.070042   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:43.570819   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:40.969912   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:43.468434   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:40.808294   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:43.307628   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:43.828902   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:43.842047   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:43.842127   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:43.876147   73900 cri.go:89] found id: ""
	I0930 21:11:43.876177   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.876187   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:43.876194   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:43.876287   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:43.916351   73900 cri.go:89] found id: ""
	I0930 21:11:43.916383   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.916394   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:43.916404   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:43.916457   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:43.948853   73900 cri.go:89] found id: ""
	I0930 21:11:43.948883   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.948894   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:43.948900   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:43.948967   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:43.983525   73900 cri.go:89] found id: ""
	I0930 21:11:43.983577   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.983589   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:43.983597   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:43.983656   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:44.021560   73900 cri.go:89] found id: ""
	I0930 21:11:44.021594   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.021606   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:44.021614   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:44.021684   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:44.057307   73900 cri.go:89] found id: ""
	I0930 21:11:44.057342   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.057353   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:44.057361   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:44.057418   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:44.091120   73900 cri.go:89] found id: ""
	I0930 21:11:44.091145   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.091155   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:44.091162   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:44.091223   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:44.125781   73900 cri.go:89] found id: ""
	I0930 21:11:44.125808   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.125817   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:44.125827   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:44.125842   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:44.138699   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:44.138726   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:44.208976   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:44.209009   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:44.209026   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:44.285552   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:44.285593   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:44.323412   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:44.323449   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:46.875210   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:46.888532   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:46.888596   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:46.921260   73900 cri.go:89] found id: ""
	I0930 21:11:46.921285   73900 logs.go:276] 0 containers: []
	W0930 21:11:46.921293   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:46.921299   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:46.921357   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:46.954645   73900 cri.go:89] found id: ""
	I0930 21:11:46.954675   73900 logs.go:276] 0 containers: []
	W0930 21:11:46.954683   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:46.954688   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:46.954749   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:46.988424   73900 cri.go:89] found id: ""
	I0930 21:11:46.988457   73900 logs.go:276] 0 containers: []
	W0930 21:11:46.988468   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:46.988475   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:46.988535   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:47.022635   73900 cri.go:89] found id: ""
	I0930 21:11:47.022664   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.022675   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:47.022682   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:47.022744   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:47.056497   73900 cri.go:89] found id: ""
	I0930 21:11:47.056523   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.056530   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:47.056536   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:47.056595   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:47.094983   73900 cri.go:89] found id: ""
	I0930 21:11:47.095011   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.095021   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:47.095028   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:47.095097   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:47.147567   73900 cri.go:89] found id: ""
	I0930 21:11:47.147595   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.147606   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:47.147613   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:47.147692   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:47.184878   73900 cri.go:89] found id: ""
	I0930 21:11:47.184908   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.184919   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:47.184930   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:47.184943   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:47.258581   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:47.258615   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:47.303068   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:47.303100   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:47.358749   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:47.358789   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:47.372492   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:47.372531   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:47.443984   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:46.069421   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:48.569013   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:45.968422   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:47.968876   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:45.808341   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:48.306627   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:49.944644   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:49.958045   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:49.958124   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:49.993053   73900 cri.go:89] found id: ""
	I0930 21:11:49.993088   73900 logs.go:276] 0 containers: []
	W0930 21:11:49.993100   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:49.993107   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:49.993168   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:50.026171   73900 cri.go:89] found id: ""
	I0930 21:11:50.026197   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.026205   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:50.026210   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:50.026269   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:50.060462   73900 cri.go:89] found id: ""
	I0930 21:11:50.060492   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.060502   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:50.060509   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:50.060567   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:50.095385   73900 cri.go:89] found id: ""
	I0930 21:11:50.095414   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.095425   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:50.095432   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:50.095507   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:50.127275   73900 cri.go:89] found id: ""
	I0930 21:11:50.127300   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.127308   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:50.127318   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:50.127378   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:50.159810   73900 cri.go:89] found id: ""
	I0930 21:11:50.159836   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.159845   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:50.159850   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:50.159906   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:50.191651   73900 cri.go:89] found id: ""
	I0930 21:11:50.191684   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.191695   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:50.191702   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:50.191774   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:50.225772   73900 cri.go:89] found id: ""
	I0930 21:11:50.225799   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.225809   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:50.225819   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:50.225837   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:50.310189   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:50.310223   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:50.348934   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:50.348965   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:50.400666   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:50.400703   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:50.415810   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:50.415843   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:50.483773   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:51.069928   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:53.070065   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:50.469516   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:52.968367   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:54.968624   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:50.307903   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:52.807610   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:52.984701   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:52.997669   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:52.997745   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:53.034012   73900 cri.go:89] found id: ""
	I0930 21:11:53.034044   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.034055   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:53.034063   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:53.034121   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:53.068192   73900 cri.go:89] found id: ""
	I0930 21:11:53.068215   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.068222   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:53.068228   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:53.068285   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:53.104683   73900 cri.go:89] found id: ""
	I0930 21:11:53.104710   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.104719   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:53.104724   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:53.104778   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:53.138713   73900 cri.go:89] found id: ""
	I0930 21:11:53.138745   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.138753   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:53.138759   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:53.138814   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:53.173955   73900 cri.go:89] found id: ""
	I0930 21:11:53.173982   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.173994   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:53.174001   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:53.174060   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:53.205942   73900 cri.go:89] found id: ""
	I0930 21:11:53.205970   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.205980   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:53.205987   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:53.206052   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:53.241739   73900 cri.go:89] found id: ""
	I0930 21:11:53.241767   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.241776   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:53.241782   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:53.241832   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:53.275328   73900 cri.go:89] found id: ""
	I0930 21:11:53.275363   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.275372   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:53.275381   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:53.275397   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:53.313732   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:53.313761   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:53.364974   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:53.365011   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:53.377970   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:53.377999   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:53.445341   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:53.445370   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:53.445388   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:56.025958   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:56.038367   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:56.038434   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:56.074721   73900 cri.go:89] found id: ""
	I0930 21:11:56.074756   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.074767   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:56.074781   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:56.074846   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:56.111491   73900 cri.go:89] found id: ""
	I0930 21:11:56.111525   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.111550   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:56.111572   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:56.111626   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:56.145660   73900 cri.go:89] found id: ""
	I0930 21:11:56.145690   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.145701   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:56.145708   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:56.145769   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:56.180865   73900 cri.go:89] found id: ""
	I0930 21:11:56.180891   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.180901   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:56.180908   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:56.180971   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:56.213681   73900 cri.go:89] found id: ""
	I0930 21:11:56.213707   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.213716   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:56.213721   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:56.213772   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:56.246683   73900 cri.go:89] found id: ""
	I0930 21:11:56.246711   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.246719   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:56.246724   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:56.246774   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:56.279651   73900 cri.go:89] found id: ""
	I0930 21:11:56.279679   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.279687   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:56.279692   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:56.279746   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:56.316701   73900 cri.go:89] found id: ""
	I0930 21:11:56.316727   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.316735   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:56.316743   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:56.316753   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:56.329879   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:56.329905   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:56.399919   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:56.399949   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:56.399964   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:56.480200   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:56.480237   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:56.517755   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:56.517782   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:55.568782   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:58.068718   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:57.468492   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:59.968123   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:55.307809   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:57.308095   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:59.807355   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:59.070677   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:59.085884   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:59.085956   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:59.119580   73900 cri.go:89] found id: ""
	I0930 21:11:59.119606   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.119615   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:59.119621   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:59.119667   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:59.152087   73900 cri.go:89] found id: ""
	I0930 21:11:59.152111   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.152120   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:59.152127   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:59.152172   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:59.186177   73900 cri.go:89] found id: ""
	I0930 21:11:59.186205   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.186213   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:59.186220   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:59.186276   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:59.218800   73900 cri.go:89] found id: ""
	I0930 21:11:59.218821   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.218829   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:59.218835   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:59.218893   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:59.254335   73900 cri.go:89] found id: ""
	I0930 21:11:59.254361   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.254372   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:59.254378   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:59.254432   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:59.292406   73900 cri.go:89] found id: ""
	I0930 21:11:59.292441   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.292453   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:59.292460   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:59.292522   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:59.333352   73900 cri.go:89] found id: ""
	I0930 21:11:59.333388   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.333399   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:59.333406   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:59.333481   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:59.377031   73900 cri.go:89] found id: ""
	I0930 21:11:59.377056   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.377064   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:59.377072   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:59.377084   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:59.392626   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:59.392655   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:59.473714   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:59.473741   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:59.473754   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:59.548895   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:59.548931   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:59.589007   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:59.589039   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
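Each pass gathers the same five sections: kubelet and CRI-O via journalctl, dmesg, kubectl describe nodes, and container status via crictl. Outside the test harness, roughly the same bundle can be collected with minikube's own log command; a sketch, with the profile name again a placeholder and an arbitrary output filename:

    # dump the kubelet, dmesg, describe-nodes, CRI-O and container-status sections to a file
    minikube logs -p <profile> --file=old-k8s-version.log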
	I0930 21:12:02.139243   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:02.152335   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:02.152415   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:02.186942   73900 cri.go:89] found id: ""
	I0930 21:12:02.186980   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.186991   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:02.186999   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:02.187061   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:02.219738   73900 cri.go:89] found id: ""
	I0930 21:12:02.219759   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.219768   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:02.219773   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:02.219820   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:02.253667   73900 cri.go:89] found id: ""
	I0930 21:12:02.253698   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.253707   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:02.253712   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:02.253760   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:02.290078   73900 cri.go:89] found id: ""
	I0930 21:12:02.290105   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.290115   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:02.290122   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:02.290182   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:02.326408   73900 cri.go:89] found id: ""
	I0930 21:12:02.326436   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.326448   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:02.326455   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:02.326509   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:02.360608   73900 cri.go:89] found id: ""
	I0930 21:12:02.360641   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.360649   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:02.360655   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:02.360714   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:02.396140   73900 cri.go:89] found id: ""
	I0930 21:12:02.396166   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.396176   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:02.396182   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:02.396236   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:02.429905   73900 cri.go:89] found id: ""
	I0930 21:12:02.429947   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.429958   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:02.429968   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:02.429986   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:02.506600   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:02.506645   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:02.549325   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:02.549354   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:02.603614   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:02.603659   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:02.618832   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:02.618859   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:02.692491   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:00.070569   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:02.569436   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:01.968240   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:04.468583   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:02.306973   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:04.308182   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:05.193131   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:05.206133   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:05.206192   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:05.238403   73900 cri.go:89] found id: ""
	I0930 21:12:05.238431   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.238439   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:05.238447   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:05.238523   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:05.271261   73900 cri.go:89] found id: ""
	I0930 21:12:05.271290   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.271303   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:05.271310   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:05.271378   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:05.307718   73900 cri.go:89] found id: ""
	I0930 21:12:05.307749   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.307760   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:05.307767   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:05.307832   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:05.341336   73900 cri.go:89] found id: ""
	I0930 21:12:05.341379   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.341390   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:05.341398   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:05.341461   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:05.374998   73900 cri.go:89] found id: ""
	I0930 21:12:05.375024   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.375032   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:05.375037   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:05.375085   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:05.410133   73900 cri.go:89] found id: ""
	I0930 21:12:05.410163   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.410174   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:05.410182   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:05.410248   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:05.446197   73900 cri.go:89] found id: ""
	I0930 21:12:05.446227   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.446238   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:05.446246   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:05.446305   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:05.480638   73900 cri.go:89] found id: ""
	I0930 21:12:05.480667   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.480683   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:05.480691   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:05.480702   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:05.532473   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:05.532512   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:05.547068   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:05.547096   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:05.621444   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:05.621472   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:05.621487   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:05.707712   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:05.707767   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:05.068363   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:07.069531   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:06.969695   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:06.969727   73375 pod_ready.go:82] duration metric: took 4m0.008001407s for pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace to be "Ready" ...
	E0930 21:12:06.969736   73375 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0930 21:12:06.969743   73375 pod_ready.go:39] duration metric: took 4m4.053054405s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:12:06.969757   73375 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:12:06.969781   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:06.969835   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:07.024708   73375 cri.go:89] found id: "249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:07.024730   73375 cri.go:89] found id: ""
	I0930 21:12:07.024737   73375 logs.go:276] 1 containers: [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122]
	I0930 21:12:07.024805   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.029375   73375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:07.029439   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:07.063656   73375 cri.go:89] found id: "e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:07.063684   73375 cri.go:89] found id: ""
	I0930 21:12:07.063695   73375 logs.go:276] 1 containers: [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c]
	I0930 21:12:07.063754   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.068071   73375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:07.068126   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:07.102636   73375 cri.go:89] found id: "d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:07.102665   73375 cri.go:89] found id: ""
	I0930 21:12:07.102675   73375 logs.go:276] 1 containers: [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7]
	I0930 21:12:07.102733   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.106711   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:07.106791   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:07.142676   73375 cri.go:89] found id: "438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:07.142698   73375 cri.go:89] found id: ""
	I0930 21:12:07.142708   73375 logs.go:276] 1 containers: [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c]
	I0930 21:12:07.142766   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.146979   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:07.147041   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:07.189192   73375 cri.go:89] found id: "a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:07.189223   73375 cri.go:89] found id: ""
	I0930 21:12:07.189232   73375 logs.go:276] 1 containers: [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f]
	I0930 21:12:07.189283   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.193408   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:07.193484   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:07.230538   73375 cri.go:89] found id: "1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:07.230562   73375 cri.go:89] found id: ""
	I0930 21:12:07.230571   73375 logs.go:276] 1 containers: [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf]
	I0930 21:12:07.230630   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.235482   73375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:07.235573   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:07.274180   73375 cri.go:89] found id: ""
	I0930 21:12:07.274215   73375 logs.go:276] 0 containers: []
	W0930 21:12:07.274226   73375 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:07.274233   73375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:07.274312   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:07.312851   73375 cri.go:89] found id: "6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:07.312876   73375 cri.go:89] found id: "298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:07.312882   73375 cri.go:89] found id: ""
	I0930 21:12:07.312890   73375 logs.go:276] 2 containers: [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e]
	I0930 21:12:07.312947   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.317386   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.321912   73375 logs.go:123] Gathering logs for kube-proxy [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f] ...
	I0930 21:12:07.321940   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:07.361674   73375 logs.go:123] Gathering logs for storage-provisioner [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55] ...
	I0930 21:12:07.361701   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:07.398555   73375 logs.go:123] Gathering logs for storage-provisioner [298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e] ...
	I0930 21:12:07.398615   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:07.432511   73375 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:07.432540   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:07.919639   73375 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:07.919678   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:07.935038   73375 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:07.935067   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:08.059404   73375 logs.go:123] Gathering logs for kube-apiserver [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122] ...
	I0930 21:12:08.059435   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:08.114569   73375 logs.go:123] Gathering logs for kube-scheduler [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c] ...
	I0930 21:12:08.114605   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:08.153409   73375 logs.go:123] Gathering logs for container status ...
	I0930 21:12:08.153447   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:08.193155   73375 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:08.193187   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:08.260774   73375 logs.go:123] Gathering logs for etcd [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c] ...
	I0930 21:12:08.260814   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:08.351488   73375 logs.go:123] Gathering logs for coredns [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7] ...
	I0930 21:12:08.351519   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:08.387971   73375 logs.go:123] Gathering logs for kube-controller-manager [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf] ...
	I0930 21:12:08.388012   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:06.805971   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:08.807886   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:08.248038   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:08.261409   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:08.261485   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:08.305564   73900 cri.go:89] found id: ""
	I0930 21:12:08.305591   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.305601   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:08.305610   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:08.305669   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:08.347816   73900 cri.go:89] found id: ""
	I0930 21:12:08.347844   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.347852   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:08.347858   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:08.347927   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:08.381662   73900 cri.go:89] found id: ""
	I0930 21:12:08.381695   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.381705   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:08.381712   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:08.381829   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:08.427366   73900 cri.go:89] found id: ""
	I0930 21:12:08.427396   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.427406   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:08.427413   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:08.427476   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:08.463419   73900 cri.go:89] found id: ""
	I0930 21:12:08.463443   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.463451   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:08.463457   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:08.463508   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:08.496999   73900 cri.go:89] found id: ""
	I0930 21:12:08.497023   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.497033   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:08.497040   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:08.497098   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:08.530410   73900 cri.go:89] found id: ""
	I0930 21:12:08.530434   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.530442   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:08.530447   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:08.530495   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:08.563191   73900 cri.go:89] found id: ""
	I0930 21:12:08.563224   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.563235   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:08.563244   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:08.563258   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:08.640305   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:08.640341   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:08.676404   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:08.676431   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:08.729676   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:08.729736   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:08.743282   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:08.743310   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:08.811334   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:11.311643   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:11.329153   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:11.329229   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:11.369804   73900 cri.go:89] found id: ""
	I0930 21:12:11.369829   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.369838   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:11.369843   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:11.369896   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:11.408530   73900 cri.go:89] found id: ""
	I0930 21:12:11.408558   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.408569   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:11.408580   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:11.408663   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:11.446123   73900 cri.go:89] found id: ""
	I0930 21:12:11.446147   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.446155   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:11.446160   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:11.446206   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:11.484019   73900 cri.go:89] found id: ""
	I0930 21:12:11.484044   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.484052   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:11.484057   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:11.484118   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:11.521934   73900 cri.go:89] found id: ""
	I0930 21:12:11.521961   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.521971   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:11.521979   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:11.522042   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:11.561253   73900 cri.go:89] found id: ""
	I0930 21:12:11.561283   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.561293   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:11.561299   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:11.561352   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:11.602610   73900 cri.go:89] found id: ""
	I0930 21:12:11.602637   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.602648   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:11.602655   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:11.602760   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:11.637146   73900 cri.go:89] found id: ""
	I0930 21:12:11.637174   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.637185   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:11.637194   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:11.637208   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:11.707627   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:11.707651   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:11.707668   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:11.786047   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:11.786091   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:11.827128   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:11.827157   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:11.885504   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:11.885542   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:09.569584   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:11.570031   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:14.068184   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:10.950921   73375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:10.967834   73375 api_server.go:72] duration metric: took 4m15.348038807s to wait for apiserver process to appear ...
	I0930 21:12:10.967876   73375 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:12:10.967922   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:10.967990   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:11.006632   73375 cri.go:89] found id: "249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:11.006667   73375 cri.go:89] found id: ""
	I0930 21:12:11.006677   73375 logs.go:276] 1 containers: [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122]
	I0930 21:12:11.006738   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.010931   73375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:11.010994   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:11.045855   73375 cri.go:89] found id: "e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:11.045882   73375 cri.go:89] found id: ""
	I0930 21:12:11.045893   73375 logs.go:276] 1 containers: [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c]
	I0930 21:12:11.045953   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.050058   73375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:11.050134   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:11.090954   73375 cri.go:89] found id: "d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:11.090980   73375 cri.go:89] found id: ""
	I0930 21:12:11.090990   73375 logs.go:276] 1 containers: [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7]
	I0930 21:12:11.091041   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.095073   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:11.095150   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:11.137413   73375 cri.go:89] found id: "438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:11.137448   73375 cri.go:89] found id: ""
	I0930 21:12:11.137458   73375 logs.go:276] 1 containers: [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c]
	I0930 21:12:11.137516   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.141559   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:11.141638   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:11.176921   73375 cri.go:89] found id: "a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:11.176952   73375 cri.go:89] found id: ""
	I0930 21:12:11.176961   73375 logs.go:276] 1 containers: [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f]
	I0930 21:12:11.177010   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.181095   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:11.181158   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:11.215117   73375 cri.go:89] found id: "1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:11.215141   73375 cri.go:89] found id: ""
	I0930 21:12:11.215148   73375 logs.go:276] 1 containers: [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf]
	I0930 21:12:11.215195   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.218947   73375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:11.219003   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:11.253901   73375 cri.go:89] found id: ""
	I0930 21:12:11.253937   73375 logs.go:276] 0 containers: []
	W0930 21:12:11.253948   73375 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:11.253955   73375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:11.254010   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:11.293408   73375 cri.go:89] found id: "6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:11.293434   73375 cri.go:89] found id: "298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:11.293440   73375 cri.go:89] found id: ""
	I0930 21:12:11.293448   73375 logs.go:276] 2 containers: [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e]
	I0930 21:12:11.293562   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.297829   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.302572   73375 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:11.302596   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:11.378000   73375 logs.go:123] Gathering logs for coredns [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7] ...
	I0930 21:12:11.378037   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:11.415382   73375 logs.go:123] Gathering logs for kube-proxy [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f] ...
	I0930 21:12:11.415414   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:11.453703   73375 logs.go:123] Gathering logs for kube-controller-manager [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf] ...
	I0930 21:12:11.453729   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:11.517749   73375 logs.go:123] Gathering logs for storage-provisioner [298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e] ...
	I0930 21:12:11.517780   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:11.556543   73375 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:11.556576   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:12.023270   73375 logs.go:123] Gathering logs for container status ...
	I0930 21:12:12.023310   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:12.071138   73375 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:12.071170   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:12.086915   73375 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:12.086944   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:12.200046   73375 logs.go:123] Gathering logs for kube-apiserver [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122] ...
	I0930 21:12:12.200077   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:12.241447   73375 logs.go:123] Gathering logs for etcd [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c] ...
	I0930 21:12:12.241475   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:12.296574   73375 logs.go:123] Gathering logs for kube-scheduler [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c] ...
	I0930 21:12:12.296607   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:12.341982   73375 logs.go:123] Gathering logs for storage-provisioner [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55] ...
	I0930 21:12:12.342009   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:14.877590   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:12:14.882913   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 200:
	ok
	I0930 21:12:14.884088   73375 api_server.go:141] control plane version: v1.31.1
	I0930 21:12:14.884106   73375 api_server.go:131] duration metric: took 3.916223308s to wait for apiserver health ...
	I0930 21:12:14.884113   73375 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:12:14.884134   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:14.884185   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:14.926932   73375 cri.go:89] found id: "249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:14.926952   73375 cri.go:89] found id: ""
	I0930 21:12:14.926960   73375 logs.go:276] 1 containers: [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122]
	I0930 21:12:14.927003   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:14.931044   73375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:14.931106   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:14.967622   73375 cri.go:89] found id: "e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:14.967645   73375 cri.go:89] found id: ""
	I0930 21:12:14.967652   73375 logs.go:276] 1 containers: [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c]
	I0930 21:12:14.967698   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:14.972152   73375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:14.972221   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:11.307501   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:13.307687   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:14.400848   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:14.413794   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:14.413882   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:14.449799   73900 cri.go:89] found id: ""
	I0930 21:12:14.449830   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.449841   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:14.449849   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:14.449902   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:14.486301   73900 cri.go:89] found id: ""
	I0930 21:12:14.486330   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.486357   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:14.486365   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:14.486427   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:14.520451   73900 cri.go:89] found id: ""
	I0930 21:12:14.520479   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.520487   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:14.520497   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:14.520558   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:14.554056   73900 cri.go:89] found id: ""
	I0930 21:12:14.554095   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.554107   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:14.554114   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:14.554178   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:14.594054   73900 cri.go:89] found id: ""
	I0930 21:12:14.594080   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.594088   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:14.594094   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:14.594142   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:14.630225   73900 cri.go:89] found id: ""
	I0930 21:12:14.630255   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.630278   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:14.630284   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:14.630335   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:14.663006   73900 cri.go:89] found id: ""
	I0930 21:12:14.663043   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.663054   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:14.663061   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:14.663119   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:14.699815   73900 cri.go:89] found id: ""
	I0930 21:12:14.699845   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.699858   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:14.699870   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:14.699886   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:14.751465   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:14.751509   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:14.766401   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:14.766432   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:14.832979   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:14.833002   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:14.833016   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:14.918011   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:14.918051   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:17.458886   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:17.471833   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:17.471918   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:17.505109   73900 cri.go:89] found id: ""
	I0930 21:12:17.505135   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.505145   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:17.505151   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:17.505213   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:17.538091   73900 cri.go:89] found id: ""
	I0930 21:12:17.538118   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.538129   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:17.538136   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:17.538308   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:17.571668   73900 cri.go:89] found id: ""
	I0930 21:12:17.571694   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.571705   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:17.571712   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:17.571770   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:17.607391   73900 cri.go:89] found id: ""
	I0930 21:12:17.607431   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.607442   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:17.607452   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:17.607519   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:17.643271   73900 cri.go:89] found id: ""
	I0930 21:12:17.643297   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.643305   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:17.643313   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:17.643382   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:17.676653   73900 cri.go:89] found id: ""
	I0930 21:12:17.676687   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.676698   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:17.676708   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:17.676772   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:17.709570   73900 cri.go:89] found id: ""
	I0930 21:12:17.709602   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.709610   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:17.709615   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:17.709671   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:17.747857   73900 cri.go:89] found id: ""
	I0930 21:12:17.747883   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.747891   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:17.747902   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:17.747915   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:15.010874   73375 cri.go:89] found id: "d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:15.010898   73375 cri.go:89] found id: ""
	I0930 21:12:15.010905   73375 logs.go:276] 1 containers: [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7]
	I0930 21:12:15.010947   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.015490   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:15.015582   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:15.051182   73375 cri.go:89] found id: "438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:15.051210   73375 cri.go:89] found id: ""
	I0930 21:12:15.051220   73375 logs.go:276] 1 containers: [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c]
	I0930 21:12:15.051291   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.055057   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:15.055107   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:15.093126   73375 cri.go:89] found id: "a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:15.093150   73375 cri.go:89] found id: ""
	I0930 21:12:15.093159   73375 logs.go:276] 1 containers: [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f]
	I0930 21:12:15.093214   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.097138   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:15.097200   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:15.131676   73375 cri.go:89] found id: "1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:15.131704   73375 cri.go:89] found id: ""
	I0930 21:12:15.131716   73375 logs.go:276] 1 containers: [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf]
	I0930 21:12:15.131773   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.135550   73375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:15.135620   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:15.170579   73375 cri.go:89] found id: ""
	I0930 21:12:15.170604   73375 logs.go:276] 0 containers: []
	W0930 21:12:15.170612   73375 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:15.170618   73375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:15.170672   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:15.205190   73375 cri.go:89] found id: "6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:15.205216   73375 cri.go:89] found id: "298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:15.205222   73375 cri.go:89] found id: ""
	I0930 21:12:15.205231   73375 logs.go:276] 2 containers: [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e]
	I0930 21:12:15.205287   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.209426   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.212981   73375 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:15.213002   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:15.281543   73375 logs.go:123] Gathering logs for kube-proxy [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f] ...
	I0930 21:12:15.281582   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:15.325855   73375 logs.go:123] Gathering logs for container status ...
	I0930 21:12:15.325895   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:15.367382   73375 logs.go:123] Gathering logs for etcd [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c] ...
	I0930 21:12:15.367429   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:15.441395   73375 logs.go:123] Gathering logs for coredns [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7] ...
	I0930 21:12:15.441432   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:15.482487   73375 logs.go:123] Gathering logs for kube-scheduler [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c] ...
	I0930 21:12:15.482518   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:15.520298   73375 logs.go:123] Gathering logs for kube-controller-manager [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf] ...
	I0930 21:12:15.520335   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:15.572596   73375 logs.go:123] Gathering logs for storage-provisioner [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55] ...
	I0930 21:12:15.572626   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:15.618087   73375 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:15.618120   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:15.634125   73375 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:15.634151   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:15.744355   73375 logs.go:123] Gathering logs for kube-apiserver [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122] ...
	I0930 21:12:15.744390   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:15.799312   73375 logs.go:123] Gathering logs for storage-provisioner [298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e] ...
	I0930 21:12:15.799345   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:15.838934   73375 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:15.838969   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:18.759947   73375 system_pods.go:59] 8 kube-system pods found
	I0930 21:12:18.759976   73375 system_pods.go:61] "coredns-7c65d6cfc9-jg8ph" [46ba2867-485a-4b67-af4b-4de2c607d172] Running
	I0930 21:12:18.759981   73375 system_pods.go:61] "etcd-no-preload-997816" [1def50bb-1f1b-4d25-b797-38d5b782a674] Running
	I0930 21:12:18.759985   73375 system_pods.go:61] "kube-apiserver-no-preload-997816" [67313588-adcb-4d3f-ba8a-4e7a1ea5127b] Running
	I0930 21:12:18.759989   73375 system_pods.go:61] "kube-controller-manager-no-preload-997816" [b471888b-d4e6-4768-a246-f234ffcbf1c6] Running
	I0930 21:12:18.759992   73375 system_pods.go:61] "kube-proxy-klcv8" [133bcd7f-667d-4969-b063-d33e2c8eed0f] Running
	I0930 21:12:18.759995   73375 system_pods.go:61] "kube-scheduler-no-preload-997816" [130a7a05-0889-4562-afc6-bee3ba4970a1] Running
	I0930 21:12:18.760001   73375 system_pods.go:61] "metrics-server-6867b74b74-c2wpn" [2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:12:18.760006   73375 system_pods.go:61] "storage-provisioner" [01617edf-b831-48d3-9002-279b64f6389c] Running
	I0930 21:12:18.760016   73375 system_pods.go:74] duration metric: took 3.875896906s to wait for pod list to return data ...
	I0930 21:12:18.760024   73375 default_sa.go:34] waiting for default service account to be created ...
	I0930 21:12:18.762755   73375 default_sa.go:45] found service account: "default"
	I0930 21:12:18.762777   73375 default_sa.go:55] duration metric: took 2.746721ms for default service account to be created ...
	I0930 21:12:18.762787   73375 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 21:12:18.769060   73375 system_pods.go:86] 8 kube-system pods found
	I0930 21:12:18.769086   73375 system_pods.go:89] "coredns-7c65d6cfc9-jg8ph" [46ba2867-485a-4b67-af4b-4de2c607d172] Running
	I0930 21:12:18.769091   73375 system_pods.go:89] "etcd-no-preload-997816" [1def50bb-1f1b-4d25-b797-38d5b782a674] Running
	I0930 21:12:18.769095   73375 system_pods.go:89] "kube-apiserver-no-preload-997816" [67313588-adcb-4d3f-ba8a-4e7a1ea5127b] Running
	I0930 21:12:18.769099   73375 system_pods.go:89] "kube-controller-manager-no-preload-997816" [b471888b-d4e6-4768-a246-f234ffcbf1c6] Running
	I0930 21:12:18.769104   73375 system_pods.go:89] "kube-proxy-klcv8" [133bcd7f-667d-4969-b063-d33e2c8eed0f] Running
	I0930 21:12:18.769107   73375 system_pods.go:89] "kube-scheduler-no-preload-997816" [130a7a05-0889-4562-afc6-bee3ba4970a1] Running
	I0930 21:12:18.769113   73375 system_pods.go:89] "metrics-server-6867b74b74-c2wpn" [2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:12:18.769129   73375 system_pods.go:89] "storage-provisioner" [01617edf-b831-48d3-9002-279b64f6389c] Running
	I0930 21:12:18.769136   73375 system_pods.go:126] duration metric: took 6.344583ms to wait for k8s-apps to be running ...
	I0930 21:12:18.769144   73375 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 21:12:18.769183   73375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:12:18.785488   73375 system_svc.go:56] duration metric: took 16.335135ms WaitForService to wait for kubelet
	I0930 21:12:18.785544   73375 kubeadm.go:582] duration metric: took 4m23.165751441s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:12:18.785572   73375 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:12:18.789308   73375 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:12:18.789340   73375 node_conditions.go:123] node cpu capacity is 2
	I0930 21:12:18.789356   73375 node_conditions.go:105] duration metric: took 3.778609ms to run NodePressure ...
	I0930 21:12:18.789370   73375 start.go:241] waiting for startup goroutines ...
	I0930 21:12:18.789379   73375 start.go:246] waiting for cluster config update ...
	I0930 21:12:18.789394   73375 start.go:255] writing updated cluster config ...
	I0930 21:12:18.789688   73375 ssh_runner.go:195] Run: rm -f paused
	I0930 21:12:18.837384   73375 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 21:12:18.839699   73375 out.go:177] * Done! kubectl is now configured to use "no-preload-997816" cluster and "default" namespace by default
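The closing "kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)" line compares the local kubectl client version with the cluster's server version and reports the minor-version skew. A rough sketch of that comparison (the version strings are hard-coded here purely for illustration):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf extracts the minor component from a "major.minor.patch" version string.
func minorOf(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0
	}
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectlVersion := "1.31.1" // local client version (hard-coded for the example)
	clusterVersion := "1.31.1" // API server version (hard-coded for the example)

	skew := minorOf(kubectlVersion) - minorOf(clusterVersion)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectlVersion, clusterVersion, skew)
}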
	I0930 21:12:16.070108   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:18.569568   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:15.308534   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:15.308581   73707 pod_ready.go:82] duration metric: took 4m0.007893146s for pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace to be "Ready" ...
	E0930 21:12:15.308595   73707 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0930 21:12:15.308605   73707 pod_ready.go:39] duration metric: took 4m2.806797001s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:12:15.308621   73707 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:12:15.308657   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:15.308722   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:15.353287   73707 cri.go:89] found id: "f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:15.353348   73707 cri.go:89] found id: ""
	I0930 21:12:15.353359   73707 logs.go:276] 1 containers: [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140]
	I0930 21:12:15.353416   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.357602   73707 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:15.357696   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:15.399289   73707 cri.go:89] found id: "7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:15.399325   73707 cri.go:89] found id: ""
	I0930 21:12:15.399332   73707 logs.go:276] 1 containers: [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711]
	I0930 21:12:15.399377   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.404757   73707 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:15.404832   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:15.454396   73707 cri.go:89] found id: "ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:15.454423   73707 cri.go:89] found id: ""
	I0930 21:12:15.454433   73707 logs.go:276] 1 containers: [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49]
	I0930 21:12:15.454493   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.458660   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:15.458743   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:15.493941   73707 cri.go:89] found id: "0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:15.493971   73707 cri.go:89] found id: ""
	I0930 21:12:15.493982   73707 logs.go:276] 1 containers: [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4]
	I0930 21:12:15.494055   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.498541   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:15.498628   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:15.535354   73707 cri.go:89] found id: "5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:15.535385   73707 cri.go:89] found id: ""
	I0930 21:12:15.535395   73707 logs.go:276] 1 containers: [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8]
	I0930 21:12:15.535454   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.540097   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:15.540168   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:15.583969   73707 cri.go:89] found id: "d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:15.583996   73707 cri.go:89] found id: ""
	I0930 21:12:15.584003   73707 logs.go:276] 1 containers: [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8]
	I0930 21:12:15.584051   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.589193   73707 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:15.589260   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:15.629413   73707 cri.go:89] found id: ""
	I0930 21:12:15.629440   73707 logs.go:276] 0 containers: []
	W0930 21:12:15.629449   73707 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:15.629454   73707 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:15.629506   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:15.670129   73707 cri.go:89] found id: "3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:15.670160   73707 cri.go:89] found id: "1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:15.670166   73707 cri.go:89] found id: ""
	I0930 21:12:15.670175   73707 logs.go:276] 2 containers: [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342]
	I0930 21:12:15.670237   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.674227   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.678252   73707 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:15.678276   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:15.758280   73707 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:15.758319   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:15.778191   73707 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:15.778222   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:15.930379   73707 logs.go:123] Gathering logs for coredns [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49] ...
	I0930 21:12:15.930422   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:15.966732   73707 logs.go:123] Gathering logs for storage-provisioner [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd] ...
	I0930 21:12:15.966759   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:16.004304   73707 logs.go:123] Gathering logs for storage-provisioner [1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342] ...
	I0930 21:12:16.004337   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:16.043705   73707 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:16.043733   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:16.600173   73707 logs.go:123] Gathering logs for container status ...
	I0930 21:12:16.600210   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:16.651837   73707 logs.go:123] Gathering logs for kube-apiserver [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140] ...
	I0930 21:12:16.651868   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:16.695122   73707 logs.go:123] Gathering logs for etcd [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711] ...
	I0930 21:12:16.695155   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:16.737622   73707 logs.go:123] Gathering logs for kube-scheduler [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4] ...
	I0930 21:12:16.737671   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:16.772913   73707 logs.go:123] Gathering logs for kube-proxy [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8] ...
	I0930 21:12:16.772944   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:16.808196   73707 logs.go:123] Gathering logs for kube-controller-manager [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8] ...
	I0930 21:12:16.808224   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:19.368150   73707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:19.385771   73707 api_server.go:72] duration metric: took 4m14.101602019s to wait for apiserver process to appear ...
	I0930 21:12:19.385798   73707 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:12:19.385831   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:19.385889   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:19.421325   73707 cri.go:89] found id: "f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:19.421354   73707 cri.go:89] found id: ""
	I0930 21:12:19.421364   73707 logs.go:276] 1 containers: [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140]
	I0930 21:12:19.421426   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.428045   73707 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:19.428107   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:19.466034   73707 cri.go:89] found id: "7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:19.466054   73707 cri.go:89] found id: ""
	I0930 21:12:19.466061   73707 logs.go:276] 1 containers: [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711]
	I0930 21:12:19.466102   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.470155   73707 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:19.470222   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:19.504774   73707 cri.go:89] found id: "ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:19.504799   73707 cri.go:89] found id: ""
	I0930 21:12:19.504806   73707 logs.go:276] 1 containers: [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49]
	I0930 21:12:19.504869   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.509044   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:19.509134   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:19.544204   73707 cri.go:89] found id: "0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:19.544228   73707 cri.go:89] found id: ""
	I0930 21:12:19.544235   73707 logs.go:276] 1 containers: [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4]
	I0930 21:12:19.544293   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.549103   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:19.549194   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:19.591381   73707 cri.go:89] found id: "5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:19.591416   73707 cri.go:89] found id: ""
	I0930 21:12:19.591425   73707 logs.go:276] 1 containers: [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8]
	I0930 21:12:19.591472   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.595522   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:19.595621   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:19.634816   73707 cri.go:89] found id: "d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:19.634841   73707 cri.go:89] found id: ""
	I0930 21:12:19.634850   73707 logs.go:276] 1 containers: [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8]
	I0930 21:12:19.634894   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.639391   73707 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:19.639450   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:19.675056   73707 cri.go:89] found id: ""
	I0930 21:12:19.675084   73707 logs.go:276] 0 containers: []
	W0930 21:12:19.675095   73707 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:19.675102   73707 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:19.675159   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:19.708641   73707 cri.go:89] found id: "3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:19.708666   73707 cri.go:89] found id: "1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:19.708672   73707 cri.go:89] found id: ""
	I0930 21:12:19.708682   73707 logs.go:276] 2 containers: [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342]
	I0930 21:12:19.708738   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.712636   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.716653   73707 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:19.716680   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:19.785159   73707 logs.go:123] Gathering logs for kube-proxy [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8] ...
	I0930 21:12:19.785203   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:19.823462   73707 logs.go:123] Gathering logs for storage-provisioner [1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342] ...
	I0930 21:12:19.823490   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:19.856776   73707 logs.go:123] Gathering logs for coredns [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49] ...
	I0930 21:12:19.856808   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:19.893919   73707 logs.go:123] Gathering logs for kube-scheduler [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4] ...
	I0930 21:12:19.893948   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:19.930932   73707 logs.go:123] Gathering logs for kube-controller-manager [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8] ...
	I0930 21:12:19.930978   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:19.988120   73707 logs.go:123] Gathering logs for storage-provisioner [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd] ...
	I0930 21:12:19.988164   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:20.027576   73707 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:20.027618   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:20.041523   73707 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:20.041557   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:20.157598   73707 logs.go:123] Gathering logs for kube-apiserver [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140] ...
	I0930 21:12:20.157630   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:20.213353   73707 logs.go:123] Gathering logs for etcd [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711] ...
	I0930 21:12:20.213384   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:20.254502   73707 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:20.254533   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:17.824584   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:17.824623   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:17.862613   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:17.862643   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:17.915954   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:17.915992   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:17.929824   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:17.929853   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:17.999697   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
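The describe-nodes step fails here because nothing is listening on localhost:8443 yet (the old-k8s-version control plane is still being reset). One way to pre-check reachability before shelling out to kubectl is a plain TCP dial; the address and timeout below are assumptions, not what minikube actually does:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Try to open a TCP connection to the API server port before running kubectl.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}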
	I0930 21:12:20.500449   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:20.514042   73900 kubeadm.go:597] duration metric: took 4m1.91059878s to restartPrimaryControlPlane
	W0930 21:12:20.514119   73900 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0930 21:12:20.514158   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0930 21:12:21.675376   73900 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.161176988s)
	I0930 21:12:21.675465   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:12:21.689467   73900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:12:21.698504   73900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:12:21.708418   73900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:12:21.708437   73900 kubeadm.go:157] found existing configuration files:
	
	I0930 21:12:21.708483   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:12:21.716960   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:12:21.717019   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:12:21.727610   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:12:21.736212   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:12:21.736275   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:12:21.745512   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:12:21.754299   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:12:21.754366   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:12:21.763724   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:12:21.772521   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:12:21.772595   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
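The sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes the file when grep exits non-zero (here the files simply do not exist, so every check fails and the cleanup falls through to kubeadm init). The same check-then-remove logic, sketched locally with os/exec rather than over the SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}

	for _, f := range files {
		path := "/etc/kubernetes/" + f
		// grep exits non-zero when the endpoint is missing (or the file is absent);
		// either way the config is treated as stale and deleted.
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			if rmErr := exec.Command("sudo", "rm", "-f", path).Run(); rmErr != nil {
				fmt.Println("failed to remove", path, rmErr)
			}
		}
	}
}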
	I0930 21:12:21.782980   73900 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 21:12:21.850463   73900 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0930 21:12:21.850558   73900 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 21:12:21.991521   73900 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 21:12:21.991706   73900 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 21:12:21.991849   73900 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 21:12:22.174876   73900 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 21:12:22.177037   73900 out.go:235]   - Generating certificates and keys ...
	I0930 21:12:22.177155   73900 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 21:12:22.177253   73900 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 21:12:22.177379   73900 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 21:12:22.178789   73900 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 21:12:22.178860   73900 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 21:12:22.178907   73900 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 21:12:22.178961   73900 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 21:12:22.179017   73900 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 21:12:22.179139   73900 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 21:12:22.179247   73900 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 21:12:22.179310   73900 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 21:12:22.179398   73900 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 21:12:22.253256   73900 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 21:12:22.661237   73900 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 21:12:22.947987   73900 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 21:12:23.170995   73900 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 21:12:23.184583   73900 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 21:12:23.185770   73900 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 21:12:23.185813   73900 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 21:12:23.334769   73900 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 21:12:21.069777   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:23.070328   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:20.696951   73707 logs.go:123] Gathering logs for container status ...
	I0930 21:12:20.696989   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:23.236734   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:12:23.241215   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 200:
	ok
	I0930 21:12:23.242629   73707 api_server.go:141] control plane version: v1.31.1
	I0930 21:12:23.242651   73707 api_server.go:131] duration metric: took 3.856847284s to wait for apiserver health ...
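The healthz check above issues an HTTPS GET against the apiserver endpoint (port 8444 for this default-k8s-diff-port profile) and treats a 200 "ok" body as healthy. A bare-bones version of that probe; skipping certificate verification keeps the example short, whereas a real check would trust the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify is for illustration only; load the cluster CA in real code.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get("https://192.168.50.2:8444/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
}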
	I0930 21:12:23.242660   73707 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:12:23.242680   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:23.242724   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:23.279601   73707 cri.go:89] found id: "f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:23.279626   73707 cri.go:89] found id: ""
	I0930 21:12:23.279633   73707 logs.go:276] 1 containers: [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140]
	I0930 21:12:23.279692   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.283900   73707 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:23.283977   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:23.320360   73707 cri.go:89] found id: "7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:23.320397   73707 cri.go:89] found id: ""
	I0930 21:12:23.320410   73707 logs.go:276] 1 containers: [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711]
	I0930 21:12:23.320472   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.324745   73707 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:23.324825   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:23.368001   73707 cri.go:89] found id: "ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:23.368024   73707 cri.go:89] found id: ""
	I0930 21:12:23.368034   73707 logs.go:276] 1 containers: [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49]
	I0930 21:12:23.368095   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.372001   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:23.372077   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:23.408203   73707 cri.go:89] found id: "0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:23.408234   73707 cri.go:89] found id: ""
	I0930 21:12:23.408242   73707 logs.go:276] 1 containers: [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4]
	I0930 21:12:23.408299   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.412328   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:23.412397   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:23.462142   73707 cri.go:89] found id: "5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:23.462173   73707 cri.go:89] found id: ""
	I0930 21:12:23.462183   73707 logs.go:276] 1 containers: [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8]
	I0930 21:12:23.462247   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.466257   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:23.466336   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:23.509075   73707 cri.go:89] found id: "d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:23.509098   73707 cri.go:89] found id: ""
	I0930 21:12:23.509109   73707 logs.go:276] 1 containers: [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8]
	I0930 21:12:23.509169   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.513362   73707 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:23.513441   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:23.553711   73707 cri.go:89] found id: ""
	I0930 21:12:23.553738   73707 logs.go:276] 0 containers: []
	W0930 21:12:23.553746   73707 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:23.553752   73707 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:23.553797   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:23.599596   73707 cri.go:89] found id: "3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:23.599629   73707 cri.go:89] found id: "1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:23.599635   73707 cri.go:89] found id: ""
	I0930 21:12:23.599644   73707 logs.go:276] 2 containers: [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342]
	I0930 21:12:23.599699   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.603589   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.607827   73707 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:23.607855   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:23.621046   73707 logs.go:123] Gathering logs for etcd [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711] ...
	I0930 21:12:23.621069   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:23.664703   73707 logs.go:123] Gathering logs for storage-provisioner [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd] ...
	I0930 21:12:23.664735   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:23.700614   73707 logs.go:123] Gathering logs for kube-scheduler [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4] ...
	I0930 21:12:23.700644   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:23.738113   73707 logs.go:123] Gathering logs for kube-proxy [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8] ...
	I0930 21:12:23.738143   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:23.775706   73707 logs.go:123] Gathering logs for kube-controller-manager [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8] ...
	I0930 21:12:23.775733   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:23.840419   73707 logs.go:123] Gathering logs for storage-provisioner [1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342] ...
	I0930 21:12:23.840454   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:23.876827   73707 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:23.876860   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:23.943636   73707 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:23.943675   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:24.052729   73707 logs.go:123] Gathering logs for kube-apiserver [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140] ...
	I0930 21:12:24.052763   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:24.106526   73707 logs.go:123] Gathering logs for coredns [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49] ...
	I0930 21:12:24.106556   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:24.146914   73707 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:24.146941   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:24.527753   73707 logs.go:123] Gathering logs for container status ...
	I0930 21:12:24.527804   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:27.077689   73707 system_pods.go:59] 8 kube-system pods found
	I0930 21:12:27.077721   73707 system_pods.go:61] "coredns-7c65d6cfc9-hdjjq" [5672cd58-4d3f-409e-b279-f4027fe09aea] Running
	I0930 21:12:27.077726   73707 system_pods.go:61] "etcd-default-k8s-diff-port-291511" [228b61a2-a110-4029-96e5-950e44f5290f] Running
	I0930 21:12:27.077731   73707 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-291511" [a6991ee1-6c61-49b5-adb5-fb6175386bfe] Running
	I0930 21:12:27.077739   73707 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-291511" [4ba3f2a2-ac38-4483-bbd0-f21d934d97d1] Running
	I0930 21:12:27.077744   73707 system_pods.go:61] "kube-proxy-kwp22" [87e5295f-3aaa-4222-a61a-942354f79f9b] Running
	I0930 21:12:27.077749   73707 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-291511" [b03fc09c-ddee-4593-9be5-8117892932f5] Running
	I0930 21:12:27.077759   73707 system_pods.go:61] "metrics-server-6867b74b74-txb2j" [6f0ec8d2-5528-4f70-807c-42cbabae23bb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:12:27.077766   73707 system_pods.go:61] "storage-provisioner" [32053345-1ff9-45b1-aa70-e746926b305d] Running
	I0930 21:12:27.077774   73707 system_pods.go:74] duration metric: took 3.835107861s to wait for pod list to return data ...
	I0930 21:12:27.077783   73707 default_sa.go:34] waiting for default service account to be created ...
	I0930 21:12:27.082269   73707 default_sa.go:45] found service account: "default"
	I0930 21:12:27.082292   73707 default_sa.go:55] duration metric: took 4.502111ms for default service account to be created ...
	I0930 21:12:27.082299   73707 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 21:12:27.086738   73707 system_pods.go:86] 8 kube-system pods found
	I0930 21:12:27.086764   73707 system_pods.go:89] "coredns-7c65d6cfc9-hdjjq" [5672cd58-4d3f-409e-b279-f4027fe09aea] Running
	I0930 21:12:27.086770   73707 system_pods.go:89] "etcd-default-k8s-diff-port-291511" [228b61a2-a110-4029-96e5-950e44f5290f] Running
	I0930 21:12:27.086775   73707 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-291511" [a6991ee1-6c61-49b5-adb5-fb6175386bfe] Running
	I0930 21:12:27.086781   73707 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-291511" [4ba3f2a2-ac38-4483-bbd0-f21d934d97d1] Running
	I0930 21:12:27.086784   73707 system_pods.go:89] "kube-proxy-kwp22" [87e5295f-3aaa-4222-a61a-942354f79f9b] Running
	I0930 21:12:27.086788   73707 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-291511" [b03fc09c-ddee-4593-9be5-8117892932f5] Running
	I0930 21:12:27.086796   73707 system_pods.go:89] "metrics-server-6867b74b74-txb2j" [6f0ec8d2-5528-4f70-807c-42cbabae23bb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:12:27.086803   73707 system_pods.go:89] "storage-provisioner" [32053345-1ff9-45b1-aa70-e746926b305d] Running
	I0930 21:12:27.086811   73707 system_pods.go:126] duration metric: took 4.506701ms to wait for k8s-apps to be running ...
	I0930 21:12:27.086820   73707 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 21:12:27.086868   73707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:12:27.102286   73707 system_svc.go:56] duration metric: took 15.455734ms WaitForService to wait for kubelet
	I0930 21:12:27.102325   73707 kubeadm.go:582] duration metric: took 4m21.818162682s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:12:27.102346   73707 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:12:27.105332   73707 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:12:27.105354   73707 node_conditions.go:123] node cpu capacity is 2
	I0930 21:12:27.105364   73707 node_conditions.go:105] duration metric: took 3.013328ms to run NodePressure ...
	I0930 21:12:27.105375   73707 start.go:241] waiting for startup goroutines ...
	I0930 21:12:27.105382   73707 start.go:246] waiting for cluster config update ...
	I0930 21:12:27.105393   73707 start.go:255] writing updated cluster config ...
	I0930 21:12:27.105669   73707 ssh_runner.go:195] Run: rm -f paused
	I0930 21:12:27.156804   73707 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 21:12:27.158887   73707 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-291511" cluster and "default" namespace by default
	I0930 21:12:23.336604   73900 out.go:235]   - Booting up control plane ...
	I0930 21:12:23.336747   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 21:12:23.345737   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 21:12:23.346784   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 21:12:23.347559   73900 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 21:12:23.351009   73900 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0930 21:12:25.568654   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:27.569042   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:29.570978   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:32.069065   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:34.069347   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:36.568228   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:38.569351   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:40.569552   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:43.069456   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:45.569254   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:47.569647   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:49.569997   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:52.069284   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:54.069870   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:54.563572   73256 pod_ready.go:82] duration metric: took 4m0.000782781s for pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace to be "Ready" ...
	E0930 21:12:54.563605   73256 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0930 21:12:54.563620   73256 pod_ready.go:39] duration metric: took 4m9.49309261s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:12:54.563643   73256 kubeadm.go:597] duration metric: took 4m18.399318281s to restartPrimaryControlPlane
	W0930 21:12:54.563698   73256 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0930 21:12:54.563721   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0930 21:13:03.351822   73900 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0930 21:13:03.352632   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:03.352833   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:13:08.353230   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:08.353429   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
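The kubeadm kubelet-check above keeps retrying GET http://localhost:10248/healthz until the kubelet answers; in this v1.20.0 run it never does, so the check repeats until kubeadm's own timeout. A bare-bones sketch of that kind of poll (the retry interval is an assumption):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute) // kubeadm allows up to 4m0s
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://localhost:10248/healthz")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("kubelet is healthy")
			return
		}
		if err == nil {
			resp.Body.Close()
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("kubelet did not become healthy in time")
}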
	I0930 21:13:20.634441   73256 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.070691776s)
	I0930 21:13:20.634529   73256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:13:20.650312   73256 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:13:20.661782   73256 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:13:20.671436   73256 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:13:20.671463   73256 kubeadm.go:157] found existing configuration files:
	
	I0930 21:13:20.671504   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:13:20.681860   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:13:20.681934   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:13:20.692529   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:13:20.701507   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:13:20.701585   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:13:20.711211   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:13:20.721856   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:13:20.721928   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:13:20.733194   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:13:20.743887   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:13:20.743955   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:13:20.753546   73256 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 21:13:20.799739   73256 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 21:13:20.799812   73256 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 21:13:20.906464   73256 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 21:13:20.906569   73256 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 21:13:20.906647   73256 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 21:13:20.919451   73256 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 21:13:20.921440   73256 out.go:235]   - Generating certificates and keys ...
	I0930 21:13:20.921550   73256 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 21:13:20.921645   73256 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 21:13:20.921758   73256 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 21:13:20.921845   73256 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 21:13:20.921945   73256 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 21:13:20.922021   73256 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 21:13:20.922117   73256 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 21:13:20.922190   73256 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 21:13:20.922262   73256 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 21:13:20.922336   73256 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 21:13:20.922370   73256 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 21:13:20.922459   73256 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 21:13:21.079731   73256 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 21:13:21.214199   73256 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 21:13:21.344405   73256 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 21:13:21.605006   73256 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 21:13:21.718432   73256 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 21:13:21.718967   73256 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 21:13:21.723434   73256 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 21:13:18.354150   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:18.354468   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:13:21.725304   73256 out.go:235]   - Booting up control plane ...
	I0930 21:13:21.725435   73256 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 21:13:21.725526   73256 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 21:13:21.725637   73256 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 21:13:21.743582   73256 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 21:13:21.749533   73256 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 21:13:21.749605   73256 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 21:13:21.873716   73256 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 21:13:21.873867   73256 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 21:13:22.375977   73256 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.402537ms
	I0930 21:13:22.376098   73256 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 21:13:27.379510   73256 kubeadm.go:310] [api-check] The API server is healthy after 5.001265494s
	I0930 21:13:27.392047   73256 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 21:13:27.409550   73256 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 21:13:27.447693   73256 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 21:13:27.447896   73256 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-256103 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 21:13:27.462338   73256 kubeadm.go:310] [bootstrap-token] Using token: k5ffj3.6sqmy7prwrlhrg7s
	I0930 21:13:27.463967   73256 out.go:235]   - Configuring RBAC rules ...
	I0930 21:13:27.464076   73256 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 21:13:27.472107   73256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 21:13:27.481172   73256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 21:13:27.485288   73256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 21:13:27.492469   73256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 21:13:27.496822   73256 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 21:13:27.789372   73256 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 21:13:28.210679   73256 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 21:13:28.784869   73256 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 21:13:28.785859   73256 kubeadm.go:310] 
	I0930 21:13:28.785954   73256 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 21:13:28.785967   73256 kubeadm.go:310] 
	I0930 21:13:28.786045   73256 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 21:13:28.786077   73256 kubeadm.go:310] 
	I0930 21:13:28.786121   73256 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 21:13:28.786219   73256 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 21:13:28.786286   73256 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 21:13:28.786304   73256 kubeadm.go:310] 
	I0930 21:13:28.786395   73256 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 21:13:28.786405   73256 kubeadm.go:310] 
	I0930 21:13:28.786464   73256 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 21:13:28.786474   73256 kubeadm.go:310] 
	I0930 21:13:28.786546   73256 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 21:13:28.786658   73256 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 21:13:28.786754   73256 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 21:13:28.786763   73256 kubeadm.go:310] 
	I0930 21:13:28.786870   73256 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 21:13:28.786991   73256 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 21:13:28.787000   73256 kubeadm.go:310] 
	I0930 21:13:28.787122   73256 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k5ffj3.6sqmy7prwrlhrg7s \
	I0930 21:13:28.787240   73256 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a \
	I0930 21:13:28.787274   73256 kubeadm.go:310] 	--control-plane 
	I0930 21:13:28.787290   73256 kubeadm.go:310] 
	I0930 21:13:28.787415   73256 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 21:13:28.787425   73256 kubeadm.go:310] 
	I0930 21:13:28.787547   73256 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k5ffj3.6sqmy7prwrlhrg7s \
	I0930 21:13:28.787713   73256 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a 
	I0930 21:13:28.788805   73256 kubeadm.go:310] W0930 21:13:20.776526    2530 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 21:13:28.789058   73256 kubeadm.go:310] W0930 21:13:20.777323    2530 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 21:13:28.789158   73256 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 21:13:28.789178   73256 cni.go:84] Creating CNI manager for ""
	I0930 21:13:28.789187   73256 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:13:28.791049   73256 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 21:13:28.792381   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:13:28.802872   73256 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 21:13:28.819952   73256 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 21:13:28.820054   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:28.820070   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-256103 minikube.k8s.io/updated_at=2024_09_30T21_13_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022 minikube.k8s.io/name=embed-certs-256103 minikube.k8s.io/primary=true
	I0930 21:13:28.859770   73256 ops.go:34] apiserver oom_adj: -16
	I0930 21:13:29.026274   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:29.526992   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:30.026700   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:30.526962   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:31.027165   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:31.526632   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:32.027019   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:32.526522   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:33.026739   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:33.116028   73256 kubeadm.go:1113] duration metric: took 4.296036786s to wait for elevateKubeSystemPrivileges
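
The run of half-second-spaced `kubectl get sa default` calls above is a retry loop: minikube keeps asking for the default service account until the new control plane can serve it, and the 4.29s elevateKubeSystemPrivileges metric is simply how long that loop ran. A rough sketch of the pattern (hypothetical helper, not minikube's implementation; the kubeconfig path is the one from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // retryCommand re-runs the command every interval until it exits 0 or
    // the timeout elapses, like the repeated "kubectl get sa default" calls.
    func retryCommand(interval, timeout time.Duration, name string, args ...string) error {
        deadline := time.Now().Add(timeout)
        for {
            if err := exec.Command(name, args...).Run(); err == nil {
                return nil
            } else if time.Now().After(deadline) {
                return fmt.Errorf("%s %v still failing after %s: %w", name, args, timeout, err)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        err := retryCommand(500*time.Millisecond, time.Minute,
            "kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig", "get", "sa", "default")
        fmt.Println("default service account ready:", err == nil)
    }
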
	I0930 21:13:33.116067   73256 kubeadm.go:394] duration metric: took 4m57.005787187s to StartCluster
	I0930 21:13:33.116088   73256 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:13:33.116175   73256 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:13:33.117855   73256 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:13:33.118142   73256 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 21:13:33.118263   73256 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 21:13:33.118420   73256 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-256103"
	I0930 21:13:33.118373   73256 config.go:182] Loaded profile config "embed-certs-256103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:13:33.118446   73256 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-256103"
	I0930 21:13:33.118442   73256 addons.go:69] Setting default-storageclass=true in profile "embed-certs-256103"
	W0930 21:13:33.118453   73256 addons.go:243] addon storage-provisioner should already be in state true
	I0930 21:13:33.118464   73256 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-256103"
	I0930 21:13:33.118482   73256 host.go:66] Checking if "embed-certs-256103" exists ...
	I0930 21:13:33.118515   73256 addons.go:69] Setting metrics-server=true in profile "embed-certs-256103"
	I0930 21:13:33.118554   73256 addons.go:234] Setting addon metrics-server=true in "embed-certs-256103"
	W0930 21:13:33.118564   73256 addons.go:243] addon metrics-server should already be in state true
	I0930 21:13:33.118594   73256 host.go:66] Checking if "embed-certs-256103" exists ...
	I0930 21:13:33.118807   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.118840   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.118880   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.118926   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.118941   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.118965   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.120042   73256 out.go:177] * Verifying Kubernetes components...
	I0930 21:13:33.121706   73256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:13:33.136554   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36203
	I0930 21:13:33.137096   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.137304   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44465
	I0930 21:13:33.137664   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.137696   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.137789   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.138013   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.138176   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:13:33.138317   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.138336   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.139163   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37389
	I0930 21:13:33.139176   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.139733   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.139903   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.139955   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.140284   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.140311   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.140780   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.141336   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.141375   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.141814   73256 addons.go:234] Setting addon default-storageclass=true in "embed-certs-256103"
	W0930 21:13:33.141832   73256 addons.go:243] addon default-storageclass should already be in state true
	I0930 21:13:33.141857   73256 host.go:66] Checking if "embed-certs-256103" exists ...
	I0930 21:13:33.142143   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.142177   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.161937   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I0930 21:13:33.162096   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33657
	I0930 21:13:33.162249   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42531
	I0930 21:13:33.162491   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.162536   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.162837   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.163017   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.163028   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.163030   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.163045   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.163254   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.163265   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.163362   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.163417   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.163864   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.163899   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.164101   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.164154   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:13:33.164356   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:13:33.166460   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:13:33.166673   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:13:33.168464   73256 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:13:33.168631   73256 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0930 21:13:33.169822   73256 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:13:33.169840   73256 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 21:13:33.169857   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:13:33.169937   73256 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 21:13:33.169947   73256 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 21:13:33.169963   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:13:33.174613   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.174653   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.175236   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:13:33.175265   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.175372   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:13:33.175405   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.175667   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:13:33.176048   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:13:33.176051   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:13:33.176299   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:13:33.176299   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:13:33.176476   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:13:33.176684   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:13:33.176685   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:13:33.180520   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43015
	I0930 21:13:33.180968   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.181564   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.181588   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.181938   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.182136   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:13:33.183803   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:13:33.184001   73256 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 21:13:33.184017   73256 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 21:13:33.184035   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:13:33.186565   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.186964   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:13:33.186996   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.187311   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:13:33.187481   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:13:33.187797   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:13:33.187937   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:13:33.337289   73256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:13:33.360186   73256 node_ready.go:35] waiting up to 6m0s for node "embed-certs-256103" to be "Ready" ...
	I0930 21:13:33.372799   73256 node_ready.go:49] node "embed-certs-256103" has status "Ready":"True"
	I0930 21:13:33.372828   73256 node_ready.go:38] duration metric: took 12.601736ms for node "embed-certs-256103" to be "Ready" ...
	I0930 21:13:33.372837   73256 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:13:33.379694   73256 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:33.462144   73256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:13:33.500072   73256 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 21:13:33.500102   73256 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0930 21:13:33.524789   73256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 21:13:33.548931   73256 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 21:13:33.548955   73256 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 21:13:33.604655   73256 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:13:33.604682   73256 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 21:13:33.648687   73256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:13:34.533493   73256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.008666954s)
	I0930 21:13:34.533555   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.533566   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.533856   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.533870   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.533884   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.533892   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.533900   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.534108   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.534126   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.534149   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.535651   73256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.073475648s)
	I0930 21:13:34.535695   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.535706   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.535926   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.536001   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.536014   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.536030   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.535981   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.537450   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.537470   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.537480   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.564363   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.564394   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.564715   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.564739   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.968266   73256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.319532564s)
	I0930 21:13:34.968330   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.968350   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.968642   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.968665   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.968674   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.968673   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.968681   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.968944   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.968969   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.968973   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.968979   73256 addons.go:475] Verifying addon metrics-server=true in "embed-certs-256103"
	I0930 21:13:34.970656   73256 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0930 21:13:34.971966   73256 addons.go:510] duration metric: took 1.853709741s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0930 21:13:35.387687   73256 pod_ready.go:103] pod "etcd-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:13:37.388374   73256 pod_ready.go:103] pod "etcd-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:13:39.886425   73256 pod_ready.go:103] pod "etcd-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:13:41.885713   73256 pod_ready.go:93] pod "etcd-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.885737   73256 pod_ready.go:82] duration metric: took 8.506004979s for pod "etcd-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.885746   73256 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.891032   73256 pod_ready.go:93] pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.891052   73256 pod_ready.go:82] duration metric: took 5.300379ms for pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.891061   73256 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.895332   73256 pod_ready.go:93] pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.895349   73256 pod_ready.go:82] duration metric: took 4.282199ms for pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.895357   73256 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-glbsg" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.899518   73256 pod_ready.go:93] pod "kube-proxy-glbsg" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.899556   73256 pod_ready.go:82] duration metric: took 4.191815ms for pod "kube-proxy-glbsg" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.899567   73256 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.904184   73256 pod_ready.go:93] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.904203   73256 pod_ready.go:82] duration metric: took 4.628533ms for pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.904209   73256 pod_ready.go:39] duration metric: took 8.531361398s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
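
The pod_ready.go waits above amount to polling each system pod until its Ready condition reports True. As a hedged illustration of what that check looks like with client-go (not minikube's actual pod_ready.go; the kubeconfig path and pod name are taken from this log):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the named pod has its Ready condition set
    // to True -- the state the pod_ready.go lines above are waiting for.
    func isPodReady(client kubernetes.Interface, namespace, name string) (bool, error) {
        pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := isPodReady(client, "kube-system", "etcd-embed-certs-256103")
        fmt.Println(ready, err)
    }
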
	I0930 21:13:41.904221   73256 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:13:41.904262   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:13:41.919570   73256 api_server.go:72] duration metric: took 8.801387692s to wait for apiserver process to appear ...
	I0930 21:13:41.919591   73256 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:13:41.919607   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:13:41.923810   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 200:
	ok
	I0930 21:13:41.924633   73256 api_server.go:141] control plane version: v1.31.1
	I0930 21:13:41.924651   73256 api_server.go:131] duration metric: took 5.054857ms to wait for apiserver health ...
	I0930 21:13:41.924659   73256 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:13:42.086431   73256 system_pods.go:59] 9 kube-system pods found
	I0930 21:13:42.086468   73256 system_pods.go:61] "coredns-7c65d6cfc9-gt5tt" [165faaf0-866c-4097-9bdb-ed58fe8d7395] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:13:42.086480   73256 system_pods.go:61] "coredns-7c65d6cfc9-sgsbn" [c97fdb50-c6a0-4ef8-8c01-ea45ed18b72a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:13:42.086488   73256 system_pods.go:61] "etcd-embed-certs-256103" [6aac0706-7dbd-4655-b261-68877299d81a] Running
	I0930 21:13:42.086494   73256 system_pods.go:61] "kube-apiserver-embed-certs-256103" [6c8e3157-ec97-4a85-8947-ca7541c19b1c] Running
	I0930 21:13:42.086500   73256 system_pods.go:61] "kube-controller-manager-embed-certs-256103" [1e3f76d1-d343-4127-aad9-8a5a8e589a43] Running
	I0930 21:13:42.086505   73256 system_pods.go:61] "kube-proxy-glbsg" [f68e378f-ce0f-4603-bd8e-93334f04f7a7] Running
	I0930 21:13:42.086510   73256 system_pods.go:61] "kube-scheduler-embed-certs-256103" [29f55c6f-9603-4cd2-a798-0ff2362b7607] Running
	I0930 21:13:42.086518   73256 system_pods.go:61] "metrics-server-6867b74b74-5mhkh" [470424ec-bb66-4d62-904d-0d4ad93fa5bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:13:42.086525   73256 system_pods.go:61] "storage-provisioner" [a07a5a12-7420-4b57-b79d-982f4bb48232] Running
	I0930 21:13:42.086538   73256 system_pods.go:74] duration metric: took 161.870121ms to wait for pod list to return data ...
	I0930 21:13:42.086559   73256 default_sa.go:34] waiting for default service account to be created ...
	I0930 21:13:42.284282   73256 default_sa.go:45] found service account: "default"
	I0930 21:13:42.284307   73256 default_sa.go:55] duration metric: took 197.73827ms for default service account to be created ...
	I0930 21:13:42.284316   73256 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 21:13:42.486445   73256 system_pods.go:86] 9 kube-system pods found
	I0930 21:13:42.486478   73256 system_pods.go:89] "coredns-7c65d6cfc9-gt5tt" [165faaf0-866c-4097-9bdb-ed58fe8d7395] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:13:42.486489   73256 system_pods.go:89] "coredns-7c65d6cfc9-sgsbn" [c97fdb50-c6a0-4ef8-8c01-ea45ed18b72a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:13:42.486497   73256 system_pods.go:89] "etcd-embed-certs-256103" [6aac0706-7dbd-4655-b261-68877299d81a] Running
	I0930 21:13:42.486503   73256 system_pods.go:89] "kube-apiserver-embed-certs-256103" [6c8e3157-ec97-4a85-8947-ca7541c19b1c] Running
	I0930 21:13:42.486509   73256 system_pods.go:89] "kube-controller-manager-embed-certs-256103" [1e3f76d1-d343-4127-aad9-8a5a8e589a43] Running
	I0930 21:13:42.486513   73256 system_pods.go:89] "kube-proxy-glbsg" [f68e378f-ce0f-4603-bd8e-93334f04f7a7] Running
	I0930 21:13:42.486518   73256 system_pods.go:89] "kube-scheduler-embed-certs-256103" [29f55c6f-9603-4cd2-a798-0ff2362b7607] Running
	I0930 21:13:42.486526   73256 system_pods.go:89] "metrics-server-6867b74b74-5mhkh" [470424ec-bb66-4d62-904d-0d4ad93fa5bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:13:42.486533   73256 system_pods.go:89] "storage-provisioner" [a07a5a12-7420-4b57-b79d-982f4bb48232] Running
	I0930 21:13:42.486542   73256 system_pods.go:126] duration metric: took 202.220435ms to wait for k8s-apps to be running ...
	I0930 21:13:42.486552   73256 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 21:13:42.486601   73256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:13:42.501286   73256 system_svc.go:56] duration metric: took 14.699273ms WaitForService to wait for kubelet
	I0930 21:13:42.501315   73256 kubeadm.go:582] duration metric: took 9.38313627s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:13:42.501332   73256 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:13:42.685282   73256 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:13:42.685314   73256 node_conditions.go:123] node cpu capacity is 2
	I0930 21:13:42.685326   73256 node_conditions.go:105] duration metric: took 183.989963ms to run NodePressure ...
	I0930 21:13:42.685346   73256 start.go:241] waiting for startup goroutines ...
	I0930 21:13:42.685356   73256 start.go:246] waiting for cluster config update ...
	I0930 21:13:42.685371   73256 start.go:255] writing updated cluster config ...
	I0930 21:13:42.685664   73256 ssh_runner.go:195] Run: rm -f paused
	I0930 21:13:42.734778   73256 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 21:13:42.736658   73256 out.go:177] * Done! kubectl is now configured to use "embed-certs-256103" cluster and "default" namespace by default
	I0930 21:13:38.355123   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:38.355330   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:14:18.357098   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:14:18.357396   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:14:18.357419   73900 kubeadm.go:310] 
	I0930 21:14:18.357473   73900 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0930 21:14:18.357541   73900 kubeadm.go:310] 		timed out waiting for the condition
	I0930 21:14:18.357554   73900 kubeadm.go:310] 
	I0930 21:14:18.357609   73900 kubeadm.go:310] 	This error is likely caused by:
	I0930 21:14:18.357659   73900 kubeadm.go:310] 		- The kubelet is not running
	I0930 21:14:18.357801   73900 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0930 21:14:18.357817   73900 kubeadm.go:310] 
	I0930 21:14:18.357964   73900 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0930 21:14:18.357996   73900 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0930 21:14:18.358028   73900 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0930 21:14:18.358039   73900 kubeadm.go:310] 
	I0930 21:14:18.358174   73900 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0930 21:14:18.358318   73900 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0930 21:14:18.358331   73900 kubeadm.go:310] 
	I0930 21:14:18.358510   73900 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0930 21:14:18.358646   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0930 21:14:18.358764   73900 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0930 21:14:18.358866   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0930 21:14:18.358882   73900 kubeadm.go:310] 
	I0930 21:14:18.359454   73900 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 21:14:18.359595   73900 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0930 21:14:18.359681   73900 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0930 21:14:18.359797   73900 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0930 21:14:18.359841   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0930 21:14:18.820244   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:14:18.834938   73900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:14:18.844779   73900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:14:18.844803   73900 kubeadm.go:157] found existing configuration files:
	
	I0930 21:14:18.844856   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:14:18.853738   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:14:18.853811   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:14:18.863366   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:14:18.872108   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:14:18.872164   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:14:18.881818   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:14:18.890916   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:14:18.890969   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:14:18.900075   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:14:18.908449   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:14:18.908520   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
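
The grep/rm sequence above is minikube's stale-config cleanup before retrying kubeadm init: each of the four kubeconfig files is kept only if it already points at https://control-plane.minikube.internal:8443, and here every grep exits 2 because the earlier kubeadm reset removed the files, so the rm calls are no-ops. A compact sketch of that loop (illustrative only, not the code behind kubeadm.go:163):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cleanStaleKubeconfigs removes any of the given kubeconfig files that
    // do not reference the expected control-plane endpoint, mirroring the
    // grep -> rm -f sequence in the log above.
    func cleanStaleKubeconfigs(endpoint string, files []string) {
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                os.Remove(f) // ignore errors, like rm -f
            }
        }
    }

    func main() {
        cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }
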
	I0930 21:14:18.917163   73900 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 21:14:18.983181   73900 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0930 21:14:18.983233   73900 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 21:14:19.121356   73900 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 21:14:19.121545   73900 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 21:14:19.121674   73900 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0930 21:14:19.306639   73900 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 21:14:19.309593   73900 out.go:235]   - Generating certificates and keys ...
	I0930 21:14:19.309683   73900 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 21:14:19.309748   73900 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 21:14:19.309870   73900 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 21:14:19.309957   73900 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 21:14:19.310040   73900 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 21:14:19.310119   73900 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 21:14:19.310209   73900 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 21:14:19.310292   73900 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 21:14:19.310404   73900 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 21:14:19.310511   73900 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 21:14:19.310567   73900 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 21:14:19.310654   73900 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 21:14:19.453872   73900 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 21:14:19.621232   73900 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 21:14:19.797694   73900 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 21:14:19.886897   73900 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 21:14:19.909016   73900 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 21:14:19.910536   73900 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 21:14:19.910617   73900 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 21:14:20.052878   73900 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 21:14:20.054739   73900 out.go:235]   - Booting up control plane ...
	I0930 21:14:20.054881   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 21:14:20.068419   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 21:14:20.068512   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 21:14:20.068697   73900 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 21:14:20.072015   73900 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0930 21:15:00.073988   73900 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0930 21:15:00.074795   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:00.075068   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:15:05.075810   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:05.076061   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:15:15.076695   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:15.076928   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:15:35.077652   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:35.077862   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:16:15.076816   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:16:15.077063   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:16:15.077082   73900 kubeadm.go:310] 
	I0930 21:16:15.077136   73900 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0930 21:16:15.077188   73900 kubeadm.go:310] 		timed out waiting for the condition
	I0930 21:16:15.077198   73900 kubeadm.go:310] 
	I0930 21:16:15.077246   73900 kubeadm.go:310] 	This error is likely caused by:
	I0930 21:16:15.077298   73900 kubeadm.go:310] 		- The kubelet is not running
	I0930 21:16:15.077425   73900 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0930 21:16:15.077442   73900 kubeadm.go:310] 
	I0930 21:16:15.077605   73900 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0930 21:16:15.077651   73900 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0930 21:16:15.077710   73900 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0930 21:16:15.077718   73900 kubeadm.go:310] 
	I0930 21:16:15.077851   73900 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0930 21:16:15.077997   73900 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0930 21:16:15.078013   73900 kubeadm.go:310] 
	I0930 21:16:15.078143   73900 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0930 21:16:15.078229   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0930 21:16:15.078309   73900 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0930 21:16:15.078419   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0930 21:16:15.078431   73900 kubeadm.go:310] 
	I0930 21:16:15.079235   73900 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 21:16:15.079365   73900 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0930 21:16:15.079442   73900 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0930 21:16:15.079572   73900 kubeadm.go:394] duration metric: took 7m56.529269567s to StartCluster
	I0930 21:16:15.079639   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:16:15.079713   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:16:15.122057   73900 cri.go:89] found id: ""
	I0930 21:16:15.122086   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.122098   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:16:15.122105   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:16:15.122166   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:16:15.156244   73900 cri.go:89] found id: ""
	I0930 21:16:15.156278   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.156289   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:16:15.156297   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:16:15.156357   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:16:15.188952   73900 cri.go:89] found id: ""
	I0930 21:16:15.188977   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.188989   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:16:15.188996   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:16:15.189058   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:16:15.219400   73900 cri.go:89] found id: ""
	I0930 21:16:15.219427   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.219435   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:16:15.219441   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:16:15.219501   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:16:15.252049   73900 cri.go:89] found id: ""
	I0930 21:16:15.252078   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.252086   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:16:15.252093   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:16:15.252150   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:16:15.286560   73900 cri.go:89] found id: ""
	I0930 21:16:15.286594   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.286605   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:16:15.286614   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:16:15.286679   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:16:15.319140   73900 cri.go:89] found id: ""
	I0930 21:16:15.319178   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.319187   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:16:15.319192   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:16:15.319245   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:16:15.351299   73900 cri.go:89] found id: ""
	I0930 21:16:15.351322   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.351330   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:16:15.351339   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:16:15.351350   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:16:15.402837   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:16:15.402882   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:16:15.417111   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:16:15.417140   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:16:15.492593   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:16:15.492614   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:16:15.492627   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:16:15.621646   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:16:15.621681   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0930 21:16:15.660480   73900 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0930 21:16:15.660528   73900 out.go:270] * 
	W0930 21:16:15.660580   73900 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0930 21:16:15.660595   73900 out.go:270] * 
	W0930 21:16:15.661387   73900 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 21:16:15.665510   73900 out.go:201] 
	W0930 21:16:15.667332   73900 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0930 21:16:15.667373   73900 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0930 21:16:15.667390   73900 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0930 21:16:15.668812   73900 out.go:201] 
	
	
	==> CRI-O <==
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.053442570Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731521053412860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e324b79e-3db5-487a-b8b7-ba8518c6542e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.053962129Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aee8b47f-71a5-4df5-983c-9118037c1bdc name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.054058515Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aee8b47f-71a5-4df5-983c-9118037c1bdc name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.054102544Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=aee8b47f-71a5-4df5-983c-9118037c1bdc name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.084620312Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7c04bf55-6585-4410-946e-f46d2efe8382 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.084706389Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7c04bf55-6585-4410-946e-f46d2efe8382 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.085855592Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fc4f5aa3-d5ed-4b0b-8ea2-a4b14a85ba66 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.086312658Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731521086279587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fc4f5aa3-d5ed-4b0b-8ea2-a4b14a85ba66 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.086850036Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a28cdd08-f94b-4d90-8db4-f90200cd8567 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.086896033Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a28cdd08-f94b-4d90-8db4-f90200cd8567 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.086931016Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a28cdd08-f94b-4d90-8db4-f90200cd8567 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.119075857Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d7041006-4180-4c05-b8c4-41194233c665 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.119149490Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d7041006-4180-4c05-b8c4-41194233c665 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.120666367Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e033009e-d247-492b-ae35-cf5e7a971e50 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.121189279Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731521121165375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e033009e-d247-492b-ae35-cf5e7a971e50 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.121956716Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c262f414-aa15-4007-8171-0fd27bdead21 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.122012960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c262f414-aa15-4007-8171-0fd27bdead21 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.122085108Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c262f414-aa15-4007-8171-0fd27bdead21 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.152676701Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2187d309-3a36-448a-a132-ac1745acae74 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.152750750Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2187d309-3a36-448a-a132-ac1745acae74 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.153870715Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e4071e37-5d66-405e-89b6-91b98c8d789d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.154326417Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731521154298617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4071e37-5d66-405e-89b6-91b98c8d789d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.154856685Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a6e0533-a13f-4aa3-8634-7935c13ae31e name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.154908351Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a6e0533-a13f-4aa3-8634-7935c13ae31e name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:25:21 old-k8s-version-621406 crio[636]: time="2024-09-30 21:25:21.154943419Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9a6e0533-a13f-4aa3-8634-7935c13ae31e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep30 21:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055405] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042801] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.194174] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Sep30 21:08] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.574996] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.760000] systemd-fstab-generator[563]: Ignoring "noauto" option for root device
	[  +0.059497] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069559] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.192698] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.144274] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.303445] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +6.753345] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.065939] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.694211] systemd-fstab-generator[1011]: Ignoring "noauto" option for root device
	[ +12.297674] kauditd_printk_skb: 46 callbacks suppressed
	[Sep30 21:12] systemd-fstab-generator[5042]: Ignoring "noauto" option for root device
	[Sep30 21:14] systemd-fstab-generator[5322]: Ignoring "noauto" option for root device
	[  +0.065961] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:25:21 up 17 min,  0 users,  load average: 0.00, 0.03, 0.03
	Linux old-k8s-version-621406 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 30 21:25:15 old-k8s-version-621406 kubelet[6501]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc000b49560)
	Sep 30 21:25:15 old-k8s-version-621406 kubelet[6501]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Sep 30 21:25:15 old-k8s-version-621406 kubelet[6501]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Sep 30 21:25:15 old-k8s-version-621406 kubelet[6501]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Sep 30 21:25:15 old-k8s-version-621406 kubelet[6501]: goroutine 154 [select]:
	Sep 30 21:25:15 old-k8s-version-621406 kubelet[6501]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000ba3ef0, 0x4f0ac20, 0xc000113ea0, 0x1, 0xc00009e0c0)
	Sep 30 21:25:15 old-k8s-version-621406 kubelet[6501]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Sep 30 21:25:15 old-k8s-version-621406 kubelet[6501]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000a522a0, 0xc00009e0c0)
	Sep 30 21:25:15 old-k8s-version-621406 kubelet[6501]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Sep 30 21:25:15 old-k8s-version-621406 kubelet[6501]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 30 21:25:15 old-k8s-version-621406 kubelet[6501]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 30 21:25:15 old-k8s-version-621406 kubelet[6501]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000a78480, 0xc000b7f6e0)
	Sep 30 21:25:15 old-k8s-version-621406 kubelet[6501]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 30 21:25:15 old-k8s-version-621406 kubelet[6501]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 30 21:25:15 old-k8s-version-621406 kubelet[6501]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 30 21:25:15 old-k8s-version-621406 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 30 21:25:15 old-k8s-version-621406 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 30 21:25:16 old-k8s-version-621406 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Sep 30 21:25:16 old-k8s-version-621406 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 30 21:25:16 old-k8s-version-621406 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 30 21:25:16 old-k8s-version-621406 kubelet[6510]: I0930 21:25:16.614287    6510 server.go:416] Version: v1.20.0
	Sep 30 21:25:16 old-k8s-version-621406 kubelet[6510]: I0930 21:25:16.614719    6510 server.go:837] Client rotation is on, will bootstrap in background
	Sep 30 21:25:16 old-k8s-version-621406 kubelet[6510]: I0930 21:25:16.616917    6510 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 30 21:25:16 old-k8s-version-621406 kubelet[6510]: I0930 21:25:16.618108    6510 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Sep 30 21:25:16 old-k8s-version-621406 kubelet[6510]: W0930 21:25:16.618176    6510 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-621406 -n old-k8s-version-621406
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-621406 -n old-k8s-version-621406: exit status 2 (227.124896ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-621406" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.56s)
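Note: the kubelet log tail above ("Cannot detect current cgroup on cgroup v2", restart counter at 114) and minikube's own suggestion in this log both point at a kubelet/CRI-O cgroup-driver mismatch on the v1.20.0 node. A minimal, untested sketch of the suggested remediation follows; it reuses only the profile name, driver, runtime and Kubernetes version already recorded in this log and is not part of the test output:

	# sketch only: re-run the failing start with the kubelet cgroup driver pinned to systemd,
	# as suggested by minikube in the output above
	minikube start -p old-k8s-version-621406 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

	# inspect the kubelet and any crashed control-plane containers on the node
	minikube ssh -p old-k8s-version-621406 "sudo journalctl -xeu kubelet | tail -n 100"
	minikube ssh -p old-k8s-version-621406 "sudo crictl ps -a | grep kube | grep -v pause"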

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (420.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-997816 -n no-preload-997816
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-30 21:28:21.866820897 +0000 UTC m=+6622.571577021
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-997816 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-997816 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.865µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-997816 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
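Note: a quick manual check mirroring what this assertion waits for, assuming the no-preload-997816 context is still reachable (the deadline here expired in the test harness, not necessarily in the cluster), would be:

	kubectl --context no-preload-997816 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-997816 get deploy dashboard-metrics-scraper -n kubernetes-dashboard \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'

The second command prints the scraper image, which the test expects to contain registry.k8s.io/echoserver:1.4.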
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-997816 -n no-preload-997816
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-997816 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-997816 logs -n 25: (1.11324934s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-207733 sudo find                            | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo crio                            | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-207733                                      | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-741890 | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | disable-driver-mounts-741890                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 21:00 UTC |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-256103            | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-997816             | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-997816                                   | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-291511  | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-621406        | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-256103                 | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC | 30 Sep 24 21:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-997816                  | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-997816                                   | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC | 30 Sep 24 21:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-291511       | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:12 UTC |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-621406                              | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:03 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-621406             | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-621406                              | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-621406                              | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:27 UTC | 30 Sep 24 21:27 UTC |
	| start   | -p newest-cni-921796 --memory=2200 --alsologtostderr   | newest-cni-921796            | jenkins | v1.34.0 | 30 Sep 24 21:27 UTC | 30 Sep 24 21:28 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-921796             | newest-cni-921796            | jenkins | v1.34.0 | 30 Sep 24 21:28 UTC | 30 Sep 24 21:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-921796                                   | newest-cni-921796            | jenkins | v1.34.0 | 30 Sep 24 21:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 21:27:31
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 21:27:31.114793   80623 out.go:345] Setting OutFile to fd 1 ...
	I0930 21:27:31.114916   80623 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:27:31.114925   80623 out.go:358] Setting ErrFile to fd 2...
	I0930 21:27:31.114930   80623 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:27:31.115102   80623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 21:27:31.115793   80623 out.go:352] Setting JSON to false
	I0930 21:27:31.116790   80623 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7794,"bootTime":1727723857,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 21:27:31.116911   80623 start.go:139] virtualization: kvm guest
	I0930 21:27:31.119370   80623 out.go:177] * [newest-cni-921796] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 21:27:31.121001   80623 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 21:27:31.121014   80623 notify.go:220] Checking for updates...
	I0930 21:27:31.123885   80623 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 21:27:31.125354   80623 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:27:31.126795   80623 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 21:27:31.128236   80623 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 21:27:31.129603   80623 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 21:27:31.131184   80623 config.go:182] Loaded profile config "default-k8s-diff-port-291511": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:27:31.131277   80623 config.go:182] Loaded profile config "embed-certs-256103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:27:31.131363   80623 config.go:182] Loaded profile config "no-preload-997816": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:27:31.131428   80623 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 21:27:31.168930   80623 out.go:177] * Using the kvm2 driver based on user configuration
	I0930 21:27:31.170428   80623 start.go:297] selected driver: kvm2
	I0930 21:27:31.170445   80623 start.go:901] validating driver "kvm2" against <nil>
	I0930 21:27:31.170456   80623 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 21:27:31.171107   80623 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 21:27:31.171196   80623 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 21:27:31.187024   80623 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 21:27:31.187073   80623 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0930 21:27:31.187140   80623 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0930 21:27:31.187446   80623 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0930 21:27:31.187504   80623 cni.go:84] Creating CNI manager for ""
	I0930 21:27:31.187582   80623 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:27:31.187597   80623 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
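	With --network-plugin=cni and no explicit --cni value, minikube settles on its built-in bridge CNI for the kvm2 + crio combination, which is what the two lines above record. A quick sanity check once the node is up, assuming the standard /etc/cni/net.d location (the exact filename varies by minikube version) and writing plain minikube for the binary under test (out/minikube-linux-amd64 in this job):
	
	    # list the CNI config the bridge plugin dropped on the node
	    minikube ssh -p newest-cni-921796 -- sudo ls -l /etc/cni/net.d/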
	I0930 21:27:31.187695   80623 start.go:340] cluster config:
	{Name:newest-cni-921796 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-921796 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
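	The generated config carries the flags from the start invocation: 2200 MB of memory, the kubeadm extra option pod-network-cidr=10.42.0.0/16, the ServerSideApply=true feature gate, and a VerifyComponents map that waits only for apiserver, default_sa and system_pods (matching --wait=apiserver,system_pods,default_sa). Once the API server is reachable, the pod CIDR can be cross-checked against the kubeadm ClusterConfiguration; the kubectl context name below assumes minikube's usual profile-named context:
	
	    # podSubnet should come back as 10.42.0.0/16
	    kubectl --context newest-cni-921796 -n kube-system get configmap kubeadm-config -o yaml | grep -i podSubnet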
	I0930 21:27:31.187808   80623 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 21:27:31.190042   80623 out.go:177] * Starting "newest-cni-921796" primary control-plane node in "newest-cni-921796" cluster
	I0930 21:27:31.191430   80623 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 21:27:31.191487   80623 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 21:27:31.191501   80623 cache.go:56] Caching tarball of preloaded images
	I0930 21:27:31.191639   80623 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 21:27:31.191656   80623 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
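	The start therefore skips the tarball download and works from the cached preload. Its presence and size can be confirmed directly on the host, using the path from the log:
	
	    ls -lh /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4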
	I0930 21:27:31.191779   80623 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/config.json ...
	I0930 21:27:31.191805   80623 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/config.json: {Name:mkb56310a04789b1759231d88312c8e06cf22f3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:27:31.191988   80623 start.go:360] acquireMachinesLock for newest-cni-921796: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 21:27:31.192040   80623 start.go:364] duration metric: took 30.478µs to acquireMachinesLock for "newest-cni-921796"
	I0930 21:27:31.192063   80623 start.go:93] Provisioning new machine with config: &{Name:newest-cni-921796 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-921796 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 21:27:31.192151   80623 start.go:125] createHost starting for "" (driver="kvm2")
	I0930 21:27:31.193966   80623 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 21:27:31.194126   80623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:27:31.194174   80623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:27:31.209879   80623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42927
	I0930 21:27:31.210429   80623 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:27:31.211120   80623 main.go:141] libmachine: Using API Version  1
	I0930 21:27:31.211141   80623 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:27:31.211454   80623 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:27:31.211650   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetMachineName
	I0930 21:27:31.211845   80623 main.go:141] libmachine: (newest-cni-921796) Calling .DriverName
	I0930 21:27:31.211991   80623 start.go:159] libmachine.API.Create for "newest-cni-921796" (driver="kvm2")
	I0930 21:27:31.212018   80623 client.go:168] LocalClient.Create starting
	I0930 21:27:31.212062   80623 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem
	I0930 21:27:31.212105   80623 main.go:141] libmachine: Decoding PEM data...
	I0930 21:27:31.212128   80623 main.go:141] libmachine: Parsing certificate...
	I0930 21:27:31.212190   80623 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem
	I0930 21:27:31.212221   80623 main.go:141] libmachine: Decoding PEM data...
	I0930 21:27:31.212238   80623 main.go:141] libmachine: Parsing certificate...
	I0930 21:27:31.212262   80623 main.go:141] libmachine: Running pre-create checks...
	I0930 21:27:31.212273   80623 main.go:141] libmachine: (newest-cni-921796) Calling .PreCreateCheck
	I0930 21:27:31.212680   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetConfigRaw
	I0930 21:27:31.213089   80623 main.go:141] libmachine: Creating machine...
	I0930 21:27:31.213106   80623 main.go:141] libmachine: (newest-cni-921796) Calling .Create
	I0930 21:27:31.213247   80623 main.go:141] libmachine: (newest-cni-921796) Creating KVM machine...
	I0930 21:27:31.214520   80623 main.go:141] libmachine: (newest-cni-921796) DBG | found existing default KVM network
	I0930 21:27:31.215700   80623 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:27:31.215542   80645 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:3f:ad:f9} reservation:<nil>}
	I0930 21:27:31.216491   80623 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:27:31.216429   80645 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:88:e2:98} reservation:<nil>}
	I0930 21:27:31.217175   80623 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:27:31.217087   80645 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:40:04:75} reservation:<nil>}
	I0930 21:27:31.218141   80623 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:27:31.218074   80645 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00037efc0}
	I0930 21:27:31.218163   80623 main.go:141] libmachine: (newest-cni-921796) DBG | created network xml: 
	I0930 21:27:31.218174   80623 main.go:141] libmachine: (newest-cni-921796) DBG | <network>
	I0930 21:27:31.218186   80623 main.go:141] libmachine: (newest-cni-921796) DBG |   <name>mk-newest-cni-921796</name>
	I0930 21:27:31.218204   80623 main.go:141] libmachine: (newest-cni-921796) DBG |   <dns enable='no'/>
	I0930 21:27:31.218214   80623 main.go:141] libmachine: (newest-cni-921796) DBG |   
	I0930 21:27:31.218221   80623 main.go:141] libmachine: (newest-cni-921796) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0930 21:27:31.218232   80623 main.go:141] libmachine: (newest-cni-921796) DBG |     <dhcp>
	I0930 21:27:31.218247   80623 main.go:141] libmachine: (newest-cni-921796) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0930 21:27:31.218254   80623 main.go:141] libmachine: (newest-cni-921796) DBG |     </dhcp>
	I0930 21:27:31.218260   80623 main.go:141] libmachine: (newest-cni-921796) DBG |   </ip>
	I0930 21:27:31.218266   80623 main.go:141] libmachine: (newest-cni-921796) DBG |   
	I0930 21:27:31.218274   80623 main.go:141] libmachine: (newest-cni-921796) DBG | </network>
	I0930 21:27:31.218280   80623 main.go:141] libmachine: (newest-cni-921796) DBG | 
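	The XML above is the per-profile libvirt network minikube generates: DNS disabled, the first free private /24 it found (192.168.72.0/24 after skipping the three subnets already in use), and a DHCP pool of .2-.253. Assuming the libvirt client tools (virsh) are installed on the host, the created network and its leases can be inspected with:
	
	    virsh --connect qemu:///system net-dumpxml mk-newest-cni-921796
	    virsh --connect qemu:///system net-dhcp-leases mk-newest-cni-921796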
	I0930 21:27:31.224063   80623 main.go:141] libmachine: (newest-cni-921796) DBG | trying to create private KVM network mk-newest-cni-921796 192.168.72.0/24...
	I0930 21:27:31.300945   80623 main.go:141] libmachine: (newest-cni-921796) Setting up store path in /home/jenkins/minikube-integration/19736-7672/.minikube/machines/newest-cni-921796 ...
	I0930 21:27:31.300985   80623 main.go:141] libmachine: (newest-cni-921796) Building disk image from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 21:27:31.301000   80623 main.go:141] libmachine: (newest-cni-921796) DBG | private KVM network mk-newest-cni-921796 192.168.72.0/24 created
	I0930 21:27:31.301019   80623 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:27:31.300874   80645 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 21:27:31.301041   80623 main.go:141] libmachine: (newest-cni-921796) Downloading /home/jenkins/minikube-integration/19736-7672/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 21:27:31.567622   80623 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:27:31.567427   80645 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/newest-cni-921796/id_rsa...
	I0930 21:27:32.073021   80623 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:27:32.072878   80645 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/newest-cni-921796/newest-cni-921796.rawdisk...
	I0930 21:27:32.073055   80623 main.go:141] libmachine: (newest-cni-921796) DBG | Writing magic tar header
	I0930 21:27:32.073074   80623 main.go:141] libmachine: (newest-cni-921796) DBG | Writing SSH key tar header
	I0930 21:27:32.073085   80623 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:27:32.072990   80645 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/newest-cni-921796 ...
	I0930 21:27:32.073106   80623 main.go:141] libmachine: (newest-cni-921796) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/newest-cni-921796
	I0930 21:27:32.073120   80623 main.go:141] libmachine: (newest-cni-921796) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube/machines
	I0930 21:27:32.073128   80623 main.go:141] libmachine: (newest-cni-921796) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines/newest-cni-921796 (perms=drwx------)
	I0930 21:27:32.073155   80623 main.go:141] libmachine: (newest-cni-921796) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube/machines (perms=drwxr-xr-x)
	I0930 21:27:32.073171   80623 main.go:141] libmachine: (newest-cni-921796) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672/.minikube (perms=drwxr-xr-x)
	I0930 21:27:32.073182   80623 main.go:141] libmachine: (newest-cni-921796) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 21:27:32.073193   80623 main.go:141] libmachine: (newest-cni-921796) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-7672
	I0930 21:27:32.073202   80623 main.go:141] libmachine: (newest-cni-921796) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 21:27:32.073212   80623 main.go:141] libmachine: (newest-cni-921796) Setting executable bit set on /home/jenkins/minikube-integration/19736-7672 (perms=drwxrwxr-x)
	I0930 21:27:32.073224   80623 main.go:141] libmachine: (newest-cni-921796) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 21:27:32.073232   80623 main.go:141] libmachine: (newest-cni-921796) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 21:27:32.073242   80623 main.go:141] libmachine: (newest-cni-921796) Creating domain...
	I0930 21:27:32.073281   80623 main.go:141] libmachine: (newest-cni-921796) DBG | Checking permissions on dir: /home/jenkins
	I0930 21:27:32.073306   80623 main.go:141] libmachine: (newest-cni-921796) DBG | Checking permissions on dir: /home
	I0930 21:27:32.073320   80623 main.go:141] libmachine: (newest-cni-921796) DBG | Skipping /home - not owner
	I0930 21:27:32.074327   80623 main.go:141] libmachine: (newest-cni-921796) define libvirt domain using xml: 
	I0930 21:27:32.074350   80623 main.go:141] libmachine: (newest-cni-921796) <domain type='kvm'>
	I0930 21:27:32.074362   80623 main.go:141] libmachine: (newest-cni-921796)   <name>newest-cni-921796</name>
	I0930 21:27:32.074384   80623 main.go:141] libmachine: (newest-cni-921796)   <memory unit='MiB'>2200</memory>
	I0930 21:27:32.074397   80623 main.go:141] libmachine: (newest-cni-921796)   <vcpu>2</vcpu>
	I0930 21:27:32.074403   80623 main.go:141] libmachine: (newest-cni-921796)   <features>
	I0930 21:27:32.074421   80623 main.go:141] libmachine: (newest-cni-921796)     <acpi/>
	I0930 21:27:32.074431   80623 main.go:141] libmachine: (newest-cni-921796)     <apic/>
	I0930 21:27:32.074443   80623 main.go:141] libmachine: (newest-cni-921796)     <pae/>
	I0930 21:27:32.074451   80623 main.go:141] libmachine: (newest-cni-921796)     
	I0930 21:27:32.074459   80623 main.go:141] libmachine: (newest-cni-921796)   </features>
	I0930 21:27:32.074467   80623 main.go:141] libmachine: (newest-cni-921796)   <cpu mode='host-passthrough'>
	I0930 21:27:32.074478   80623 main.go:141] libmachine: (newest-cni-921796)   
	I0930 21:27:32.074487   80623 main.go:141] libmachine: (newest-cni-921796)   </cpu>
	I0930 21:27:32.074498   80623 main.go:141] libmachine: (newest-cni-921796)   <os>
	I0930 21:27:32.074508   80623 main.go:141] libmachine: (newest-cni-921796)     <type>hvm</type>
	I0930 21:27:32.074518   80623 main.go:141] libmachine: (newest-cni-921796)     <boot dev='cdrom'/>
	I0930 21:27:32.074537   80623 main.go:141] libmachine: (newest-cni-921796)     <boot dev='hd'/>
	I0930 21:27:32.074548   80623 main.go:141] libmachine: (newest-cni-921796)     <bootmenu enable='no'/>
	I0930 21:27:32.074555   80623 main.go:141] libmachine: (newest-cni-921796)   </os>
	I0930 21:27:32.074566   80623 main.go:141] libmachine: (newest-cni-921796)   <devices>
	I0930 21:27:32.074582   80623 main.go:141] libmachine: (newest-cni-921796)     <disk type='file' device='cdrom'>
	I0930 21:27:32.074599   80623 main.go:141] libmachine: (newest-cni-921796)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/newest-cni-921796/boot2docker.iso'/>
	I0930 21:27:32.074618   80623 main.go:141] libmachine: (newest-cni-921796)       <target dev='hdc' bus='scsi'/>
	I0930 21:27:32.074630   80623 main.go:141] libmachine: (newest-cni-921796)       <readonly/>
	I0930 21:27:32.074640   80623 main.go:141] libmachine: (newest-cni-921796)     </disk>
	I0930 21:27:32.074650   80623 main.go:141] libmachine: (newest-cni-921796)     <disk type='file' device='disk'>
	I0930 21:27:32.074662   80623 main.go:141] libmachine: (newest-cni-921796)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 21:27:32.074679   80623 main.go:141] libmachine: (newest-cni-921796)       <source file='/home/jenkins/minikube-integration/19736-7672/.minikube/machines/newest-cni-921796/newest-cni-921796.rawdisk'/>
	I0930 21:27:32.074690   80623 main.go:141] libmachine: (newest-cni-921796)       <target dev='hda' bus='virtio'/>
	I0930 21:27:32.074699   80623 main.go:141] libmachine: (newest-cni-921796)     </disk>
	I0930 21:27:32.074709   80623 main.go:141] libmachine: (newest-cni-921796)     <interface type='network'>
	I0930 21:27:32.074744   80623 main.go:141] libmachine: (newest-cni-921796)       <source network='mk-newest-cni-921796'/>
	I0930 21:27:32.074773   80623 main.go:141] libmachine: (newest-cni-921796)       <model type='virtio'/>
	I0930 21:27:32.074786   80623 main.go:141] libmachine: (newest-cni-921796)     </interface>
	I0930 21:27:32.074795   80623 main.go:141] libmachine: (newest-cni-921796)     <interface type='network'>
	I0930 21:27:32.074804   80623 main.go:141] libmachine: (newest-cni-921796)       <source network='default'/>
	I0930 21:27:32.074812   80623 main.go:141] libmachine: (newest-cni-921796)       <model type='virtio'/>
	I0930 21:27:32.074822   80623 main.go:141] libmachine: (newest-cni-921796)     </interface>
	I0930 21:27:32.074846   80623 main.go:141] libmachine: (newest-cni-921796)     <serial type='pty'>
	I0930 21:27:32.074871   80623 main.go:141] libmachine: (newest-cni-921796)       <target port='0'/>
	I0930 21:27:32.074889   80623 main.go:141] libmachine: (newest-cni-921796)     </serial>
	I0930 21:27:32.074902   80623 main.go:141] libmachine: (newest-cni-921796)     <console type='pty'>
	I0930 21:27:32.074913   80623 main.go:141] libmachine: (newest-cni-921796)       <target type='serial' port='0'/>
	I0930 21:27:32.074924   80623 main.go:141] libmachine: (newest-cni-921796)     </console>
	I0930 21:27:32.074938   80623 main.go:141] libmachine: (newest-cni-921796)     <rng model='virtio'>
	I0930 21:27:32.074956   80623 main.go:141] libmachine: (newest-cni-921796)       <backend model='random'>/dev/random</backend>
	I0930 21:27:32.074974   80623 main.go:141] libmachine: (newest-cni-921796)     </rng>
	I0930 21:27:32.074984   80623 main.go:141] libmachine: (newest-cni-921796)     
	I0930 21:27:32.074990   80623 main.go:141] libmachine: (newest-cni-921796)     
	I0930 21:27:32.075005   80623 main.go:141] libmachine: (newest-cni-921796)   </devices>
	I0930 21:27:32.075017   80623 main.go:141] libmachine: (newest-cni-921796) </domain>
	I0930 21:27:32.075034   80623 main.go:141] libmachine: (newest-cni-921796) 
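	The domain definition mirrors the requested machine: 2200 MiB of RAM, 2 vCPUs in host-passthrough mode, the boot2docker ISO attached as a SCSI cdrom, the raw disk as a virtio device, and two virtio NICs (one on the private mk-newest-cni-921796 network, one on libvirt's default network). Again assuming virsh is available on the host, the live definition and the assigned addresses can be checked with:
	
	    virsh --connect qemu:///system dumpxml newest-cni-921796
	    virsh --connect qemu:///system domifaddr newest-cni-921796 --source lease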
	I0930 21:27:32.079235   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:89:ab:51 in network default
	I0930 21:27:32.079857   80623 main.go:141] libmachine: (newest-cni-921796) Ensuring networks are active...
	I0930 21:27:32.079881   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:32.080685   80623 main.go:141] libmachine: (newest-cni-921796) Ensuring network default is active
	I0930 21:27:32.081034   80623 main.go:141] libmachine: (newest-cni-921796) Ensuring network mk-newest-cni-921796 is active
	I0930 21:27:32.081699   80623 main.go:141] libmachine: (newest-cni-921796) Getting domain xml...
	I0930 21:27:32.082700   80623 main.go:141] libmachine: (newest-cni-921796) Creating domain...
	I0930 21:27:33.343924   80623 main.go:141] libmachine: (newest-cni-921796) Waiting to get IP...
	I0930 21:27:33.344797   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:33.345235   80623 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:27:33.345261   80623 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:27:33.345210   80645 retry.go:31] will retry after 270.717441ms: waiting for machine to come up
	I0930 21:27:33.617755   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:33.618355   80623 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:27:33.618382   80623 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:27:33.618312   80645 retry.go:31] will retry after 235.316471ms: waiting for machine to come up
	I0930 21:27:33.854826   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:33.855294   80623 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:27:33.855325   80623 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:27:33.855255   80645 retry.go:31] will retry after 359.718709ms: waiting for machine to come up
	I0930 21:27:34.216720   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:34.217256   80623 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:27:34.217283   80623 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:27:34.217202   80645 retry.go:31] will retry after 582.255177ms: waiting for machine to come up
	I0930 21:27:34.800815   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:34.801269   80623 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:27:34.801291   80623 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:27:34.801230   80645 retry.go:31] will retry after 458.73829ms: waiting for machine to come up
	I0930 21:27:35.262059   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:35.262640   80623 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:27:35.262666   80623 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:27:35.262578   80645 retry.go:31] will retry after 905.772984ms: waiting for machine to come up
	I0930 21:27:36.169609   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:36.170082   80623 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:27:36.170113   80623 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:27:36.170011   80645 retry.go:31] will retry after 834.837932ms: waiting for machine to come up
	I0930 21:27:37.006310   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:37.006952   80623 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:27:37.006983   80623 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:27:37.006900   80645 retry.go:31] will retry after 1.134841031s: waiting for machine to come up
	I0930 21:27:38.143027   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:38.143415   80623 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:27:38.143441   80623 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:27:38.143370   80645 retry.go:31] will retry after 1.143128869s: waiting for machine to come up
	I0930 21:27:39.288674   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:39.289134   80623 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:27:39.289170   80623 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:27:39.289052   80645 retry.go:31] will retry after 1.846520249s: waiting for machine to come up
	I0930 21:27:41.137858   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:41.138292   80623 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:27:41.138317   80623 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:27:41.138248   80645 retry.go:31] will retry after 2.461466476s: waiting for machine to come up
	I0930 21:27:43.602060   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:43.602843   80623 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:27:43.602863   80623 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:27:43.602708   80645 retry.go:31] will retry after 2.670646737s: waiting for machine to come up
	I0930 21:27:46.275225   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:46.275729   80623 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:27:46.275755   80623 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:27:46.275700   80645 retry.go:31] will retry after 3.939009893s: waiting for machine to come up
	I0930 21:27:50.216585   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:50.217062   80623 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:27:50.217087   80623 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:27:50.217017   80645 retry.go:31] will retry after 4.060985329s: waiting for machine to come up
	I0930 21:27:54.281626   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:54.282138   80623 main.go:141] libmachine: (newest-cni-921796) Found IP for machine: 192.168.72.30
	I0930 21:27:54.282159   80623 main.go:141] libmachine: (newest-cni-921796) Reserving static IP address...
	I0930 21:27:54.282174   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has current primary IP address 192.168.72.30 and MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:54.282628   80623 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find host DHCP lease matching {name: "newest-cni-921796", mac: "52:54:00:c8:f4:16", ip: "192.168.72.30"} in network mk-newest-cni-921796
	I0930 21:27:54.364600   80623 main.go:141] libmachine: (newest-cni-921796) DBG | Getting to WaitForSSH function...
	I0930 21:27:54.364623   80623 main.go:141] libmachine: (newest-cni-921796) Reserved static IP address: 192.168.72.30
	I0930 21:27:54.364635   80623 main.go:141] libmachine: (newest-cni-921796) Waiting for SSH to be available...
	I0930 21:27:54.367470   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:54.367907   80623 main.go:141] libmachine: (newest-cni-921796) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f4:16", ip: ""} in network mk-newest-cni-921796: {Iface:virbr4 ExpiryTime:2024-09-30 22:27:45 +0000 UTC Type:0 Mac:52:54:00:c8:f4:16 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c8:f4:16}
	I0930 21:27:54.367964   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined IP address 192.168.72.30 and MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:54.368274   80623 main.go:141] libmachine: (newest-cni-921796) DBG | Using SSH client type: external
	I0930 21:27:54.368304   80623 main.go:141] libmachine: (newest-cni-921796) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/newest-cni-921796/id_rsa (-rw-------)
	I0930 21:27:54.368343   80623 main.go:141] libmachine: (newest-cni-921796) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/newest-cni-921796/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:27:54.368353   80623 main.go:141] libmachine: (newest-cni-921796) DBG | About to run SSH command:
	I0930 21:27:54.368368   80623 main.go:141] libmachine: (newest-cni-921796) DBG | exit 0
	I0930 21:27:54.495269   80623 main.go:141] libmachine: (newest-cni-921796) DBG | SSH cmd err, output: <nil>: 
	I0930 21:27:54.495554   80623 main.go:141] libmachine: (newest-cni-921796) KVM machine creation complete!
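	The wait loop above polls the libvirt DHCP leases with growing backoff (from a few hundred milliseconds up to ~4s) until the NIC on mk-newest-cni-921796 obtains 192.168.72.30 at 21:27:54, reserves that address for the machine, and then probes the guest with "exit 0" over SSH using the generated key. The same two checks can be reproduced by hand with the key path and address from the log, assuming virsh is present on the host:
	
	    virsh --connect qemu:///system net-dhcp-leases mk-newest-cni-921796
	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/newest-cni-921796/id_rsa \
	        docker@192.168.72.30 'exit 0' && echo ssh-ok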
	I0930 21:27:54.495854   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetConfigRaw
	I0930 21:27:54.496373   80623 main.go:141] libmachine: (newest-cni-921796) Calling .DriverName
	I0930 21:27:54.496555   80623 main.go:141] libmachine: (newest-cni-921796) Calling .DriverName
	I0930 21:27:54.496718   80623 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 21:27:54.496732   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetState
	I0930 21:27:54.498048   80623 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 21:27:54.498062   80623 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 21:27:54.498070   80623 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 21:27:54.498078   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHHostname
	I0930 21:27:54.500738   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:54.501094   80623 main.go:141] libmachine: (newest-cni-921796) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f4:16", ip: ""} in network mk-newest-cni-921796: {Iface:virbr4 ExpiryTime:2024-09-30 22:27:45 +0000 UTC Type:0 Mac:52:54:00:c8:f4:16 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:newest-cni-921796 Clientid:01:52:54:00:c8:f4:16}
	I0930 21:27:54.501120   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined IP address 192.168.72.30 and MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:54.501253   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHPort
	I0930 21:27:54.501424   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHKeyPath
	I0930 21:27:54.501555   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHKeyPath
	I0930 21:27:54.501666   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHUsername
	I0930 21:27:54.501801   80623 main.go:141] libmachine: Using SSH client type: native
	I0930 21:27:54.502004   80623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0930 21:27:54.502016   80623 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 21:27:54.606956   80623 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:27:54.606984   80623 main.go:141] libmachine: Detecting the provisioner...
	I0930 21:27:54.606994   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHHostname
	I0930 21:27:54.610284   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:54.610616   80623 main.go:141] libmachine: (newest-cni-921796) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f4:16", ip: ""} in network mk-newest-cni-921796: {Iface:virbr4 ExpiryTime:2024-09-30 22:27:45 +0000 UTC Type:0 Mac:52:54:00:c8:f4:16 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:newest-cni-921796 Clientid:01:52:54:00:c8:f4:16}
	I0930 21:27:54.610642   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined IP address 192.168.72.30 and MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:54.610808   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHPort
	I0930 21:27:54.610997   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHKeyPath
	I0930 21:27:54.611132   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHKeyPath
	I0930 21:27:54.611277   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHUsername
	I0930 21:27:54.611446   80623 main.go:141] libmachine: Using SSH client type: native
	I0930 21:27:54.611652   80623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0930 21:27:54.611665   80623 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 21:27:54.715983   80623 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 21:27:54.716062   80623 main.go:141] libmachine: found compatible host: buildroot
	I0930 21:27:54.716074   80623 main.go:141] libmachine: Provisioning with buildroot...
	I0930 21:27:54.716088   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetMachineName
	I0930 21:27:54.716340   80623 buildroot.go:166] provisioning hostname "newest-cni-921796"
	I0930 21:27:54.716371   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetMachineName
	I0930 21:27:54.716579   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHHostname
	I0930 21:27:54.719525   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:54.719943   80623 main.go:141] libmachine: (newest-cni-921796) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f4:16", ip: ""} in network mk-newest-cni-921796: {Iface:virbr4 ExpiryTime:2024-09-30 22:27:45 +0000 UTC Type:0 Mac:52:54:00:c8:f4:16 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:newest-cni-921796 Clientid:01:52:54:00:c8:f4:16}
	I0930 21:27:54.719965   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined IP address 192.168.72.30 and MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:54.720169   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHPort
	I0930 21:27:54.720357   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHKeyPath
	I0930 21:27:54.720524   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHKeyPath
	I0930 21:27:54.720656   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHUsername
	I0930 21:27:54.720852   80623 main.go:141] libmachine: Using SSH client type: native
	I0930 21:27:54.721058   80623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0930 21:27:54.721072   80623 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-921796 && echo "newest-cni-921796" | sudo tee /etc/hostname
	I0930 21:27:54.845755   80623 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-921796
	
	I0930 21:27:54.845784   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHHostname
	I0930 21:27:54.848649   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:54.849029   80623 main.go:141] libmachine: (newest-cni-921796) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f4:16", ip: ""} in network mk-newest-cni-921796: {Iface:virbr4 ExpiryTime:2024-09-30 22:27:45 +0000 UTC Type:0 Mac:52:54:00:c8:f4:16 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:newest-cni-921796 Clientid:01:52:54:00:c8:f4:16}
	I0930 21:27:54.849052   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined IP address 192.168.72.30 and MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:54.849235   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHPort
	I0930 21:27:54.849425   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHKeyPath
	I0930 21:27:54.849584   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHKeyPath
	I0930 21:27:54.849737   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHUsername
	I0930 21:27:54.849868   80623 main.go:141] libmachine: Using SSH client type: native
	I0930 21:27:54.850095   80623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0930 21:27:54.850121   80623 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-921796' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-921796/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-921796' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:27:54.964211   80623 main.go:141] libmachine: SSH cmd err, output: <nil>: 
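	The multi-line shell fragment just above is idempotent: it leaves /etc/hosts alone when a line already ends in the new hostname, rewrites an existing 127.0.1.1 entry if one is present, and appends one otherwise, so repeated provisioning passes do not accumulate duplicates. It can be spot-checked with (as before, writing plain minikube for the binary under test):
	
	    minikube ssh -p newest-cni-921796 -- grep 127.0.1.1 /etc/hosts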
	I0930 21:27:54.964244   80623 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:27:54.964275   80623 buildroot.go:174] setting up certificates
	I0930 21:27:54.964286   80623 provision.go:84] configureAuth start
	I0930 21:27:54.964303   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetMachineName
	I0930 21:27:54.964704   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetIP
	I0930 21:27:54.967523   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:54.967935   80623 main.go:141] libmachine: (newest-cni-921796) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f4:16", ip: ""} in network mk-newest-cni-921796: {Iface:virbr4 ExpiryTime:2024-09-30 22:27:45 +0000 UTC Type:0 Mac:52:54:00:c8:f4:16 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:newest-cni-921796 Clientid:01:52:54:00:c8:f4:16}
	I0930 21:27:54.967969   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined IP address 192.168.72.30 and MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:54.968068   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHHostname
	I0930 21:27:54.970433   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:54.970729   80623 main.go:141] libmachine: (newest-cni-921796) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f4:16", ip: ""} in network mk-newest-cni-921796: {Iface:virbr4 ExpiryTime:2024-09-30 22:27:45 +0000 UTC Type:0 Mac:52:54:00:c8:f4:16 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:newest-cni-921796 Clientid:01:52:54:00:c8:f4:16}
	I0930 21:27:54.970763   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined IP address 192.168.72.30 and MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:54.970864   80623 provision.go:143] copyHostCerts
	I0930 21:27:54.970925   80623 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:27:54.970935   80623 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:27:54.970999   80623 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:27:54.971091   80623 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:27:54.971099   80623 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:27:54.971122   80623 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:27:54.971177   80623 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:27:54.971184   80623 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:27:54.971203   80623 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:27:54.971280   80623 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.newest-cni-921796 san=[127.0.0.1 192.168.72.30 localhost minikube newest-cni-921796]
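	configureAuth mints the machine server certificate with the SANs listed above (127.0.0.1, the reserved 192.168.72.30, and the localhost/minikube/newest-cni-921796 names), signed by the CA under .minikube/certs. The SANs can be confirmed on the host with openssl, using the ServerCertPath shown in the auth options above:
	
	    openssl x509 -noout -text \
	        -in /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'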
	I0930 21:27:55.154234   80623 provision.go:177] copyRemoteCerts
	I0930 21:27:55.154293   80623 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:27:55.154316   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHHostname
	I0930 21:27:55.157001   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:55.157355   80623 main.go:141] libmachine: (newest-cni-921796) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f4:16", ip: ""} in network mk-newest-cni-921796: {Iface:virbr4 ExpiryTime:2024-09-30 22:27:45 +0000 UTC Type:0 Mac:52:54:00:c8:f4:16 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:newest-cni-921796 Clientid:01:52:54:00:c8:f4:16}
	I0930 21:27:55.157407   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined IP address 192.168.72.30 and MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:55.157507   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHPort
	I0930 21:27:55.157708   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHKeyPath
	I0930 21:27:55.157851   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHUsername
	I0930 21:27:55.157968   80623 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/newest-cni-921796/id_rsa Username:docker}
	I0930 21:27:55.241848   80623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:27:55.267338   80623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0930 21:27:55.293195   80623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 21:27:55.317601   80623 provision.go:87] duration metric: took 353.301018ms to configureAuth
	I0930 21:27:55.317631   80623 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:27:55.317827   80623 config.go:182] Loaded profile config "newest-cni-921796": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:27:55.317926   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHHostname
	I0930 21:27:55.320667   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:55.320984   80623 main.go:141] libmachine: (newest-cni-921796) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f4:16", ip: ""} in network mk-newest-cni-921796: {Iface:virbr4 ExpiryTime:2024-09-30 22:27:45 +0000 UTC Type:0 Mac:52:54:00:c8:f4:16 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:newest-cni-921796 Clientid:01:52:54:00:c8:f4:16}
	I0930 21:27:55.321015   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined IP address 192.168.72.30 and MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:55.321205   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHPort
	I0930 21:27:55.321429   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHKeyPath
	I0930 21:27:55.321595   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHKeyPath
	I0930 21:27:55.321773   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHUsername
	I0930 21:27:55.321947   80623 main.go:141] libmachine: Using SSH client type: native
	I0930 21:27:55.322150   80623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0930 21:27:55.322172   80623 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:27:55.550998   80623 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
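	The command writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube, marking the 10.96.0.0/12 service CIDR as an insecure registry range for CRI-O, and restarts the crio unit so it takes effect; the echoed value above confirms the file content. With SSH access as in the log (again writing plain minikube for the binary under test), it can be verified with:
	
	    minikube ssh -p newest-cni-921796 -- sudo cat /etc/sysconfig/crio.minikube
	    minikube ssh -p newest-cni-921796 -- sudo systemctl is-active crio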
	
	I0930 21:27:55.551026   80623 main.go:141] libmachine: Checking connection to Docker...
	I0930 21:27:55.551051   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetURL
	I0930 21:27:55.552597   80623 main.go:141] libmachine: (newest-cni-921796) DBG | Using libvirt version 6000000
	I0930 21:27:55.555024   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:55.555403   80623 main.go:141] libmachine: (newest-cni-921796) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f4:16", ip: ""} in network mk-newest-cni-921796: {Iface:virbr4 ExpiryTime:2024-09-30 22:27:45 +0000 UTC Type:0 Mac:52:54:00:c8:f4:16 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:newest-cni-921796 Clientid:01:52:54:00:c8:f4:16}
	I0930 21:27:55.555431   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined IP address 192.168.72.30 and MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:55.555640   80623 main.go:141] libmachine: Docker is up and running!
	I0930 21:27:55.555656   80623 main.go:141] libmachine: Reticulating splines...
	I0930 21:27:55.555662   80623 client.go:171] duration metric: took 24.343634629s to LocalClient.Create
	I0930 21:27:55.555680   80623 start.go:167] duration metric: took 24.343692022s to libmachine.API.Create "newest-cni-921796"
	I0930 21:27:55.555688   80623 start.go:293] postStartSetup for "newest-cni-921796" (driver="kvm2")
	I0930 21:27:55.555697   80623 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:27:55.555712   80623 main.go:141] libmachine: (newest-cni-921796) Calling .DriverName
	I0930 21:27:55.555924   80623 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:27:55.555957   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHHostname
	I0930 21:27:55.558104   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:55.558398   80623 main.go:141] libmachine: (newest-cni-921796) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f4:16", ip: ""} in network mk-newest-cni-921796: {Iface:virbr4 ExpiryTime:2024-09-30 22:27:45 +0000 UTC Type:0 Mac:52:54:00:c8:f4:16 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:newest-cni-921796 Clientid:01:52:54:00:c8:f4:16}
	I0930 21:27:55.558418   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined IP address 192.168.72.30 and MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:55.558575   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHPort
	I0930 21:27:55.558765   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHKeyPath
	I0930 21:27:55.558885   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHUsername
	I0930 21:27:55.559008   80623 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/newest-cni-921796/id_rsa Username:docker}
	I0930 21:27:55.642009   80623 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:27:55.646278   80623 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:27:55.646304   80623 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:27:55.646378   80623 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:27:55.646494   80623 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:27:55.646620   80623 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:27:55.656173   80623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:27:55.682433   80623 start.go:296] duration metric: took 126.710438ms for postStartSetup
	I0930 21:27:55.682493   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetConfigRaw
	I0930 21:27:55.683096   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetIP
	I0930 21:27:55.685987   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:55.686279   80623 main.go:141] libmachine: (newest-cni-921796) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f4:16", ip: ""} in network mk-newest-cni-921796: {Iface:virbr4 ExpiryTime:2024-09-30 22:27:45 +0000 UTC Type:0 Mac:52:54:00:c8:f4:16 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:newest-cni-921796 Clientid:01:52:54:00:c8:f4:16}
	I0930 21:27:55.686310   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined IP address 192.168.72.30 and MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:55.686511   80623 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/config.json ...
	I0930 21:27:55.686699   80623 start.go:128] duration metric: took 24.494535309s to createHost
	I0930 21:27:55.686722   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHHostname
	I0930 21:27:55.689174   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:55.689525   80623 main.go:141] libmachine: (newest-cni-921796) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f4:16", ip: ""} in network mk-newest-cni-921796: {Iface:virbr4 ExpiryTime:2024-09-30 22:27:45 +0000 UTC Type:0 Mac:52:54:00:c8:f4:16 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:newest-cni-921796 Clientid:01:52:54:00:c8:f4:16}
	I0930 21:27:55.689552   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined IP address 192.168.72.30 and MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:55.689834   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHPort
	I0930 21:27:55.690017   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHKeyPath
	I0930 21:27:55.690203   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHKeyPath
	I0930 21:27:55.690336   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHUsername
	I0930 21:27:55.690472   80623 main.go:141] libmachine: Using SSH client type: native
	I0930 21:27:55.690633   80623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0930 21:27:55.690643   80623 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:27:55.796020   80623 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727731675.755183676
	
	I0930 21:27:55.796042   80623 fix.go:216] guest clock: 1727731675.755183676
	I0930 21:27:55.796048   80623 fix.go:229] Guest: 2024-09-30 21:27:55.755183676 +0000 UTC Remote: 2024-09-30 21:27:55.686711169 +0000 UTC m=+24.609426261 (delta=68.472507ms)
	I0930 21:27:55.796065   80623 fix.go:200] guest clock delta is within tolerance: 68.472507ms
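The tolerance check above is plain subtraction of the two timestamps recorded in the log:

    # guest clock (date +%s.%N) minus host-side clock, both captured at 21:27:55
    1727731675.755183676 - 1727731675.686711169 = 0.068472507 s = 68.472507 ms

so the guest clock is about 68 ms ahead of the host, inside the skew the fix.go check treats as acceptable.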
	I0930 21:27:55.796069   80623 start.go:83] releasing machines lock for "newest-cni-921796", held for 24.604018788s
	I0930 21:27:55.796085   80623 main.go:141] libmachine: (newest-cni-921796) Calling .DriverName
	I0930 21:27:55.796351   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetIP
	I0930 21:27:55.799179   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:55.799585   80623 main.go:141] libmachine: (newest-cni-921796) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f4:16", ip: ""} in network mk-newest-cni-921796: {Iface:virbr4 ExpiryTime:2024-09-30 22:27:45 +0000 UTC Type:0 Mac:52:54:00:c8:f4:16 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:newest-cni-921796 Clientid:01:52:54:00:c8:f4:16}
	I0930 21:27:55.799612   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined IP address 192.168.72.30 and MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:55.799772   80623 main.go:141] libmachine: (newest-cni-921796) Calling .DriverName
	I0930 21:27:55.800209   80623 main.go:141] libmachine: (newest-cni-921796) Calling .DriverName
	I0930 21:27:55.800386   80623 main.go:141] libmachine: (newest-cni-921796) Calling .DriverName
	I0930 21:27:55.800477   80623 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:27:55.800518   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHHostname
	I0930 21:27:55.800623   80623 ssh_runner.go:195] Run: cat /version.json
	I0930 21:27:55.800646   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHHostname
	I0930 21:27:55.803470   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:55.803806   80623 main.go:141] libmachine: (newest-cni-921796) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f4:16", ip: ""} in network mk-newest-cni-921796: {Iface:virbr4 ExpiryTime:2024-09-30 22:27:45 +0000 UTC Type:0 Mac:52:54:00:c8:f4:16 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:newest-cni-921796 Clientid:01:52:54:00:c8:f4:16}
	I0930 21:27:55.803836   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined IP address 192.168.72.30 and MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:55.803855   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:55.803937   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHPort
	I0930 21:27:55.804142   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHKeyPath
	I0930 21:27:55.804299   80623 main.go:141] libmachine: (newest-cni-921796) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f4:16", ip: ""} in network mk-newest-cni-921796: {Iface:virbr4 ExpiryTime:2024-09-30 22:27:45 +0000 UTC Type:0 Mac:52:54:00:c8:f4:16 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:newest-cni-921796 Clientid:01:52:54:00:c8:f4:16}
	I0930 21:27:55.804324   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined IP address 192.168.72.30 and MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:55.804350   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHUsername
	I0930 21:27:55.804436   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHPort
	I0930 21:27:55.804515   80623 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/newest-cni-921796/id_rsa Username:docker}
	I0930 21:27:55.804606   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHKeyPath
	I0930 21:27:55.804766   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHUsername
	I0930 21:27:55.804939   80623 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/newest-cni-921796/id_rsa Username:docker}
	I0930 21:27:55.931061   80623 ssh_runner.go:195] Run: systemctl --version
	I0930 21:27:55.936985   80623 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:27:56.099601   80623 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:27:56.105109   80623 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:27:56.105188   80623 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:27:56.121253   80623 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 21:27:56.121282   80623 start.go:495] detecting cgroup driver to use...
	I0930 21:27:56.121347   80623 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:27:56.138139   80623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:27:56.152024   80623 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:27:56.152094   80623 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:27:56.166463   80623 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:27:56.180136   80623 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:27:56.295497   80623 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:27:56.451119   80623 docker.go:233] disabling docker service ...
	I0930 21:27:56.451181   80623 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:27:56.471625   80623 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:27:56.485558   80623 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:27:56.629445   80623 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:27:56.773460   80623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:27:56.786438   80623 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:27:56.805580   80623 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 21:27:56.805665   80623 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:27:56.816517   80623 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:27:56.816592   80623 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:27:56.827842   80623 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:27:56.838908   80623 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:27:56.851640   80623 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:27:56.864651   80623 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:27:56.876770   80623 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:27:56.895715   80623 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
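Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (a reconstruction from the commands, not a capture of the actual file):

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]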
	I0930 21:27:56.906289   80623 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:27:56.915872   80623 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:27:56.915941   80623 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:27:56.929360   80623 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
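The two steps above cover the usual kernel prerequisites for bridged pod networking: net.bridge.bridge-nf-call-iptables only exists once br_netfilter is loaded (hence the status 255 and the modprobe), and IPv4 forwarding is switched on directly through procfs. A hypothetical spot-check on the guest, not part of this run, would be:

    lsmod | grep br_netfilter                  # module loaded
    sysctl net.bridge.bridge-nf-call-iptables  # key exists once the module is in
    cat /proc/sys/net/ipv4/ip_forward          # 1 after the echo above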
	I0930 21:27:56.939368   80623 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:27:57.079948   80623 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 21:27:57.176429   80623 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:27:57.176514   80623 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:27:57.181334   80623 start.go:563] Will wait 60s for crictl version
	I0930 21:27:57.181404   80623 ssh_runner.go:195] Run: which crictl
	I0930 21:27:57.185204   80623 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:27:57.230716   80623 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 21:27:57.230815   80623 ssh_runner.go:195] Run: crio --version
	I0930 21:27:57.258062   80623 ssh_runner.go:195] Run: crio --version
	I0930 21:27:57.290405   80623 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 21:27:57.291590   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetIP
	I0930 21:27:57.294482   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:57.294829   80623 main.go:141] libmachine: (newest-cni-921796) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f4:16", ip: ""} in network mk-newest-cni-921796: {Iface:virbr4 ExpiryTime:2024-09-30 22:27:45 +0000 UTC Type:0 Mac:52:54:00:c8:f4:16 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:newest-cni-921796 Clientid:01:52:54:00:c8:f4:16}
	I0930 21:27:57.294869   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined IP address 192.168.72.30 and MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:27:57.295062   80623 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0930 21:27:57.299686   80623 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:27:57.313447   80623 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0930 21:27:57.314617   80623 kubeadm.go:883] updating cluster {Name:newest-cni-921796 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-921796 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:27:57.314734   80623 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 21:27:57.314802   80623 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:27:57.347665   80623 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 21:27:57.347726   80623 ssh_runner.go:195] Run: which lz4
	I0930 21:27:57.351242   80623 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 21:27:57.354883   80623 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 21:27:57.354909   80623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 21:27:58.655017   80623 crio.go:462] duration metric: took 1.303799554s to copy over tarball
	I0930 21:27:58.655098   80623 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 21:28:00.704396   80623 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.049257376s)
	I0930 21:28:00.704425   80623 crio.go:469] duration metric: took 2.049381368s to extract the tarball
	I0930 21:28:00.704435   80623 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 21:28:00.742807   80623 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:28:00.791592   80623 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 21:28:00.791616   80623 cache_images.go:84] Images are preloaded, skipping loading
	I0930 21:28:00.791623   80623 kubeadm.go:934] updating node { 192.168.72.30 8443 v1.31.1 crio true true} ...
	I0930 21:28:00.791724   80623 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-921796 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-921796 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 21:28:00.791808   80623 ssh_runner.go:195] Run: crio config
	I0930 21:28:00.842797   80623 cni.go:84] Creating CNI manager for ""
	I0930 21:28:00.842826   80623 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:28:00.842839   80623 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0930 21:28:00.842870   80623 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.30 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-921796 NodeName:newest-cni-921796 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 21:28:00.843022   80623 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-921796"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.30
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.30"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
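The YAML above is the config that gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later copied to /var/tmp/minikube/kubeadm.yaml before kubeadm init runs; if it needs to be inspected after the fact, something like the following (hypothetical, using the profile name from this run) works from the host:

    minikube -p newest-cni-921796 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml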
	I0930 21:28:00.843081   80623 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 21:28:00.853652   80623 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:28:00.853725   80623 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:28:00.863673   80623 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I0930 21:28:00.882753   80623 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:28:00.900824   80623 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2282 bytes)
	I0930 21:28:00.917663   80623 ssh_runner.go:195] Run: grep 192.168.72.30	control-plane.minikube.internal$ /etc/hosts
	I0930 21:28:00.921501   80623 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:28:00.934434   80623 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:28:01.056105   80623 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:28:01.074010   80623 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796 for IP: 192.168.72.30
	I0930 21:28:01.074038   80623 certs.go:194] generating shared ca certs ...
	I0930 21:28:01.074059   80623 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:28:01.074267   80623 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:28:01.074326   80623 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:28:01.074340   80623 certs.go:256] generating profile certs ...
	I0930 21:28:01.074426   80623 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/client.key
	I0930 21:28:01.074456   80623 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/client.crt with IP's: []
	I0930 21:28:01.222966   80623 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/client.crt ...
	I0930 21:28:01.222995   80623 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/client.crt: {Name:mk993e14e375ea04dc965f58435e8e58b25fa6aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:28:01.223174   80623 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/client.key ...
	I0930 21:28:01.223185   80623 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/client.key: {Name:mk4b1e48f56cdb43a1a2066058ca7292885daaa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:28:01.223262   80623 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/apiserver.key.48d59c2c
	I0930 21:28:01.223277   80623 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/apiserver.crt.48d59c2c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.30]
	I0930 21:28:01.339119   80623 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/apiserver.crt.48d59c2c ...
	I0930 21:28:01.339159   80623 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/apiserver.crt.48d59c2c: {Name:mk08689d81555bb72b5f482a247f9e205f61b83d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:28:01.339336   80623 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/apiserver.key.48d59c2c ...
	I0930 21:28:01.339352   80623 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/apiserver.key.48d59c2c: {Name:mk6be76c7c9dcad7460a98f96b062c3ca1234897 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:28:01.339463   80623 certs.go:381] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/apiserver.crt.48d59c2c -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/apiserver.crt
	I0930 21:28:01.339598   80623 certs.go:385] copying /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/apiserver.key.48d59c2c -> /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/apiserver.key
	I0930 21:28:01.339689   80623 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/proxy-client.key
	I0930 21:28:01.339708   80623 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/proxy-client.crt with IP's: []
	I0930 21:28:01.456046   80623 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/proxy-client.crt ...
	I0930 21:28:01.456075   80623 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/proxy-client.crt: {Name:mkf920eb1791c51eee817e234aed6d33104fd50e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:28:01.456227   80623 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/proxy-client.key ...
	I0930 21:28:01.456239   80623 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/proxy-client.key: {Name:mk76f0d02a34c67ace79fba595ab9926abe2d1e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:28:01.456397   80623 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:28:01.456430   80623 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:28:01.456441   80623 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:28:01.456465   80623 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:28:01.456486   80623 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:28:01.456508   80623 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:28:01.456543   80623 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:28:01.457131   80623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:28:01.483428   80623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:28:01.507247   80623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:28:01.532717   80623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:28:01.555907   80623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0930 21:28:01.581226   80623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 21:28:01.605043   80623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:28:01.628879   80623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 21:28:01.652615   80623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:28:01.676492   80623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:28:01.702030   80623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:28:01.726806   80623 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
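With the profile certificates copied into /var/lib/minikube/certs, the SANs minted earlier for the apiserver cert (10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.72.30) could be double-checked on the guest with a standard openssl query, for example:

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'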
	I0930 21:28:01.746376   80623 ssh_runner.go:195] Run: openssl version
	I0930 21:28:01.753256   80623 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:28:01.765191   80623 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:28:01.769960   80623 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:28:01.770030   80623 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:28:01.776131   80623 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 21:28:01.787065   80623 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:28:01.797967   80623 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:28:01.802356   80623 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:28:01.802428   80623 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:28:01.808084   80623 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 21:28:01.819494   80623 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:28:01.830832   80623 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:28:01.836292   80623 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:28:01.836367   80623 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:28:01.842097   80623 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
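The ln -fs steps above follow the usual OpenSSL hashed-directory convention: each CA in /etc/ssl/certs is reachable through a symlink named <subject-hash>.0, and the subject hash is what the preceding openssl x509 -hash -noout calls print, which is presumably where the link names b5213941.0, 51391683.0 and 3ec20f2e.0 come from. A generic, hypothetical form of the same step:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${hash}.0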
	I0930 21:28:01.856478   80623 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:28:01.863293   80623 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 21:28:01.863355   80623 kubeadm.go:392] StartCluster: {Name:newest-cni-921796 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-921796 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:28:01.863445   80623 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:28:01.863502   80623 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:28:01.907463   80623 cri.go:89] found id: ""
	I0930 21:28:01.907557   80623 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:28:01.917961   80623 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:28:01.934368   80623 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:28:01.944789   80623 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:28:01.944848   80623 kubeadm.go:157] found existing configuration files:
	
	I0930 21:28:01.944916   80623 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:28:01.954820   80623 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:28:01.954877   80623 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:28:01.964816   80623 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:28:01.974417   80623 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:28:01.974503   80623 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:28:01.984075   80623 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:28:01.993515   80623 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:28:01.993571   80623 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:28:02.003521   80623 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:28:02.013512   80623 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:28:02.013616   80623 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:28:02.024857   80623 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 21:28:02.127375   80623 kubeadm.go:310] W0930 21:28:02.085769     824 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 21:28:02.128383   80623 kubeadm.go:310] W0930 21:28:02.086969     824 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 21:28:02.236397   80623 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 21:28:11.645548   80623 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 21:28:11.645618   80623 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 21:28:11.645708   80623 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 21:28:11.645792   80623 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 21:28:11.645892   80623 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 21:28:11.645991   80623 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 21:28:11.647495   80623 out.go:235]   - Generating certificates and keys ...
	I0930 21:28:11.647610   80623 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 21:28:11.647712   80623 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 21:28:11.647808   80623 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0930 21:28:11.647884   80623 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0930 21:28:11.647966   80623 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0930 21:28:11.648014   80623 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0930 21:28:11.648061   80623 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0930 21:28:11.648161   80623 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-921796] and IPs [192.168.72.30 127.0.0.1 ::1]
	I0930 21:28:11.648223   80623 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0930 21:28:11.648410   80623 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-921796] and IPs [192.168.72.30 127.0.0.1 ::1]
	I0930 21:28:11.648512   80623 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0930 21:28:11.648592   80623 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0930 21:28:11.648651   80623 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0930 21:28:11.648735   80623 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 21:28:11.648812   80623 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 21:28:11.648902   80623 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 21:28:11.648977   80623 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 21:28:11.649067   80623 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 21:28:11.649151   80623 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 21:28:11.649262   80623 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 21:28:11.649358   80623 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 21:28:11.650798   80623 out.go:235]   - Booting up control plane ...
	I0930 21:28:11.650909   80623 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 21:28:11.650982   80623 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 21:28:11.651057   80623 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 21:28:11.651153   80623 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 21:28:11.651223   80623 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 21:28:11.651264   80623 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 21:28:11.651390   80623 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 21:28:11.651488   80623 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 21:28:11.651590   80623 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.60734ms
	I0930 21:28:11.651679   80623 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 21:28:11.651733   80623 kubeadm.go:310] [api-check] The API server is healthy after 5.002069528s
	I0930 21:28:11.651872   80623 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 21:28:11.652039   80623 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 21:28:11.652121   80623 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 21:28:11.652329   80623 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-921796 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 21:28:11.652406   80623 kubeadm.go:310] [bootstrap-token] Using token: cfrns4.fwntudw4pheayt6k
	I0930 21:28:11.653822   80623 out.go:235]   - Configuring RBAC rules ...
	I0930 21:28:11.653935   80623 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 21:28:11.654017   80623 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 21:28:11.654139   80623 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 21:28:11.654238   80623 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 21:28:11.654386   80623 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 21:28:11.654472   80623 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 21:28:11.654573   80623 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 21:28:11.654622   80623 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 21:28:11.654683   80623 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 21:28:11.654690   80623 kubeadm.go:310] 
	I0930 21:28:11.654768   80623 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 21:28:11.654776   80623 kubeadm.go:310] 
	I0930 21:28:11.654847   80623 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 21:28:11.654854   80623 kubeadm.go:310] 
	I0930 21:28:11.654888   80623 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 21:28:11.654975   80623 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 21:28:11.655051   80623 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 21:28:11.655060   80623 kubeadm.go:310] 
	I0930 21:28:11.655109   80623 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 21:28:11.655115   80623 kubeadm.go:310] 
	I0930 21:28:11.655155   80623 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 21:28:11.655161   80623 kubeadm.go:310] 
	I0930 21:28:11.655204   80623 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 21:28:11.655270   80623 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 21:28:11.655343   80623 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 21:28:11.655362   80623 kubeadm.go:310] 
	I0930 21:28:11.655446   80623 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 21:28:11.655569   80623 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 21:28:11.655585   80623 kubeadm.go:310] 
	I0930 21:28:11.655685   80623 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cfrns4.fwntudw4pheayt6k \
	I0930 21:28:11.655782   80623 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a \
	I0930 21:28:11.655819   80623 kubeadm.go:310] 	--control-plane 
	I0930 21:28:11.655828   80623 kubeadm.go:310] 
	I0930 21:28:11.655929   80623 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 21:28:11.655943   80623 kubeadm.go:310] 
	I0930 21:28:11.656058   80623 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cfrns4.fwntudw4pheayt6k \
	I0930 21:28:11.656164   80623 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a 
	I0930 21:28:11.656190   80623 cni.go:84] Creating CNI manager for ""
	I0930 21:28:11.656199   80623 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:28:11.657676   80623 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 21:28:11.658763   80623 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:28:11.671707   80623 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
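The 496-byte 1-k8s.conflist written above is not echoed in the log. As a rough sketch only (assuming the standard bridge and host-local CNI plugins and the pod CIDR used in this run, not the literal file contents), a bridge conflist of this kind looks like:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.42.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }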
	I0930 21:28:11.693114   80623 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 21:28:11.693194   80623 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:28:11.693216   80623 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-921796 minikube.k8s.io/updated_at=2024_09_30T21_28_11_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022 minikube.k8s.io/name=newest-cni-921796 minikube.k8s.io/primary=true
	I0930 21:28:11.722398   80623 ops.go:34] apiserver oom_adj: -16
	I0930 21:28:11.903248   80623 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:28:12.404065   80623 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:28:12.903716   80623 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:28:13.404022   80623 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:28:13.903343   80623 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:28:14.404264   80623 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:28:14.904074   80623 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:28:15.403742   80623 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:28:15.903691   80623 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:28:16.024145   80623 kubeadm.go:1113] duration metric: took 4.331012752s to wait for elevateKubeSystemPrivileges
	I0930 21:28:16.024194   80623 kubeadm.go:394] duration metric: took 14.160840721s to StartCluster
	I0930 21:28:16.024219   80623 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:28:16.024312   80623 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:28:16.027241   80623 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:28:16.027610   80623 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0930 21:28:16.027611   80623 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 21:28:16.027694   80623 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 21:28:16.027809   80623 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-921796"
	I0930 21:28:16.027815   80623 config.go:182] Loaded profile config "newest-cni-921796": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:28:16.027837   80623 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-921796"
	I0930 21:28:16.027871   80623 host.go:66] Checking if "newest-cni-921796" exists ...
	I0930 21:28:16.027869   80623 addons.go:69] Setting default-storageclass=true in profile "newest-cni-921796"
	I0930 21:28:16.027922   80623 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-921796"
	I0930 21:28:16.028348   80623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:28:16.028395   80623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:28:16.028429   80623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:28:16.028476   80623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:28:16.029510   80623 out.go:177] * Verifying Kubernetes components...
	I0930 21:28:16.031072   80623 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:28:16.045140   80623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41233
	I0930 21:28:16.045671   80623 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:28:16.046283   80623 main.go:141] libmachine: Using API Version  1
	I0930 21:28:16.046332   80623 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:28:16.046760   80623 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:28:16.047308   80623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:28:16.047341   80623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:28:16.052421   80623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40619
	I0930 21:28:16.052915   80623 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:28:16.053592   80623 main.go:141] libmachine: Using API Version  1
	I0930 21:28:16.053619   80623 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:28:16.053978   80623 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:28:16.054208   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetState
	I0930 21:28:16.058527   80623 addons.go:234] Setting addon default-storageclass=true in "newest-cni-921796"
	I0930 21:28:16.058562   80623 host.go:66] Checking if "newest-cni-921796" exists ...
	I0930 21:28:16.058859   80623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:28:16.058875   80623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:28:16.065622   80623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46737
	I0930 21:28:16.066070   80623 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:28:16.066628   80623 main.go:141] libmachine: Using API Version  1
	I0930 21:28:16.066644   80623 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:28:16.066916   80623 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:28:16.067121   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetState
	I0930 21:28:16.068816   80623 main.go:141] libmachine: (newest-cni-921796) Calling .DriverName
	I0930 21:28:16.070850   80623 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:28:16.072155   80623 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:28:16.072175   80623 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 21:28:16.072196   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHHostname
	I0930 21:28:16.075735   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:16.076265   80623 main.go:141] libmachine: (newest-cni-921796) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f4:16", ip: ""} in network mk-newest-cni-921796: {Iface:virbr4 ExpiryTime:2024-09-30 22:27:45 +0000 UTC Type:0 Mac:52:54:00:c8:f4:16 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:newest-cni-921796 Clientid:01:52:54:00:c8:f4:16}
	I0930 21:28:16.076296   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined IP address 192.168.72.30 and MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:16.076569   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHPort
	I0930 21:28:16.076763   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHKeyPath
	I0930 21:28:16.076932   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHUsername
	I0930 21:28:16.077086   80623 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/newest-cni-921796/id_rsa Username:docker}
	I0930 21:28:16.077673   80623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42041
	I0930 21:28:16.078048   80623 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:28:16.078540   80623 main.go:141] libmachine: Using API Version  1
	I0930 21:28:16.078552   80623 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:28:16.078855   80623 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:28:16.079296   80623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:28:16.079332   80623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:28:16.094499   80623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36997
	I0930 21:28:16.094966   80623 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:28:16.095500   80623 main.go:141] libmachine: Using API Version  1
	I0930 21:28:16.095513   80623 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:28:16.096315   80623 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:28:16.096519   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetState
	I0930 21:28:16.098280   80623 main.go:141] libmachine: (newest-cni-921796) Calling .DriverName
	I0930 21:28:16.098510   80623 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 21:28:16.098524   80623 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 21:28:16.098540   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHHostname
	I0930 21:28:16.101213   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:16.101712   80623 main.go:141] libmachine: (newest-cni-921796) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f4:16", ip: ""} in network mk-newest-cni-921796: {Iface:virbr4 ExpiryTime:2024-09-30 22:27:45 +0000 UTC Type:0 Mac:52:54:00:c8:f4:16 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:newest-cni-921796 Clientid:01:52:54:00:c8:f4:16}
	I0930 21:28:16.101801   80623 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined IP address 192.168.72.30 and MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:16.102007   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHPort
	I0930 21:28:16.102176   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHKeyPath
	I0930 21:28:16.102338   80623 main.go:141] libmachine: (newest-cni-921796) Calling .GetSSHUsername
	I0930 21:28:16.102465   80623 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/newest-cni-921796/id_rsa Username:docker}
	I0930 21:28:16.405675   80623 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 21:28:16.407014   80623 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:28:16.407090   80623 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0930 21:28:16.457569   80623 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:28:16.787595   80623 main.go:141] libmachine: Making call to close driver server
	I0930 21:28:16.787627   80623 main.go:141] libmachine: (newest-cni-921796) Calling .Close
	I0930 21:28:16.787921   80623 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:28:16.787936   80623 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:28:16.787944   80623 main.go:141] libmachine: Making call to close driver server
	I0930 21:28:16.787951   80623 main.go:141] libmachine: (newest-cni-921796) Calling .Close
	I0930 21:28:16.787951   80623 main.go:141] libmachine: (newest-cni-921796) DBG | Closing plugin on server side
	I0930 21:28:16.788178   80623 main.go:141] libmachine: (newest-cni-921796) DBG | Closing plugin on server side
	I0930 21:28:16.788225   80623 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:28:16.788250   80623 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:28:16.789557   80623 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:28:16.789633   80623 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:28:16.808199   80623 main.go:141] libmachine: Making call to close driver server
	I0930 21:28:16.808229   80623 main.go:141] libmachine: (newest-cni-921796) Calling .Close
	I0930 21:28:16.808546   80623 main.go:141] libmachine: (newest-cni-921796) DBG | Closing plugin on server side
	I0930 21:28:16.808588   80623 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:28:16.808606   80623 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:28:17.190896   80623 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0930 21:28:17.672921   80623 api_server.go:72] duration metric: took 1.645271952s to wait for apiserver process to appear ...
	I0930 21:28:17.672949   80623 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:28:17.672988   80623 api_server.go:253] Checking apiserver healthz at https://192.168.72.30:8443/healthz ...
	I0930 21:28:17.673017   80623 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.21539436s)
	I0930 21:28:17.673053   80623 main.go:141] libmachine: Making call to close driver server
	I0930 21:28:17.673071   80623 main.go:141] libmachine: (newest-cni-921796) Calling .Close
	I0930 21:28:17.673372   80623 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:28:17.673391   80623 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:28:17.673400   80623 main.go:141] libmachine: Making call to close driver server
	I0930 21:28:17.673407   80623 main.go:141] libmachine: (newest-cni-921796) Calling .Close
	I0930 21:28:17.673674   80623 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:28:17.673693   80623 main.go:141] libmachine: (newest-cni-921796) DBG | Closing plugin on server side
	I0930 21:28:17.673700   80623 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:28:17.676162   80623 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0930 21:28:17.677245   80623 addons.go:510] duration metric: took 1.649553627s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0930 21:28:17.680346   80623 api_server.go:279] https://192.168.72.30:8443/healthz returned 200:
	ok
	I0930 21:28:17.681865   80623 api_server.go:141] control plane version: v1.31.1
	I0930 21:28:17.681886   80623 api_server.go:131] duration metric: took 8.929751ms to wait for apiserver health ...
	I0930 21:28:17.681897   80623 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:28:17.696984   80623 system_pods.go:59] 8 kube-system pods found
	I0930 21:28:17.697026   80623 system_pods.go:61] "coredns-7c65d6cfc9-v7xjx" [7a9c236a-54dd-48a7-9a45-837cd08756f0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:28:17.697038   80623 system_pods.go:61] "coredns-7c65d6cfc9-vvw5z" [d6ef5e33-3a70-4a76-b999-e409b9ff1d1f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:28:17.697047   80623 system_pods.go:61] "etcd-newest-cni-921796" [749b26b7-b5ee-4e26-8a38-92893d70e185] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0930 21:28:17.697054   80623 system_pods.go:61] "kube-apiserver-newest-cni-921796" [1f65525e-2867-4f76-8c42-6cf509a1718a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0930 21:28:17.697061   80623 system_pods.go:61] "kube-controller-manager-newest-cni-921796" [19136035-d15a-4fc6-87e6-a106b76f4764] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0930 21:28:17.697066   80623 system_pods.go:61] "kube-proxy-8zmnh" [0121eff7-7969-4862-aa02-675a45674507] Running
	I0930 21:28:17.697073   80623 system_pods.go:61] "kube-scheduler-newest-cni-921796" [30580fc7-0b85-431f-80f5-b2951dcc2544] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0930 21:28:17.697076   80623 system_pods.go:61] "storage-provisioner" [2ab50722-d2ab-457e-809f-c8517d3888b8] Pending
	I0930 21:28:17.697082   80623 system_pods.go:74] duration metric: took 15.179564ms to wait for pod list to return data ...
	I0930 21:28:17.697089   80623 default_sa.go:34] waiting for default service account to be created ...
	I0930 21:28:17.705071   80623 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-921796" context rescaled to 1 replicas
	I0930 21:28:17.706333   80623 default_sa.go:45] found service account: "default"
	I0930 21:28:17.706352   80623 default_sa.go:55] duration metric: took 9.258045ms for default service account to be created ...
	I0930 21:28:17.706362   80623 kubeadm.go:582] duration metric: took 1.678719779s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0930 21:28:17.706375   80623 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:28:17.717472   80623 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:28:17.717498   80623 node_conditions.go:123] node cpu capacity is 2
	I0930 21:28:17.717508   80623 node_conditions.go:105] duration metric: took 11.128938ms to run NodePressure ...
	I0930 21:28:17.717518   80623 start.go:241] waiting for startup goroutines ...
	I0930 21:28:17.717525   80623 start.go:246] waiting for cluster config update ...
	I0930 21:28:17.717535   80623 start.go:255] writing updated cluster config ...
	I0930 21:28:17.717753   80623 ssh_runner.go:195] Run: rm -f paused
	I0930 21:28:17.780338   80623 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 21:28:17.782554   80623 out.go:177] * Done! kubectl is now configured to use "newest-cni-921796" cluster and "default" namespace by default
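	For context on the healthz wait logged around 21:28:17 above (api_server.go checking https://192.168.72.30:8443/healthz until it returns 200), the following is a minimal, hypothetical Go sketch of that kind of readiness poll. It is not minikube's actual implementation; the URL, timeout, and helper name are illustrative only.

	// waitforhealthz_sketch.go - hypothetical readiness poll, not minikube source.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the given /healthz URL until it returns HTTP 200
	// or the timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		// During bootstrap the apiserver serves a self-signed certificate,
		// so this sketch skips certificate verification.
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: control plane is serving
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		// Illustrative values matching the endpoint seen in the log above.
		if err := waitForHealthz("https://192.168.72.30:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthz ok")
	}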
	
	
	==> CRI-O <==
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.417345775Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731702417323545,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15eb6fdf-4f9f-4c2b-b81d-46b6a26aff89 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.417946784Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d1ae60f-9d8b-47f1-9f93-c4b45265e231 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.418085099Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d1ae60f-9d8b-47f1-9f93-c4b45265e231 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.418380067Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55,PodSandboxId:6aa9b9bcc891891defe82eace573c379d96a428b175db1a928e9b815bb1b0773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727730504093374150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01617edf-b831-48d3-9002-279b64f6389c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e4511e8755902041ec728c39a53350645fb5e31ed150b5935b3ee003b41f711,PodSandboxId:8147142912a2d88a8228bd307f69e3a6540c21d00f4f9618062853f36290d473,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727730484441413665,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f0eedc3-2026-4ba3-ac8e-784be7e51dbf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7,PodSandboxId:33a1a02b5819f89b582185170a53eab5bde7dfdf3a0cb0ea354e7b1a74d9111f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730480935801356,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jg8ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46ba2867-485a-4b67-af4b-4de2c607d172,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e,PodSandboxId:6aa9b9bcc891891defe82eace573c379d96a428b175db1a928e9b815bb1b0773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727730473266262316,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
1617edf-b831-48d3-9002-279b64f6389c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f,PodSandboxId:54da58cb4856ec108353c10a5a6f612ee192711d6459e265c06fab8a90da9dba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727730473268426087,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-klcv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 133bcd7f-667d-4969-b063-d33e2c8eed
0f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122,PodSandboxId:8fdadebc4632316c6851d6142b4a2951f4e762607a03802501113b27fb76d466,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727730468494614060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 909537799d377a7b5a56a4a5d684c97d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf,PodSandboxId:a96ee404058b8e9e5bb32c16fe21830aad9d481ffddd18dd8e660f7b77794911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727730468514319531,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4bbd39434baedeb326d3b6c5f0f
b7a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c,PodSandboxId:2d56c1daebf60b9201cbc515f8e1565fbdfc630ee552a17c531c57a3b85ad1d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727730468486940473,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf702a6b765256da0a8cd88a48f902d,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c,PodSandboxId:cee67c278b3f721f7d21238705e692223dd134b5ab39c248fc1ee94b239f3c89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727730468447781115,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9710f8be49235e7e38d661128fa5cb3a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d1ae60f-9d8b-47f1-9f93-c4b45265e231 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.453923340Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=02eda0b1-006c-451b-b499-979ffa8ba869 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.454096046Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=02eda0b1-006c-451b-b499-979ffa8ba869 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.455165657Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d70d86fb-8138-46c1-879e-d9b9fd5def90 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.455604290Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731702455582234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d70d86fb-8138-46c1-879e-d9b9fd5def90 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.456293869Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec2d73c0-de46-4f71-b892-835b22fba8f6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.456346273Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec2d73c0-de46-4f71-b892-835b22fba8f6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.456543953Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55,PodSandboxId:6aa9b9bcc891891defe82eace573c379d96a428b175db1a928e9b815bb1b0773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727730504093374150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01617edf-b831-48d3-9002-279b64f6389c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e4511e8755902041ec728c39a53350645fb5e31ed150b5935b3ee003b41f711,PodSandboxId:8147142912a2d88a8228bd307f69e3a6540c21d00f4f9618062853f36290d473,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727730484441413665,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f0eedc3-2026-4ba3-ac8e-784be7e51dbf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7,PodSandboxId:33a1a02b5819f89b582185170a53eab5bde7dfdf3a0cb0ea354e7b1a74d9111f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730480935801356,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jg8ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46ba2867-485a-4b67-af4b-4de2c607d172,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e,PodSandboxId:6aa9b9bcc891891defe82eace573c379d96a428b175db1a928e9b815bb1b0773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727730473266262316,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
1617edf-b831-48d3-9002-279b64f6389c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f,PodSandboxId:54da58cb4856ec108353c10a5a6f612ee192711d6459e265c06fab8a90da9dba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727730473268426087,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-klcv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 133bcd7f-667d-4969-b063-d33e2c8eed
0f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122,PodSandboxId:8fdadebc4632316c6851d6142b4a2951f4e762607a03802501113b27fb76d466,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727730468494614060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 909537799d377a7b5a56a4a5d684c97d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf,PodSandboxId:a96ee404058b8e9e5bb32c16fe21830aad9d481ffddd18dd8e660f7b77794911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727730468514319531,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4bbd39434baedeb326d3b6c5f0f
b7a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c,PodSandboxId:2d56c1daebf60b9201cbc515f8e1565fbdfc630ee552a17c531c57a3b85ad1d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727730468486940473,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf702a6b765256da0a8cd88a48f902d,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c,PodSandboxId:cee67c278b3f721f7d21238705e692223dd134b5ab39c248fc1ee94b239f3c89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727730468447781115,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9710f8be49235e7e38d661128fa5cb3a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec2d73c0-de46-4f71-b892-835b22fba8f6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.497561468Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e36a2736-f70b-49b3-a6ac-1629b07217ef name=/runtime.v1.RuntimeService/Version
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.497676503Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e36a2736-f70b-49b3-a6ac-1629b07217ef name=/runtime.v1.RuntimeService/Version
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.499238614Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e2fba090-ab51-4ea4-85ad-877da6e49df9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.499773112Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731702499739874,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e2fba090-ab51-4ea4-85ad-877da6e49df9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.500495898Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14bf7d77-4a60-425c-af42-aead43a8a1c7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.500585146Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14bf7d77-4a60-425c-af42-aead43a8a1c7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.500859974Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55,PodSandboxId:6aa9b9bcc891891defe82eace573c379d96a428b175db1a928e9b815bb1b0773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727730504093374150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01617edf-b831-48d3-9002-279b64f6389c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e4511e8755902041ec728c39a53350645fb5e31ed150b5935b3ee003b41f711,PodSandboxId:8147142912a2d88a8228bd307f69e3a6540c21d00f4f9618062853f36290d473,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727730484441413665,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f0eedc3-2026-4ba3-ac8e-784be7e51dbf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7,PodSandboxId:33a1a02b5819f89b582185170a53eab5bde7dfdf3a0cb0ea354e7b1a74d9111f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730480935801356,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jg8ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46ba2867-485a-4b67-af4b-4de2c607d172,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e,PodSandboxId:6aa9b9bcc891891defe82eace573c379d96a428b175db1a928e9b815bb1b0773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727730473266262316,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
1617edf-b831-48d3-9002-279b64f6389c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f,PodSandboxId:54da58cb4856ec108353c10a5a6f612ee192711d6459e265c06fab8a90da9dba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727730473268426087,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-klcv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 133bcd7f-667d-4969-b063-d33e2c8eed
0f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122,PodSandboxId:8fdadebc4632316c6851d6142b4a2951f4e762607a03802501113b27fb76d466,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727730468494614060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 909537799d377a7b5a56a4a5d684c97d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf,PodSandboxId:a96ee404058b8e9e5bb32c16fe21830aad9d481ffddd18dd8e660f7b77794911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727730468514319531,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4bbd39434baedeb326d3b6c5f0f
b7a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c,PodSandboxId:2d56c1daebf60b9201cbc515f8e1565fbdfc630ee552a17c531c57a3b85ad1d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727730468486940473,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf702a6b765256da0a8cd88a48f902d,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c,PodSandboxId:cee67c278b3f721f7d21238705e692223dd134b5ab39c248fc1ee94b239f3c89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727730468447781115,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9710f8be49235e7e38d661128fa5cb3a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=14bf7d77-4a60-425c-af42-aead43a8a1c7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.546517474Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81affc31-a320-41f6-8a2f-8f561583fd26 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.546643739Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81affc31-a320-41f6-8a2f-8f561583fd26 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.548186288Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e990419c-53cf-4eae-bd88-08021774d212 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.548734414Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731702548701651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e990419c-53cf-4eae-bd88-08021774d212 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.549515827Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3dac37cb-6de9-4691-88cd-386be14a6ce5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.549607796Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3dac37cb-6de9-4691-88cd-386be14a6ce5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:22 no-preload-997816 crio[707]: time="2024-09-30 21:28:22.549875009Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55,PodSandboxId:6aa9b9bcc891891defe82eace573c379d96a428b175db1a928e9b815bb1b0773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727730504093374150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01617edf-b831-48d3-9002-279b64f6389c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e4511e8755902041ec728c39a53350645fb5e31ed150b5935b3ee003b41f711,PodSandboxId:8147142912a2d88a8228bd307f69e3a6540c21d00f4f9618062853f36290d473,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727730484441413665,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f0eedc3-2026-4ba3-ac8e-784be7e51dbf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7,PodSandboxId:33a1a02b5819f89b582185170a53eab5bde7dfdf3a0cb0ea354e7b1a74d9111f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730480935801356,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jg8ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46ba2867-485a-4b67-af4b-4de2c607d172,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e,PodSandboxId:6aa9b9bcc891891defe82eace573c379d96a428b175db1a928e9b815bb1b0773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727730473266262316,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
1617edf-b831-48d3-9002-279b64f6389c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f,PodSandboxId:54da58cb4856ec108353c10a5a6f612ee192711d6459e265c06fab8a90da9dba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727730473268426087,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-klcv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 133bcd7f-667d-4969-b063-d33e2c8eed
0f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122,PodSandboxId:8fdadebc4632316c6851d6142b4a2951f4e762607a03802501113b27fb76d466,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727730468494614060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 909537799d377a7b5a56a4a5d684c97d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf,PodSandboxId:a96ee404058b8e9e5bb32c16fe21830aad9d481ffddd18dd8e660f7b77794911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727730468514319531,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4bbd39434baedeb326d3b6c5f0f
b7a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c,PodSandboxId:2d56c1daebf60b9201cbc515f8e1565fbdfc630ee552a17c531c57a3b85ad1d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727730468486940473,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf702a6b765256da0a8cd88a48f902d,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c,PodSandboxId:cee67c278b3f721f7d21238705e692223dd134b5ab39c248fc1ee94b239f3c89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727730468447781115,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-997816,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9710f8be49235e7e38d661128fa5cb3a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3dac37cb-6de9-4691-88cd-386be14a6ce5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6dcf5ceb365ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       3                   6aa9b9bcc8918       storage-provisioner
	3e4511e875590       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   8147142912a2d       busybox
	d730f13030b2a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      20 minutes ago      Running             coredns                   1                   33a1a02b5819f       coredns-7c65d6cfc9-jg8ph
	a5ce5450390e9       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      20 minutes ago      Running             kube-proxy                1                   54da58cb4856e       kube-proxy-klcv8
	298410b231e99       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       2                   6aa9b9bcc8918       storage-provisioner
	1970803994e16       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      20 minutes ago      Running             kube-controller-manager   1                   a96ee404058b8       kube-controller-manager-no-preload-997816
	249f183de7189       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      20 minutes ago      Running             kube-apiserver            1                   8fdadebc46323       kube-apiserver-no-preload-997816
	e7334f6f13787       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      20 minutes ago      Running             etcd                      1                   2d56c1daebf60       etcd-no-preload-997816
	438729352d121       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      20 minutes ago      Running             kube-scheduler            1                   cee67c278b3f7       kube-scheduler-no-preload-997816
	
	
	==> coredns [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:53025 - 44760 "HINFO IN 5467919944529872735.4377248471549316289. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012756936s
	
	
	==> describe nodes <==
	Name:               no-preload-997816
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-997816
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=no-preload-997816
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T20_59_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:59:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-997816
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 21:28:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 21:23:40 +0000   Mon, 30 Sep 2024 20:59:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 21:23:40 +0000   Mon, 30 Sep 2024 20:59:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 21:23:40 +0000   Mon, 30 Sep 2024 20:59:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 21:23:40 +0000   Mon, 30 Sep 2024 21:08:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.93
	  Hostname:    no-preload-997816
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f572d23b1fe74d57b0f24d55888a67b9
	  System UUID:                f572d23b-1fe7-4d57-b0f2-4d55888a67b9
	  Boot ID:                    ebbcf1ad-afb4-49b6-ac01-3dbca546db82
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-jg8ph                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-997816                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-no-preload-997816             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-no-preload-997816    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-klcv8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-997816             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-c2wpn              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     29m                kubelet          Node no-preload-997816 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node no-preload-997816 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node no-preload-997816 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                29m                kubelet          Node no-preload-997816 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-997816 event: Registered Node no-preload-997816 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node no-preload-997816 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node no-preload-997816 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node no-preload-997816 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node no-preload-997816 event: Registered Node no-preload-997816 in Controller
	
	
	==> dmesg <==
	[Sep30 21:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051012] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036995] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.763788] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.941108] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.543005] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.202530] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.057053] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054445] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.180442] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.118354] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.298091] systemd-fstab-generator[698]: Ignoring "noauto" option for root device
	[ +15.219524] systemd-fstab-generator[1236]: Ignoring "noauto" option for root device
	[  +0.062318] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.881800] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[  +5.252218] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.851869] systemd-fstab-generator[1985]: Ignoring "noauto" option for root device
	[  +3.208937] kauditd_printk_skb: 61 callbacks suppressed
	[Sep30 21:08] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c] <==
	{"level":"warn","ts":"2024-09-30T21:08:39.447556Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T21:08:38.954951Z","time spent":"492.598285ms","remote":"127.0.0.1:47654","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4361,"request content":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-c2wpn\" "}
	{"level":"warn","ts":"2024-09-30T21:08:39.447566Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"465.383363ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T21:08:39.447983Z","caller":"traceutil/trace.go:171","msg":"trace[1759551358] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:620; }","duration":"465.799156ms","start":"2024-09-30T21:08:38.982176Z","end":"2024-09-30T21:08:39.447975Z","steps":["trace[1759551358] 'agreement among raft nodes before linearized reading'  (duration: 465.373988ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T21:17:50.750354Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":839}
	{"level":"info","ts":"2024-09-30T21:17:50.762227Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":839,"took":"11.016676ms","hash":3124043443,"current-db-size-bytes":2764800,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2764800,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-09-30T21:17:50.762359Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3124043443,"revision":839,"compact-revision":-1}
	{"level":"info","ts":"2024-09-30T21:22:50.757470Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1082}
	{"level":"info","ts":"2024-09-30T21:22:50.762148Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1082,"took":"4.395391ms","hash":484073405,"current-db-size-bytes":2764800,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1679360,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-09-30T21:22:50.762201Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":484073405,"revision":1082,"compact-revision":839}
	{"level":"info","ts":"2024-09-30T21:27:50.769877Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1325}
	{"level":"info","ts":"2024-09-30T21:27:50.774924Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1325,"took":"4.28939ms","hash":2597520281,"current-db-size-bytes":2764800,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1613824,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-30T21:27:50.775080Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2597520281,"revision":1325,"compact-revision":1082}
	{"level":"warn","ts":"2024-09-30T21:28:04.176808Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.127219ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T21:28:04.176921Z","caller":"traceutil/trace.go:171","msg":"trace[152816960] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1578; }","duration":"195.330542ms","start":"2024-09-30T21:28:03.981565Z","end":"2024-09-30T21:28:04.176896Z","steps":["trace[152816960] 'range keys from in-memory index tree'  (duration: 195.103066ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T21:28:04.177346Z","caller":"traceutil/trace.go:171","msg":"trace[340950780] linearizableReadLoop","detail":"{readStateIndex:1867; appliedIndex:1866; }","duration":"312.94653ms","start":"2024-09-30T21:28:03.864383Z","end":"2024-09-30T21:28:04.177329Z","steps":["trace[340950780] 'read index received'  (duration: 250.851734ms)","trace[340950780] 'applied index is now lower than readState.Index'  (duration: 62.094213ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-30T21:28:04.177438Z","caller":"traceutil/trace.go:171","msg":"trace[589247201] transaction","detail":"{read_only:false; response_revision:1579; number_of_response:1; }","duration":"335.486108ms","start":"2024-09-30T21:28:03.841941Z","end":"2024-09-30T21:28:04.177427Z","steps":["trace[589247201] 'process raft request'  (duration: 273.365162ms)","trace[589247201] 'compare'  (duration: 61.937493ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-30T21:28:04.177577Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T21:28:03.841924Z","time spent":"335.546696ms","remote":"127.0.0.1:47472","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.61.93\" mod_revision:1571 > success:<request_put:<key:\"/registry/masterleases/192.168.61.93\" value_size:66 lease:3290603301770918022 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.93\" > >"}
	{"level":"warn","ts":"2024-09-30T21:28:04.177867Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"313.478008ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-6867b74b74-c2wpn.17fa21b7b5cf0260\" ","response":"range_response_count:1 size:826"}
	{"level":"info","ts":"2024-09-30T21:28:04.177912Z","caller":"traceutil/trace.go:171","msg":"trace[1191770336] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-6867b74b74-c2wpn.17fa21b7b5cf0260; range_end:; response_count:1; response_revision:1579; }","duration":"313.522112ms","start":"2024-09-30T21:28:03.864379Z","end":"2024-09-30T21:28:04.177901Z","steps":["trace[1191770336] 'agreement among raft nodes before linearized reading'  (duration: 313.348814ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T21:28:04.177936Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T21:28:03.864343Z","time spent":"313.586534ms","remote":"127.0.0.1:47550","response type":"/etcdserverpb.KV/Range","request count":0,"request size":79,"response count":1,"response size":848,"request content":"key:\"/registry/events/kube-system/metrics-server-6867b74b74-c2wpn.17fa21b7b5cf0260\" "}
	{"level":"info","ts":"2024-09-30T21:28:04.370193Z","caller":"traceutil/trace.go:171","msg":"trace[2095262169] linearizableReadLoop","detail":"{readStateIndex:1868; appliedIndex:1867; }","duration":"188.829575ms","start":"2024-09-30T21:28:04.181343Z","end":"2024-09-30T21:28:04.370173Z","steps":["trace[2095262169] 'read index received'  (duration: 118.248612ms)","trace[2095262169] 'applied index is now lower than readState.Index'  (duration: 70.580033ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-30T21:28:04.370599Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.257827ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-09-30T21:28:04.370636Z","caller":"traceutil/trace.go:171","msg":"trace[667959213] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:1579; }","duration":"189.307015ms","start":"2024-09-30T21:28:04.181319Z","end":"2024-09-30T21:28:04.370626Z","steps":["trace[667959213] 'agreement among raft nodes before linearized reading'  (duration: 189.209514ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T21:28:04.370773Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.920698ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T21:28:04.370822Z","caller":"traceutil/trace.go:171","msg":"trace[245342410] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1579; }","duration":"125.963589ms","start":"2024-09-30T21:28:04.244844Z","end":"2024-09-30T21:28:04.370808Z","steps":["trace[245342410] 'agreement among raft nodes before linearized reading'  (duration: 125.909912ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:28:22 up 21 min,  0 users,  load average: 0.25, 0.13, 0.10
	Linux no-preload-997816 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122] <==
	I0930 21:23:53.441960       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0930 21:23:53.442032       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0930 21:25:53.442506       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:25:53.442794       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0930 21:25:53.442861       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:25:53.442933       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0930 21:25:53.444098       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0930 21:25:53.444170       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0930 21:27:52.442711       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:27:52.442852       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0930 21:27:53.444416       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:27:53.444466       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0930 21:27:53.444626       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:27:53.444813       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0930 21:27:53.445603       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0930 21:27:53.446705       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf] <==
	E0930 21:22:56.159274       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:22:56.608863       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:23:26.165185       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:23:26.616486       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0930 21:23:40.757153       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-997816"
	E0930 21:23:56.175794       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:23:56.623757       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0930 21:24:09.874915       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="240.45µs"
	I0930 21:24:24.875311       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="52.379µs"
	E0930 21:24:26.182099       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:24:26.630657       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:24:56.188382       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:24:56.638359       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:25:26.195854       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:25:26.647878       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:25:56.205796       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:25:56.657268       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:26:26.212524       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:26:26.664916       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:26:56.218869       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:26:56.672217       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:27:26.225372       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:27:26.680230       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:27:56.238611       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:27:56.689063       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 21:07:53.718933       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 21:07:53.744146       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.93"]
	E0930 21:07:53.744240       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 21:07:53.782382       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 21:07:53.782467       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 21:07:53.782492       1 server_linux.go:169] "Using iptables Proxier"
	I0930 21:07:53.785794       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 21:07:53.786641       1 server.go:483] "Version info" version="v1.31.1"
	I0930 21:07:53.786714       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 21:07:53.790873       1 config.go:199] "Starting service config controller"
	I0930 21:07:53.791413       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 21:07:53.791476       1 config.go:328] "Starting node config controller"
	I0930 21:07:53.791484       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 21:07:53.792307       1 config.go:105] "Starting endpoint slice config controller"
	I0930 21:07:53.796799       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 21:07:53.892150       1 shared_informer.go:320] Caches are synced for service config
	I0930 21:07:53.892250       1 shared_informer.go:320] Caches are synced for node config
	I0930 21:07:53.897706       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c] <==
	I0930 21:07:50.648341       1 serving.go:386] Generated self-signed cert in-memory
	W0930 21:07:52.340044       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0930 21:07:52.340207       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0930 21:07:52.340299       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0930 21:07:52.340334       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0930 21:07:52.448521       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0930 21:07:52.449042       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 21:07:52.457582       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0930 21:07:52.457892       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0930 21:07:52.458121       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0930 21:07:52.458634       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0930 21:07:52.564671       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 21:27:08 no-preload-997816 kubelet[1362]: E0930 21:27:08.172385    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731628171563369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:27:18 no-preload-997816 kubelet[1362]: E0930 21:27:18.173793    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731638173464482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:27:18 no-preload-997816 kubelet[1362]: E0930 21:27:18.173822    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731638173464482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:27:21 no-preload-997816 kubelet[1362]: E0930 21:27:21.858946    1362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-c2wpn" podUID="2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82"
	Sep 30 21:27:28 no-preload-997816 kubelet[1362]: E0930 21:27:28.175950    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731648175474887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:27:28 no-preload-997816 kubelet[1362]: E0930 21:27:28.176592    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731648175474887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:27:35 no-preload-997816 kubelet[1362]: E0930 21:27:35.859380    1362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-c2wpn" podUID="2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82"
	Sep 30 21:27:38 no-preload-997816 kubelet[1362]: E0930 21:27:38.178702    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731658178227592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:27:38 no-preload-997816 kubelet[1362]: E0930 21:27:38.178771    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731658178227592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:27:47 no-preload-997816 kubelet[1362]: E0930 21:27:47.892199    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 21:27:47 no-preload-997816 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 21:27:47 no-preload-997816 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 21:27:47 no-preload-997816 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 21:27:47 no-preload-997816 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 21:27:48 no-preload-997816 kubelet[1362]: E0930 21:27:48.181376    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731668180717251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:27:48 no-preload-997816 kubelet[1362]: E0930 21:27:48.181403    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731668180717251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:27:48 no-preload-997816 kubelet[1362]: E0930 21:27:48.859033    1362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-c2wpn" podUID="2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82"
	Sep 30 21:27:58 no-preload-997816 kubelet[1362]: E0930 21:27:58.184373    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731678183724756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:27:58 no-preload-997816 kubelet[1362]: E0930 21:27:58.184510    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731678183724756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:28:03 no-preload-997816 kubelet[1362]: E0930 21:28:03.862148    1362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-c2wpn" podUID="2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82"
	Sep 30 21:28:08 no-preload-997816 kubelet[1362]: E0930 21:28:08.187698    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731688187150378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:28:08 no-preload-997816 kubelet[1362]: E0930 21:28:08.188254    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731688187150378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:28:15 no-preload-997816 kubelet[1362]: E0930 21:28:15.859740    1362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-c2wpn" podUID="2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82"
	Sep 30 21:28:18 no-preload-997816 kubelet[1362]: E0930 21:28:18.190040    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731698189460670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:28:18 no-preload-997816 kubelet[1362]: E0930 21:28:18.190085    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731698189460670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e] <==
	I0930 21:07:53.490486       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0930 21:08:23.494094       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55] <==
	I0930 21:08:24.187080       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0930 21:08:24.197839       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0930 21:08:24.197914       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0930 21:08:41.595770       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0930 21:08:41.595909       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-997816_e88cb4cb-9add-4a3c-a8e3-f398658279d5!
	I0930 21:08:41.596361       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1a28e79c-ce2e-4eb8-a175-ad56e6ab22b2", APIVersion:"v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-997816_e88cb4cb-9add-4a3c-a8e3-f398658279d5 became leader
	I0930 21:08:41.696101       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-997816_e88cb4cb-9add-4a3c-a8e3-f398658279d5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-997816 -n no-preload-997816
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-997816 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-c2wpn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-997816 describe pod metrics-server-6867b74b74-c2wpn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-997816 describe pod metrics-server-6867b74b74-c2wpn: exit status 1 (61.079432ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-c2wpn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-997816 describe pod metrics-server-6867b74b74-c2wpn: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (420.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (438.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-291511 -n default-k8s-diff-port-291511
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-30 21:28:47.3347426 +0000 UTC m=+6648.039498732
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-291511 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-291511 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.553µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-291511 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
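
For reference, a minimal client-go sketch (not the test suite's own helper code) of the two checks this test performs: wait for a Running pod matching k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace, then verify that the dashboard-metrics-scraper deployment references registry.k8s.io/echoserver:1.4. The kubeconfig location, the 10s poll interval, and the error handling are assumptions for illustration.

	package main

	import (
		"context"
		"fmt"
		"strings"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()

		// Wait (up to 9m, polling every 10s) for a Running pod labelled
		// k8s-app=kubernetes-dashboard, mirroring the test's wait step.
		deadline := time.After(9 * time.Minute)
		ready := false
		for !ready {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx,
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						ready = true
					}
				}
			}
			if ready {
				break
			}
			select {
			case <-deadline:
				fmt.Println("timed out waiting for a kubernetes-dashboard pod")
				return
			case <-time.After(10 * time.Second):
			}
		}

		// Then check that the dashboard-metrics-scraper deployment carries the
		// overridden image, the assertion that produced the failure above.
		dep, err := cs.AppsV1().Deployments("kubernetes-dashboard").Get(ctx,
			"dashboard-metrics-scraper", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range dep.Spec.Template.Spec.Containers {
			if strings.Contains(c.Image, "registry.k8s.io/echoserver:1.4") {
				fmt.Println("scraper uses expected image:", c.Image)
				return
			}
		}
		fmt.Println("registry.k8s.io/echoserver:1.4 not found in dashboard-metrics-scraper")
	}

In this run the wait step never saw a Running dashboard pod, so the deadline expired and the subsequent describe/image checks inherited the exceeded context, producing the empty "Addon deployment info" above.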
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-291511 -n default-k8s-diff-port-291511
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-291511 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-291511 logs -n 25: (1.130765612s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 21:00 UTC |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-256103            | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-997816             | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-997816                                   | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-291511  | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-621406        | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-256103                 | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC | 30 Sep 24 21:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-997816                  | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-997816                                   | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC | 30 Sep 24 21:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-291511       | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:12 UTC |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-621406                              | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:03 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-621406             | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-621406                              | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-621406                              | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:27 UTC | 30 Sep 24 21:27 UTC |
	| start   | -p newest-cni-921796 --memory=2200 --alsologtostderr   | newest-cni-921796            | jenkins | v1.34.0 | 30 Sep 24 21:27 UTC | 30 Sep 24 21:28 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-921796             | newest-cni-921796            | jenkins | v1.34.0 | 30 Sep 24 21:28 UTC | 30 Sep 24 21:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-921796                                   | newest-cni-921796            | jenkins | v1.34.0 | 30 Sep 24 21:28 UTC | 30 Sep 24 21:28 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-997816                                   | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:28 UTC | 30 Sep 24 21:28 UTC |
	| addons  | enable dashboard -p newest-cni-921796                  | newest-cni-921796            | jenkins | v1.34.0 | 30 Sep 24 21:28 UTC | 30 Sep 24 21:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-921796 --memory=2200 --alsologtostderr   | newest-cni-921796            | jenkins | v1.34.0 | 30 Sep 24 21:28 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:28 UTC |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 21:28:29
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 21:28:29.558859   81355 out.go:345] Setting OutFile to fd 1 ...
	I0930 21:28:29.558975   81355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:28:29.558982   81355 out.go:358] Setting ErrFile to fd 2...
	I0930 21:28:29.558986   81355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:28:29.559151   81355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 21:28:29.559714   81355 out.go:352] Setting JSON to false
	I0930 21:28:29.560703   81355 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7853,"bootTime":1727723857,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 21:28:29.560798   81355 start.go:139] virtualization: kvm guest
	I0930 21:28:29.563045   81355 out.go:177] * [newest-cni-921796] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 21:28:29.564360   81355 notify.go:220] Checking for updates...
	I0930 21:28:29.564381   81355 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 21:28:29.565658   81355 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 21:28:29.566901   81355 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:28:29.568143   81355 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 21:28:29.569360   81355 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 21:28:29.570558   81355 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 21:28:29.572058   81355 config.go:182] Loaded profile config "newest-cni-921796": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:28:29.572472   81355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:28:29.572535   81355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:28:29.588218   81355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43021
	I0930 21:28:29.588631   81355 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:28:29.589174   81355 main.go:141] libmachine: Using API Version  1
	I0930 21:28:29.589195   81355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:28:29.589488   81355 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:28:29.589649   81355 main.go:141] libmachine: (newest-cni-921796) Calling .DriverName
	I0930 21:28:29.589849   81355 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 21:28:29.590124   81355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:28:29.590156   81355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:28:29.605458   81355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37965
	I0930 21:28:29.605868   81355 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:28:29.606277   81355 main.go:141] libmachine: Using API Version  1
	I0930 21:28:29.606300   81355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:28:29.606643   81355 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:28:29.606843   81355 main.go:141] libmachine: (newest-cni-921796) Calling .DriverName
	I0930 21:28:29.643495   81355 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 21:28:29.644653   81355 start.go:297] selected driver: kvm2
	I0930 21:28:29.644670   81355 start.go:901] validating driver "kvm2" against &{Name:newest-cni-921796 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:newest-cni-921796 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] Sta
rtHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:28:29.644792   81355 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 21:28:29.645576   81355 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 21:28:29.645668   81355 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 21:28:29.660861   81355 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 21:28:29.661262   81355 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0930 21:28:29.661288   81355 cni.go:84] Creating CNI manager for ""
	I0930 21:28:29.661333   81355 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:28:29.661382   81355 start.go:340] cluster config:
	{Name:newest-cni-921796 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-921796 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:
Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:28:29.661510   81355 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 21:28:29.663389   81355 out.go:177] * Starting "newest-cni-921796" primary control-plane node in "newest-cni-921796" cluster
	I0930 21:28:29.664689   81355 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 21:28:29.664735   81355 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 21:28:29.664747   81355 cache.go:56] Caching tarball of preloaded images
	I0930 21:28:29.664881   81355 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 21:28:29.664902   81355 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 21:28:29.665032   81355 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/config.json ...
	I0930 21:28:29.665254   81355 start.go:360] acquireMachinesLock for newest-cni-921796: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 21:28:29.665313   81355 start.go:364] duration metric: took 38.485µs to acquireMachinesLock for "newest-cni-921796"
	I0930 21:28:29.665338   81355 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:28:29.665344   81355 fix.go:54] fixHost starting: 
	I0930 21:28:29.665723   81355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:28:29.665777   81355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:28:29.681423   81355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45499
	I0930 21:28:29.681812   81355 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:28:29.682317   81355 main.go:141] libmachine: Using API Version  1
	I0930 21:28:29.682339   81355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:28:29.682671   81355 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:28:29.682875   81355 main.go:141] libmachine: (newest-cni-921796) Calling .DriverName
	I0930 21:28:29.683049   81355 main.go:141] libmachine: (newest-cni-921796) Calling .GetState
	I0930 21:28:29.684786   81355 fix.go:112] recreateIfNeeded on newest-cni-921796: state=Stopped err=<nil>
	I0930 21:28:29.684821   81355 main.go:141] libmachine: (newest-cni-921796) Calling .DriverName
	W0930 21:28:29.684966   81355 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:28:29.687015   81355 out.go:177] * Restarting existing kvm2 VM for "newest-cni-921796" ...
	I0930 21:28:29.688165   81355 main.go:141] libmachine: (newest-cni-921796) Calling .Start
	I0930 21:28:29.688328   81355 main.go:141] libmachine: (newest-cni-921796) Ensuring networks are active...
	I0930 21:28:29.689157   81355 main.go:141] libmachine: (newest-cni-921796) Ensuring network default is active
	I0930 21:28:29.689514   81355 main.go:141] libmachine: (newest-cni-921796) Ensuring network mk-newest-cni-921796 is active
	I0930 21:28:29.689981   81355 main.go:141] libmachine: (newest-cni-921796) Getting domain xml...
	I0930 21:28:29.690777   81355 main.go:141] libmachine: (newest-cni-921796) Creating domain...
	I0930 21:28:30.946144   81355 main.go:141] libmachine: (newest-cni-921796) Waiting to get IP...
	I0930 21:28:30.947109   81355 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:30.947584   81355 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:28:30.947674   81355 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:28:30.947561   81390 retry.go:31] will retry after 251.042431ms: waiting for machine to come up
	I0930 21:28:31.200216   81355 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:31.200819   81355 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:28:31.200848   81355 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:28:31.200769   81390 retry.go:31] will retry after 366.226522ms: waiting for machine to come up
	I0930 21:28:31.568201   81355 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:31.568704   81355 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:28:31.568731   81355 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:28:31.568657   81390 retry.go:31] will retry after 482.419405ms: waiting for machine to come up
	I0930 21:28:32.052302   81355 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:32.052679   81355 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:28:32.052713   81355 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:28:32.052645   81390 retry.go:31] will retry after 462.599845ms: waiting for machine to come up
	I0930 21:28:32.517500   81355 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:32.517976   81355 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:28:32.518003   81355 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:28:32.517939   81390 retry.go:31] will retry after 748.053277ms: waiting for machine to come up
	I0930 21:28:33.268010   81355 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:33.268444   81355 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:28:33.268473   81355 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:28:33.268392   81390 retry.go:31] will retry after 838.241896ms: waiting for machine to come up
	I0930 21:28:34.108458   81355 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:34.108907   81355 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:28:34.108932   81355 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:28:34.108882   81390 retry.go:31] will retry after 956.573985ms: waiting for machine to come up
	I0930 21:28:35.066602   81355 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:35.067104   81355 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:28:35.067130   81355 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:28:35.067037   81390 retry.go:31] will retry after 1.244540008s: waiting for machine to come up
	I0930 21:28:36.312866   81355 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:36.313305   81355 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:28:36.313332   81355 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:28:36.313245   81390 retry.go:31] will retry after 1.627867866s: waiting for machine to come up
	I0930 21:28:37.942984   81355 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:37.943436   81355 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:28:37.943459   81355 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:28:37.943398   81390 retry.go:31] will retry after 1.771233504s: waiting for machine to come up
	I0930 21:28:39.716783   81355 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:39.717345   81355 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:28:39.717371   81355 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:28:39.717289   81390 retry.go:31] will retry after 2.112399171s: waiting for machine to come up
	I0930 21:28:41.832273   81355 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:41.832745   81355 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:28:41.832766   81355 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:28:41.832712   81390 retry.go:31] will retry after 2.972790213s: waiting for machine to come up
	
	
	==> CRI-O <==
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.572532381Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731728572456805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23c438c0-7932-4797-8c67-0d7c80fe5086 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.573214101Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46ff6a93-3585-4d35-8e82-8e318f366054 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.573287298Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46ff6a93-3585-4d35-8e82-8e318f366054 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.573671763Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd,PodSandboxId:98b3fb072cb5d251782ad741ebbe39fd8cad18d6c7df8800b4a19bb003bdde07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727730514306129258,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32053345-1ff9-45b1-aa70-e746926b305d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ee5915b6ae16b96cd663ee230ec2be38c102dc2fa2dc69df5ab339dc8491be,PodSandboxId:222548d08e8ca6dedc5cefa4101645feb196c7513bf31036f3b2ad6fa8a480ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727730494782013233,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34406fdf-7b58-4457-ae9f-712885f7dd29,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49,PodSandboxId:e1c9eb6432e4d71ab5da7fbf52fbc0ae5e06c3c3e846e61d3afdf121e8dce90c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730491188667347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hdjjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5672cd58-4d3f-409e-b279-f4027fe09aea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8,PodSandboxId:42211a70b47f66293db0d93fab4943057f14074d5ef5295ac87fc17e7920c604,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727730483519285586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kwp22,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e5295f-3
aaa-4222-a61a-942354f79f9b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342,PodSandboxId:98b3fb072cb5d251782ad741ebbe39fd8cad18d6c7df8800b4a19bb003bdde07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727730483505090273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32053345-1ff9-45b1-aa70
-e746926b305d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711,PodSandboxId:f79dc667d99fdb19116453c544fd2237d1d54bbcaab691521d0e060e788947f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727730478833366464,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fece16652c16bcf190a3661de3d4efe0,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4,PodSandboxId:d49bb2fcbc5f1ed5d4230afdcfb01762dfbd7f34d75b5250e1fe6ef46d571e06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727730478747062794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5e89d6165ff01d08a4db0c2b1d86676,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140,PodSandboxId:e10f5499f6f3cc25491e1828871ddde819bb03b833cc49805b280430b8f24e8a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727730478773216547,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a4ca8c9198bea8670b6f35051fdd299,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8,PodSandboxId:e2b3fcdb417f9947d8b24abe8415a54815bbb4ec75b831eb72a302c1eef787b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727730478768576247,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180e5819899b337683f2e15f3bad06
9a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=46ff6a93-3585-4d35-8e82-8e318f366054 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.610157154Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b266475d-b08a-4e8e-a5a1-fa88463cd497 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.610240564Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b266475d-b08a-4e8e-a5a1-fa88463cd497 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.611338087Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ae6dbec-5a37-4199-b842-d4713397440b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.612035766Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731728612009471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ae6dbec-5a37-4199-b842-d4713397440b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.612772085Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc8c47eb-d56d-47d0-a034-03bdc619fcd2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.613212116Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc8c47eb-d56d-47d0-a034-03bdc619fcd2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.613714980Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd,PodSandboxId:98b3fb072cb5d251782ad741ebbe39fd8cad18d6c7df8800b4a19bb003bdde07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727730514306129258,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32053345-1ff9-45b1-aa70-e746926b305d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ee5915b6ae16b96cd663ee230ec2be38c102dc2fa2dc69df5ab339dc8491be,PodSandboxId:222548d08e8ca6dedc5cefa4101645feb196c7513bf31036f3b2ad6fa8a480ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727730494782013233,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34406fdf-7b58-4457-ae9f-712885f7dd29,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49,PodSandboxId:e1c9eb6432e4d71ab5da7fbf52fbc0ae5e06c3c3e846e61d3afdf121e8dce90c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730491188667347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hdjjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5672cd58-4d3f-409e-b279-f4027fe09aea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8,PodSandboxId:42211a70b47f66293db0d93fab4943057f14074d5ef5295ac87fc17e7920c604,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727730483519285586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kwp22,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e5295f-3
aaa-4222-a61a-942354f79f9b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342,PodSandboxId:98b3fb072cb5d251782ad741ebbe39fd8cad18d6c7df8800b4a19bb003bdde07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727730483505090273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32053345-1ff9-45b1-aa70
-e746926b305d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711,PodSandboxId:f79dc667d99fdb19116453c544fd2237d1d54bbcaab691521d0e060e788947f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727730478833366464,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fece16652c16bcf190a3661de3d4efe0,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4,PodSandboxId:d49bb2fcbc5f1ed5d4230afdcfb01762dfbd7f34d75b5250e1fe6ef46d571e06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727730478747062794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5e89d6165ff01d08a4db0c2b1d86676,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140,PodSandboxId:e10f5499f6f3cc25491e1828871ddde819bb03b833cc49805b280430b8f24e8a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727730478773216547,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a4ca8c9198bea8670b6f35051fdd299,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8,PodSandboxId:e2b3fcdb417f9947d8b24abe8415a54815bbb4ec75b831eb72a302c1eef787b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727730478768576247,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180e5819899b337683f2e15f3bad06
9a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc8c47eb-d56d-47d0-a034-03bdc619fcd2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.649364362Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fb5ab551-0d01-4535-bd7d-0a85e289112c name=/runtime.v1.RuntimeService/Version
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.649447340Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fb5ab551-0d01-4535-bd7d-0a85e289112c name=/runtime.v1.RuntimeService/Version
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.650884630Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3363ec35-3416-449d-b812-b49482d8431a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.651287771Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731728651265440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3363ec35-3416-449d-b812-b49482d8431a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.651894698Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=faca58ea-78bd-474e-b28e-47251540e01c name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.651957127Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=faca58ea-78bd-474e-b28e-47251540e01c name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.652165060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd,PodSandboxId:98b3fb072cb5d251782ad741ebbe39fd8cad18d6c7df8800b4a19bb003bdde07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727730514306129258,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32053345-1ff9-45b1-aa70-e746926b305d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ee5915b6ae16b96cd663ee230ec2be38c102dc2fa2dc69df5ab339dc8491be,PodSandboxId:222548d08e8ca6dedc5cefa4101645feb196c7513bf31036f3b2ad6fa8a480ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727730494782013233,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34406fdf-7b58-4457-ae9f-712885f7dd29,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49,PodSandboxId:e1c9eb6432e4d71ab5da7fbf52fbc0ae5e06c3c3e846e61d3afdf121e8dce90c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730491188667347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hdjjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5672cd58-4d3f-409e-b279-f4027fe09aea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8,PodSandboxId:42211a70b47f66293db0d93fab4943057f14074d5ef5295ac87fc17e7920c604,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727730483519285586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kwp22,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e5295f-3
aaa-4222-a61a-942354f79f9b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342,PodSandboxId:98b3fb072cb5d251782ad741ebbe39fd8cad18d6c7df8800b4a19bb003bdde07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727730483505090273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32053345-1ff9-45b1-aa70
-e746926b305d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711,PodSandboxId:f79dc667d99fdb19116453c544fd2237d1d54bbcaab691521d0e060e788947f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727730478833366464,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fece16652c16bcf190a3661de3d4efe0,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4,PodSandboxId:d49bb2fcbc5f1ed5d4230afdcfb01762dfbd7f34d75b5250e1fe6ef46d571e06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727730478747062794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5e89d6165ff01d08a4db0c2b1d86676,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140,PodSandboxId:e10f5499f6f3cc25491e1828871ddde819bb03b833cc49805b280430b8f24e8a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727730478773216547,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a4ca8c9198bea8670b6f35051fdd299,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8,PodSandboxId:e2b3fcdb417f9947d8b24abe8415a54815bbb4ec75b831eb72a302c1eef787b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727730478768576247,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180e5819899b337683f2e15f3bad06
9a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=faca58ea-78bd-474e-b28e-47251540e01c name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.684027473Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36955b0a-efb1-4b92-a524-2a4256f3294a name=/runtime.v1.RuntimeService/Version
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.684118183Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36955b0a-efb1-4b92-a524-2a4256f3294a name=/runtime.v1.RuntimeService/Version
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.686425885Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ffbc8a73-b675-4cbd-8494-c74d8f7d07cf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.686972542Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731728686946340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ffbc8a73-b675-4cbd-8494-c74d8f7d07cf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.687498145Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=368ee72f-01f3-42c9-9d34-ba1ce751017a name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.687566579Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=368ee72f-01f3-42c9-9d34-ba1ce751017a name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:48 default-k8s-diff-port-291511 crio[720]: time="2024-09-30 21:28:48.687802243Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd,PodSandboxId:98b3fb072cb5d251782ad741ebbe39fd8cad18d6c7df8800b4a19bb003bdde07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727730514306129258,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32053345-1ff9-45b1-aa70-e746926b305d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ee5915b6ae16b96cd663ee230ec2be38c102dc2fa2dc69df5ab339dc8491be,PodSandboxId:222548d08e8ca6dedc5cefa4101645feb196c7513bf31036f3b2ad6fa8a480ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727730494782013233,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34406fdf-7b58-4457-ae9f-712885f7dd29,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49,PodSandboxId:e1c9eb6432e4d71ab5da7fbf52fbc0ae5e06c3c3e846e61d3afdf121e8dce90c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730491188667347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hdjjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5672cd58-4d3f-409e-b279-f4027fe09aea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8,PodSandboxId:42211a70b47f66293db0d93fab4943057f14074d5ef5295ac87fc17e7920c604,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727730483519285586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kwp22,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e5295f-3
aaa-4222-a61a-942354f79f9b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342,PodSandboxId:98b3fb072cb5d251782ad741ebbe39fd8cad18d6c7df8800b4a19bb003bdde07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727730483505090273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32053345-1ff9-45b1-aa70
-e746926b305d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711,PodSandboxId:f79dc667d99fdb19116453c544fd2237d1d54bbcaab691521d0e060e788947f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727730478833366464,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fece16652c16bcf190a3661de3d4efe0,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4,PodSandboxId:d49bb2fcbc5f1ed5d4230afdcfb01762dfbd7f34d75b5250e1fe6ef46d571e06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727730478747062794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5e89d6165ff01d08a4db0c2b1d86676,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140,PodSandboxId:e10f5499f6f3cc25491e1828871ddde819bb03b833cc49805b280430b8f24e8a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727730478773216547,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a4ca8c9198bea8670b6f35051fdd299,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8,PodSandboxId:e2b3fcdb417f9947d8b24abe8415a54815bbb4ec75b831eb72a302c1eef787b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727730478768576247,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-291511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180e5819899b337683f2e15f3bad06
9a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=368ee72f-01f3-42c9-9d34-ba1ce751017a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3f81706851d1c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       3                   98b3fb072cb5d       storage-provisioner
	a1ee5915b6ae1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   222548d08e8ca       busybox
	ec71e052062dc       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      20 minutes ago      Running             coredns                   1                   e1c9eb6432e4d       coredns-7c65d6cfc9-hdjjq
	5e4ebb7ceb7e6       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      20 minutes ago      Running             kube-proxy                1                   42211a70b47f6       kube-proxy-kwp22
	1822eaafdd4d9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       2                   98b3fb072cb5d       storage-provisioner
	7e53b1ee3c16b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      20 minutes ago      Running             etcd                      1                   f79dc667d99fd       etcd-default-k8s-diff-port-291511
	f197afcf3b28b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      20 minutes ago      Running             kube-apiserver            1                   e10f5499f6f3c       kube-apiserver-default-k8s-diff-port-291511
	d1119782e608c       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      20 minutes ago      Running             kube-controller-manager   1                   e2b3fcdb417f9       kube-controller-manager-default-k8s-diff-port-291511
	0a84556ba1073       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      20 minutes ago      Running             kube-scheduler            1                   d49bb2fcbc5f1       kube-scheduler-default-k8s-diff-port-291511
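	The table above is the CRI view of the node. If it ever needs re-checking by hand, an equivalent listing can be pulled straight from the CRI-O socket; this is a sketch, not part of the captured output, with the profile name taken from the node name in this log and the socket path from the kubeadm cri-socket annotation shown below:
	
	  # list all containers (running and exited) through the CRI API
	  minikube -p default-k8s-diff-port-291511 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a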
	
	
	==> coredns [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50516 - 34933 "HINFO IN 5976675863271297143.6033242033858797482. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013316297s
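	The single HINFO lookup above is CoreDNS's startup self-test, and the NXDOMAIN answer is expected. If in-cluster DNS needed re-checking, a minimal probe along these lines would do; the context name is assumed to match the profile, k8s-app=kube-dns is the standard CoreDNS selector, and the busybox image is the one already used elsewhere in this run:
	
	  kubectl --context default-k8s-diff-port-291511 -n kube-system logs -l k8s-app=kube-dns --tail=20
	  kubectl --context default-k8s-diff-port-291511 run dns-probe --rm -it --restart=Never \
	    --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default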
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-291511
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-291511
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=default-k8s-diff-port-291511
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T21_00_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 20:59:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-291511
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 21:28:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 21:23:51 +0000   Mon, 30 Sep 2024 20:59:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 21:23:51 +0000   Mon, 30 Sep 2024 20:59:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 21:23:51 +0000   Mon, 30 Sep 2024 20:59:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 21:23:51 +0000   Mon, 30 Sep 2024 21:08:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.2
	  Hostname:    default-k8s-diff-port-291511
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d5c8e195f4341288565205b7d02a6d2
	  System UUID:                1d5c8e19-5f43-4128-8565-205b7d02a6d2
	  Boot ID:                    e07d2f31-3d59-4b81-bb95-03dc31c61a54
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-hdjjq                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-default-k8s-diff-port-291511                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-default-k8s-diff-port-291511             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-291511    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-kwp22                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-default-k8s-diff-port-291511             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-txb2j                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node default-k8s-diff-port-291511 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node default-k8s-diff-port-291511 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node default-k8s-diff-port-291511 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node default-k8s-diff-port-291511 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node default-k8s-diff-port-291511 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node default-k8s-diff-port-291511 status is now: NodeHasSufficientPID
	  Normal  NodeReady                28m                kubelet          Node default-k8s-diff-port-291511 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node default-k8s-diff-port-291511 event: Registered Node default-k8s-diff-port-291511 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-291511 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-291511 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node default-k8s-diff-port-291511 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-291511 event: Registered Node default-k8s-diff-port-291511 in Controller
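	The block above is effectively a `kubectl describe node` of the control-plane machine at 21:28, and nothing in it points at the node itself: Ready since 21:08 with no memory, disk, or PID pressure. To re-check just the node state without the full dump, something like the following is enough, with the context name assumed from the profile:
	
	  kubectl --context default-k8s-diff-port-291511 get node default-k8s-diff-port-291511 -o wide
	  kubectl --context default-k8s-diff-port-291511 describe node default-k8s-diff-port-291511 | grep -A 8 'Conditions:'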
	
	
	==> dmesg <==
	[Sep30 21:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051482] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039217] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.841694] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.954501] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.579250] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.147945] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.064668] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068590] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.205113] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.150720] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.316837] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[  +4.363768] systemd-fstab-generator[801]: Ignoring "noauto" option for root device
	[  +0.056710] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.283674] systemd-fstab-generator[921]: Ignoring "noauto" option for root device
	[Sep30 21:08] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.891051] systemd-fstab-generator[1541]: Ignoring "noauto" option for root device
	[  +3.789527] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.736069] kauditd_printk_skb: 44 callbacks suppressed
	
	
	==> etcd [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711] <==
	{"level":"warn","ts":"2024-09-30T21:08:17.339998Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"448.223216ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-291511\" ","response":"range_response_count:1 size:5535"}
	{"level":"info","ts":"2024-09-30T21:08:17.340106Z","caller":"traceutil/trace.go:171","msg":"trace[2012609444] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-291511; range_end:; response_count:1; response_revision:640; }","duration":"448.281864ms","start":"2024-09-30T21:08:16.891745Z","end":"2024-09-30T21:08:17.340027Z","steps":["trace[2012609444] 'agreement among raft nodes before linearized reading'  (duration: 448.148639ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T21:08:17.340147Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T21:08:16.891703Z","time spent":"448.43399ms","remote":"127.0.0.1:43798","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5558,"request content":"key:\"/registry/minions/default-k8s-diff-port-291511\" "}
	{"level":"warn","ts":"2024-09-30T21:08:17.340303Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.240405ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T21:08:17.340334Z","caller":"traceutil/trace.go:171","msg":"trace[835079800] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:640; }","duration":"279.273139ms","start":"2024-09-30T21:08:17.061056Z","end":"2024-09-30T21:08:17.340329Z","steps":["trace[835079800] 'agreement among raft nodes before linearized reading'  (duration: 279.228434ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T21:08:39.057544Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.491727ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2569464411440859234 > lease_revoke:<id:23a89244c2f0c7e0>","response":"size:28"}
	{"level":"info","ts":"2024-09-30T21:18:00.843699Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":883}
	{"level":"info","ts":"2024-09-30T21:18:00.855489Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":883,"took":"11.403889ms","hash":2973093467,"current-db-size-bytes":2879488,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2879488,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2024-09-30T21:18:00.855550Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2973093467,"revision":883,"compact-revision":-1}
	{"level":"info","ts":"2024-09-30T21:23:00.850973Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1126}
	{"level":"info","ts":"2024-09-30T21:23:00.855749Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1126,"took":"4.264562ms","hash":1178701655,"current-db-size-bytes":2879488,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1654784,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-09-30T21:23:00.855859Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1178701655,"revision":1126,"compact-revision":883}
	{"level":"info","ts":"2024-09-30T21:28:00.858716Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1370}
	{"level":"info","ts":"2024-09-30T21:28:00.863915Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1370,"took":"4.721109ms","hash":551758022,"current-db-size-bytes":2879488,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1626112,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-30T21:28:00.864006Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":551758022,"revision":1370,"compact-revision":1126}
	{"level":"warn","ts":"2024-09-30T21:28:03.971664Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.271141ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T21:28:03.971837Z","caller":"traceutil/trace.go:171","msg":"trace[895156850] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1614; }","duration":"157.535083ms","start":"2024-09-30T21:28:03.814266Z","end":"2024-09-30T21:28:03.971802Z","steps":["trace[895156850] 'range keys from in-memory index tree'  (duration: 157.256225ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T21:28:03.972656Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.888179ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2569464411440866911 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:23a89244c2f0e65e>","response":"size:40"}
	{"level":"info","ts":"2024-09-30T21:28:03.972748Z","caller":"traceutil/trace.go:171","msg":"trace[212165741] linearizableReadLoop","detail":"{readStateIndex:1902; appliedIndex:1901; }","duration":"328.684291ms","start":"2024-09-30T21:28:03.644051Z","end":"2024-09-30T21:28:03.972736Z","steps":["trace[212165741] 'read index received'  (duration: 205.506644ms)","trace[212165741] 'applied index is now lower than readState.Index'  (duration: 123.176642ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-30T21:28:03.972825Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T21:28:03.620503Z","time spent":"352.312852ms","remote":"127.0.0.1:43620","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-09-30T21:28:03.973018Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.794374ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-30T21:28:03.973631Z","caller":"traceutil/trace.go:171","msg":"trace[2072182906] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; response_count:0; response_revision:1614; }","duration":"247.415263ms","start":"2024-09-30T21:28:03.726205Z","end":"2024-09-30T21:28:03.973620Z","steps":["trace[2072182906] 'agreement among raft nodes before linearized reading'  (duration: 246.701316ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T21:28:03.973107Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"329.051806ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T21:28:03.973786Z","caller":"traceutil/trace.go:171","msg":"trace[486461874] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1614; }","duration":"329.745164ms","start":"2024-09-30T21:28:03.644033Z","end":"2024-09-30T21:28:03.973778Z","steps":["trace[486461874] 'agreement among raft nodes before linearized reading'  (duration: 329.033308ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T21:28:03.973836Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T21:28:03.643995Z","time spent":"329.825072ms","remote":"127.0.0.1:43808","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	
	
	==> kernel <==
	 21:28:49 up 21 min,  0 users,  load average: 0.04, 0.07, 0.04
	Linux default-k8s-diff-port-291511 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140] <==
	I0930 21:24:03.305418       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0930 21:24:03.305493       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0930 21:26:03.306023       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:26:03.306156       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0930 21:26:03.306353       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:26:03.306443       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0930 21:26:03.307294       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0930 21:26:03.308445       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0930 21:28:02.304146       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:28:02.304514       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0930 21:28:03.307286       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:28:03.307377       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0930 21:28:03.307537       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:28:03.307635       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0930 21:28:03.308507       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0930 21:28:03.309649       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
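	Every error in this block is the same failure: the aggregation layer cannot reach the metrics-server backend for v1beta1.metrics.k8s.io, so OpenAPI refreshes get HTTP 503 and are requeued. That matches the kubelet's ImagePullBackOff for the metrics-server pod's fake.domain image further down. The usual confirmation is to look at the APIService and the backing pod; the context name is assumed from the profile, and the pod name is the one reported in this log:
	
	  kubectl --context default-k8s-diff-port-291511 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context default-k8s-diff-port-291511 -n kube-system describe pod metrics-server-6867b74b74-txb2j | grep -A 10 'Events:'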
	
	
	==> kube-controller-manager [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8] <==
	E0930 21:23:35.983335       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:23:36.453126       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0930 21:23:51.372034       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-291511"
	E0930 21:24:05.990029       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:24:06.461306       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0930 21:24:20.116294       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="226.032µs"
	I0930 21:24:35.115083       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="53.206µs"
	E0930 21:24:35.996204       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:24:36.469490       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:25:06.003645       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:25:06.476790       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:25:36.010334       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:25:36.484514       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:26:06.017023       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:26:06.492240       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:26:36.023434       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:26:36.499522       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:27:06.030420       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:27:06.508254       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:27:36.036664       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:27:36.515830       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:28:06.043488       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:28:06.524366       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:28:36.049493       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:28:36.532359       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
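	These repeating resource-quota and garbage-collector discovery errors share the root cause seen in the apiserver log above: metrics.k8s.io/v1beta1 is registered but unavailable, so group discovery keeps returning a stale entry. They should stop once the APIService reports Available=True; a quick check (context assumed from the profile) is:
	
	  kubectl --context default-k8s-diff-port-291511 get apiservices | grep metrics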
	
	
	==> kube-proxy [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 21:08:03.791140       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 21:08:03.808068       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.2"]
	E0930 21:08:03.808182       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 21:08:03.855007       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 21:08:03.855054       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 21:08:03.855080       1 server_linux.go:169] "Using iptables Proxier"
	I0930 21:08:03.864236       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 21:08:03.864483       1 server.go:483] "Version info" version="v1.31.1"
	I0930 21:08:03.864509       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 21:08:03.866358       1 config.go:199] "Starting service config controller"
	I0930 21:08:03.866385       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 21:08:03.866408       1 config.go:105] "Starting endpoint slice config controller"
	I0930 21:08:03.866412       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 21:08:03.866828       1 config.go:328] "Starting node config controller"
	I0930 21:08:03.866835       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 21:08:03.967048       1 shared_informer.go:320] Caches are synced for node config
	I0930 21:08:03.967185       1 shared_informer.go:320] Caches are synced for service config
	I0930 21:08:03.967196       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4] <==
	I0930 21:07:59.719109       1 serving.go:386] Generated self-signed cert in-memory
	W0930 21:08:02.203676       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0930 21:08:02.203821       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0930 21:08:02.204392       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0930 21:08:02.204493       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0930 21:08:02.245712       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0930 21:08:02.245852       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 21:08:02.247843       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0930 21:08:02.248122       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0930 21:08:02.248204       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0930 21:08:02.248303       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0930 21:08:02.349128       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 21:27:41 default-k8s-diff-port-291511 kubelet[928]: E0930 21:27:41.101783     928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-txb2j" podUID="6f0ec8d2-5528-4f70-807c-42cbabae23bb"
	Sep 30 21:27:48 default-k8s-diff-port-291511 kubelet[928]: E0930 21:27:48.377121     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731668376669231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:27:48 default-k8s-diff-port-291511 kubelet[928]: E0930 21:27:48.377164     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731668376669231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:27:55 default-k8s-diff-port-291511 kubelet[928]: E0930 21:27:55.101490     928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-txb2j" podUID="6f0ec8d2-5528-4f70-807c-42cbabae23bb"
	Sep 30 21:27:58 default-k8s-diff-port-291511 kubelet[928]: E0930 21:27:58.122584     928 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 21:27:58 default-k8s-diff-port-291511 kubelet[928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 21:27:58 default-k8s-diff-port-291511 kubelet[928]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 21:27:58 default-k8s-diff-port-291511 kubelet[928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 21:27:58 default-k8s-diff-port-291511 kubelet[928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 21:27:58 default-k8s-diff-port-291511 kubelet[928]: E0930 21:27:58.382549     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731678381455389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:27:58 default-k8s-diff-port-291511 kubelet[928]: E0930 21:27:58.382650     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731678381455389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:28:08 default-k8s-diff-port-291511 kubelet[928]: E0930 21:28:08.385932     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731688384917938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:28:08 default-k8s-diff-port-291511 kubelet[928]: E0930 21:28:08.386678     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731688384917938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:28:09 default-k8s-diff-port-291511 kubelet[928]: E0930 21:28:09.101385     928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-txb2j" podUID="6f0ec8d2-5528-4f70-807c-42cbabae23bb"
	Sep 30 21:28:18 default-k8s-diff-port-291511 kubelet[928]: E0930 21:28:18.388967     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731698388494343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:28:18 default-k8s-diff-port-291511 kubelet[928]: E0930 21:28:18.389003     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731698388494343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:28:21 default-k8s-diff-port-291511 kubelet[928]: E0930 21:28:21.101849     928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-txb2j" podUID="6f0ec8d2-5528-4f70-807c-42cbabae23bb"
	Sep 30 21:28:28 default-k8s-diff-port-291511 kubelet[928]: E0930 21:28:28.390702     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731708390223642,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:28:28 default-k8s-diff-port-291511 kubelet[928]: E0930 21:28:28.390756     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731708390223642,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:28:33 default-k8s-diff-port-291511 kubelet[928]: E0930 21:28:33.101696     928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-txb2j" podUID="6f0ec8d2-5528-4f70-807c-42cbabae23bb"
	Sep 30 21:28:38 default-k8s-diff-port-291511 kubelet[928]: E0930 21:28:38.392986     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731718392339687,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:28:38 default-k8s-diff-port-291511 kubelet[928]: E0930 21:28:38.393029     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731718392339687,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:28:46 default-k8s-diff-port-291511 kubelet[928]: E0930 21:28:46.101429     928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-txb2j" podUID="6f0ec8d2-5528-4f70-807c-42cbabae23bb"
	Sep 30 21:28:48 default-k8s-diff-port-291511 kubelet[928]: E0930 21:28:48.395225     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731728394870070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:28:48 default-k8s-diff-port-291511 kubelet[928]: E0930 21:28:48.395266     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731728394870070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342] <==
	I0930 21:08:03.609448       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0930 21:08:33.612813       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd] <==
	I0930 21:08:34.404003       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0930 21:08:34.412916       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0930 21:08:34.412979       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0930 21:08:51.810522       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0930 21:08:51.810832       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-291511_7484d4d4-6fb4-4e7f-b333-81f608b5f818!
	I0930 21:08:51.811532       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"205929d7-019f-4a3b-b8c3-1a0ccd9e6e0d", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-291511_7484d4d4-6fb4-4e7f-b333-81f608b5f818 became leader
	I0930 21:08:51.911877       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-291511_7484d4d4-6fb4-4e7f-b333-81f608b5f818!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-291511 -n default-k8s-diff-port-291511
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-291511 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-txb2j
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-291511 describe pod metrics-server-6867b74b74-txb2j
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-291511 describe pod metrics-server-6867b74b74-txb2j: exit status 1 (63.138843ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-txb2j" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-291511 describe pod metrics-server-6867b74b74-txb2j: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (438.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (359.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-256103 -n embed-certs-256103
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-30 21:28:45.394776281 +0000 UTC m=+6646.099532403
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-256103 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-256103 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.595µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-256103 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-256103 -n embed-certs-256103
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-256103 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-256103 logs -n 25: (1.170014923s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-741890 | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | disable-driver-mounts-741890                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 21:00 UTC |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-256103            | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-997816             | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-997816                                   | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-291511  | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-621406        | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-256103                 | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC | 30 Sep 24 21:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-997816                  | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-997816                                   | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC | 30 Sep 24 21:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-291511       | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:12 UTC |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-621406                              | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:03 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-621406             | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-621406                              | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-621406                              | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:27 UTC | 30 Sep 24 21:27 UTC |
	| start   | -p newest-cni-921796 --memory=2200 --alsologtostderr   | newest-cni-921796            | jenkins | v1.34.0 | 30 Sep 24 21:27 UTC | 30 Sep 24 21:28 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-921796             | newest-cni-921796            | jenkins | v1.34.0 | 30 Sep 24 21:28 UTC | 30 Sep 24 21:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-921796                                   | newest-cni-921796            | jenkins | v1.34.0 | 30 Sep 24 21:28 UTC | 30 Sep 24 21:28 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-997816                                   | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:28 UTC | 30 Sep 24 21:28 UTC |
	| addons  | enable dashboard -p newest-cni-921796                  | newest-cni-921796            | jenkins | v1.34.0 | 30 Sep 24 21:28 UTC | 30 Sep 24 21:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-921796 --memory=2200 --alsologtostderr   | newest-cni-921796            | jenkins | v1.34.0 | 30 Sep 24 21:28 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 21:28:29
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 21:28:29.558859   81355 out.go:345] Setting OutFile to fd 1 ...
	I0930 21:28:29.558975   81355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:28:29.558982   81355 out.go:358] Setting ErrFile to fd 2...
	I0930 21:28:29.558986   81355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:28:29.559151   81355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 21:28:29.559714   81355 out.go:352] Setting JSON to false
	I0930 21:28:29.560703   81355 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7853,"bootTime":1727723857,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 21:28:29.560798   81355 start.go:139] virtualization: kvm guest
	I0930 21:28:29.563045   81355 out.go:177] * [newest-cni-921796] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 21:28:29.564360   81355 notify.go:220] Checking for updates...
	I0930 21:28:29.564381   81355 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 21:28:29.565658   81355 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 21:28:29.566901   81355 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:28:29.568143   81355 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 21:28:29.569360   81355 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 21:28:29.570558   81355 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 21:28:29.572058   81355 config.go:182] Loaded profile config "newest-cni-921796": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:28:29.572472   81355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:28:29.572535   81355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:28:29.588218   81355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43021
	I0930 21:28:29.588631   81355 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:28:29.589174   81355 main.go:141] libmachine: Using API Version  1
	I0930 21:28:29.589195   81355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:28:29.589488   81355 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:28:29.589649   81355 main.go:141] libmachine: (newest-cni-921796) Calling .DriverName
	I0930 21:28:29.589849   81355 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 21:28:29.590124   81355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:28:29.590156   81355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:28:29.605458   81355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37965
	I0930 21:28:29.605868   81355 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:28:29.606277   81355 main.go:141] libmachine: Using API Version  1
	I0930 21:28:29.606300   81355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:28:29.606643   81355 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:28:29.606843   81355 main.go:141] libmachine: (newest-cni-921796) Calling .DriverName
	I0930 21:28:29.643495   81355 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 21:28:29.644653   81355 start.go:297] selected driver: kvm2
	I0930 21:28:29.644670   81355 start.go:901] validating driver "kvm2" against &{Name:newest-cni-921796 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-921796 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:28:29.644792   81355 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 21:28:29.645576   81355 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 21:28:29.645668   81355 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 21:28:29.660861   81355 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 21:28:29.661262   81355 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0930 21:28:29.661288   81355 cni.go:84] Creating CNI manager for ""
	I0930 21:28:29.661333   81355 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:28:29.661382   81355 start.go:340] cluster config:
	{Name:newest-cni-921796 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-921796 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:28:29.661510   81355 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 21:28:29.663389   81355 out.go:177] * Starting "newest-cni-921796" primary control-plane node in "newest-cni-921796" cluster
	I0930 21:28:29.664689   81355 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 21:28:29.664735   81355 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 21:28:29.664747   81355 cache.go:56] Caching tarball of preloaded images
	I0930 21:28:29.664881   81355 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 21:28:29.664902   81355 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 21:28:29.665032   81355 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/newest-cni-921796/config.json ...
	I0930 21:28:29.665254   81355 start.go:360] acquireMachinesLock for newest-cni-921796: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 21:28:29.665313   81355 start.go:364] duration metric: took 38.485µs to acquireMachinesLock for "newest-cni-921796"
	I0930 21:28:29.665338   81355 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:28:29.665344   81355 fix.go:54] fixHost starting: 
	I0930 21:28:29.665723   81355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:28:29.665777   81355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:28:29.681423   81355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45499
	I0930 21:28:29.681812   81355 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:28:29.682317   81355 main.go:141] libmachine: Using API Version  1
	I0930 21:28:29.682339   81355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:28:29.682671   81355 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:28:29.682875   81355 main.go:141] libmachine: (newest-cni-921796) Calling .DriverName
	I0930 21:28:29.683049   81355 main.go:141] libmachine: (newest-cni-921796) Calling .GetState
	I0930 21:28:29.684786   81355 fix.go:112] recreateIfNeeded on newest-cni-921796: state=Stopped err=<nil>
	I0930 21:28:29.684821   81355 main.go:141] libmachine: (newest-cni-921796) Calling .DriverName
	W0930 21:28:29.684966   81355 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:28:29.687015   81355 out.go:177] * Restarting existing kvm2 VM for "newest-cni-921796" ...
	I0930 21:28:29.688165   81355 main.go:141] libmachine: (newest-cni-921796) Calling .Start
	I0930 21:28:29.688328   81355 main.go:141] libmachine: (newest-cni-921796) Ensuring networks are active...
	I0930 21:28:29.689157   81355 main.go:141] libmachine: (newest-cni-921796) Ensuring network default is active
	I0930 21:28:29.689514   81355 main.go:141] libmachine: (newest-cni-921796) Ensuring network mk-newest-cni-921796 is active
	I0930 21:28:29.689981   81355 main.go:141] libmachine: (newest-cni-921796) Getting domain xml...
	I0930 21:28:29.690777   81355 main.go:141] libmachine: (newest-cni-921796) Creating domain...
	I0930 21:28:30.946144   81355 main.go:141] libmachine: (newest-cni-921796) Waiting to get IP...
	I0930 21:28:30.947109   81355 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:30.947584   81355 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:28:30.947674   81355 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:28:30.947561   81390 retry.go:31] will retry after 251.042431ms: waiting for machine to come up
	I0930 21:28:31.200216   81355 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:31.200819   81355 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:28:31.200848   81355 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:28:31.200769   81390 retry.go:31] will retry after 366.226522ms: waiting for machine to come up
	I0930 21:28:31.568201   81355 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:31.568704   81355 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:28:31.568731   81355 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:28:31.568657   81390 retry.go:31] will retry after 482.419405ms: waiting for machine to come up
	I0930 21:28:32.052302   81355 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:32.052679   81355 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:28:32.052713   81355 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:28:32.052645   81390 retry.go:31] will retry after 462.599845ms: waiting for machine to come up
	I0930 21:28:32.517500   81355 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:32.517976   81355 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:28:32.518003   81355 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:28:32.517939   81390 retry.go:31] will retry after 748.053277ms: waiting for machine to come up
	I0930 21:28:33.268010   81355 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:33.268444   81355 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:28:33.268473   81355 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:28:33.268392   81390 retry.go:31] will retry after 838.241896ms: waiting for machine to come up
	I0930 21:28:34.108458   81355 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:34.108907   81355 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:28:34.108932   81355 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:28:34.108882   81390 retry.go:31] will retry after 956.573985ms: waiting for machine to come up
	I0930 21:28:35.066602   81355 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:35.067104   81355 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:28:35.067130   81355 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:28:35.067037   81390 retry.go:31] will retry after 1.244540008s: waiting for machine to come up
	I0930 21:28:36.312866   81355 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:36.313305   81355 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:28:36.313332   81355 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:28:36.313245   81390 retry.go:31] will retry after 1.627867866s: waiting for machine to come up
	I0930 21:28:37.942984   81355 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:37.943436   81355 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:28:37.943459   81355 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:28:37.943398   81390 retry.go:31] will retry after 1.771233504s: waiting for machine to come up
	I0930 21:28:39.716783   81355 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:39.717345   81355 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:28:39.717371   81355 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:28:39.717289   81390 retry.go:31] will retry after 2.112399171s: waiting for machine to come up
	I0930 21:28:41.832273   81355 main.go:141] libmachine: (newest-cni-921796) DBG | domain newest-cni-921796 has defined MAC address 52:54:00:c8:f4:16 in network mk-newest-cni-921796
	I0930 21:28:41.832745   81355 main.go:141] libmachine: (newest-cni-921796) DBG | unable to find current IP address of domain newest-cni-921796 in network mk-newest-cni-921796
	I0930 21:28:41.832766   81355 main.go:141] libmachine: (newest-cni-921796) DBG | I0930 21:28:41.832712   81390 retry.go:31] will retry after 2.972790213s: waiting for machine to come up
	
	
	==> CRI-O <==
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.025697285Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b7442d92-a484-4350-a868-8aa2166b0686 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.026776720Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8d8121a-739c-48eb-842d-067a2b0e12ea name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.027223501Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731726027200618,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8d8121a-739c-48eb-842d-067a2b0e12ea name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.027707346Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16afbee5-26c6-4163-8c03-728b9fc220df name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.027776208Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16afbee5-26c6-4163-8c03-728b9fc220df name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.028023568Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d60ed05d46e64cca1db86f00ccacb45d5a95bb26b27d30f7aca439b8cc1cf701,PodSandboxId:d9540a05389856c5ab80763ded59faa352e8d4ff1a56f9942d299d7d9a60b1c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727730815045416311,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a07a5a12-7420-4b57-b79d-982f4bb48232,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd980ef64f5ee55e937c5a15c5227d17d60838f77fa47ac594729f27a9fd8d7,PodSandboxId:9dda41bfa3440fa3236f74a67cf60d09f954cf82d0411255da69f2d0ed0fda2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730814497122204,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gt5tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165faaf0-866c-4097-9bdb-ed58fe8d7395,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:230b9e029d92388fe72b759827e782e4da254c9ace35ca3d3e86be33515cc837,PodSandboxId:17ab0462720101799c02aa044ce3ba13798e980661c2333061d221355749afeb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730814424548516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sgsbn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
97fdb50-c6a0-4ef8-8c01-ea45ed18b72a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b79ac99620cffe88eed23aa8ba0c4f0efba98458aa23a19a8def96edb1a7631f,PodSandboxId:8104984489a3da34604fa4aed4c224abe1ee3d1b218ba5ce5367b3352fbc7b52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727730813952552871,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-glbsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f68e378f-ce0f-4603-bd8e-93334f04f7a7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499a029ecee201160037c5b7802545475ebf57529e8e9145d39aab98a685b790,PodSandboxId:1064ddbe5f838121ecf09f4533a68bd2e9fe23ddd8e1f6e8f50f2c158a18dd5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727730803002119690
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60cb914f0d7e2bbaf31e86346736a6dd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0566d21c749204134a258e8d8ac79e812d7fedb46e3c443b4403df983b45074e,PodSandboxId:6f4729ac569b3abc1e02350ad9d2c41ce5359cbeb2774c905243e1ed0d277402,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727730802979
265963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 405f938f252475a964680a5d44e32173,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e37fa68f2a1951969df50ca55fe27f8a723f04cebab7a4758236d5733c0760cf,PodSandboxId:f0ad3931b0ae76b62980f7e56571ac517f34d9d5b713ab6942a306b61c3a26d7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727730802943041907,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e026db1de1b360d400383807119e0f42,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47e92ecb8a0c3aac853211a7abd5c609e2bb75bd75908851c0c3713a3b66f3d0,PodSandboxId:93f9864dd86bff6d1c24e45c20a6ad995151ba9050eb36db50b15a6f7536fff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727730802901379791,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66878a53ff8e421affd026377e49581a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c849c74a929594dad8efc1ce428cad3f9973013c4d91759cdfce50a0da6b92,PodSandboxId:e648124d4d705c3ed22d1e53880b27aa172b6d6f3b701aaf40d04875aad07cbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727730519535964178,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e026db1de1b360d400383807119e0f42,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16afbee5-26c6-4163-8c03-728b9fc220df name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.064868447Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a1cf657e-3a2a-49cc-b8fc-6d6a4f670245 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.064952855Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a1cf657e-3a2a-49cc-b8fc-6d6a4f670245 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.066237498Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b2c611d-e709-49f7-a74b-f92a693badd3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.066623885Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731726066600114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b2c611d-e709-49f7-a74b-f92a693badd3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.067249928Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d3fa8dfa-8b6a-4b04-af84-296b6d8b2ed0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.067320834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d3fa8dfa-8b6a-4b04-af84-296b6d8b2ed0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.067543502Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d60ed05d46e64cca1db86f00ccacb45d5a95bb26b27d30f7aca439b8cc1cf701,PodSandboxId:d9540a05389856c5ab80763ded59faa352e8d4ff1a56f9942d299d7d9a60b1c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727730815045416311,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a07a5a12-7420-4b57-b79d-982f4bb48232,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd980ef64f5ee55e937c5a15c5227d17d60838f77fa47ac594729f27a9fd8d7,PodSandboxId:9dda41bfa3440fa3236f74a67cf60d09f954cf82d0411255da69f2d0ed0fda2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730814497122204,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gt5tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165faaf0-866c-4097-9bdb-ed58fe8d7395,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:230b9e029d92388fe72b759827e782e4da254c9ace35ca3d3e86be33515cc837,PodSandboxId:17ab0462720101799c02aa044ce3ba13798e980661c2333061d221355749afeb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730814424548516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sgsbn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
97fdb50-c6a0-4ef8-8c01-ea45ed18b72a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b79ac99620cffe88eed23aa8ba0c4f0efba98458aa23a19a8def96edb1a7631f,PodSandboxId:8104984489a3da34604fa4aed4c224abe1ee3d1b218ba5ce5367b3352fbc7b52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727730813952552871,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-glbsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f68e378f-ce0f-4603-bd8e-93334f04f7a7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499a029ecee201160037c5b7802545475ebf57529e8e9145d39aab98a685b790,PodSandboxId:1064ddbe5f838121ecf09f4533a68bd2e9fe23ddd8e1f6e8f50f2c158a18dd5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727730803002119690
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60cb914f0d7e2bbaf31e86346736a6dd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0566d21c749204134a258e8d8ac79e812d7fedb46e3c443b4403df983b45074e,PodSandboxId:6f4729ac569b3abc1e02350ad9d2c41ce5359cbeb2774c905243e1ed0d277402,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727730802979
265963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 405f938f252475a964680a5d44e32173,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e37fa68f2a1951969df50ca55fe27f8a723f04cebab7a4758236d5733c0760cf,PodSandboxId:f0ad3931b0ae76b62980f7e56571ac517f34d9d5b713ab6942a306b61c3a26d7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727730802943041907,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e026db1de1b360d400383807119e0f42,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47e92ecb8a0c3aac853211a7abd5c609e2bb75bd75908851c0c3713a3b66f3d0,PodSandboxId:93f9864dd86bff6d1c24e45c20a6ad995151ba9050eb36db50b15a6f7536fff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727730802901379791,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66878a53ff8e421affd026377e49581a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c849c74a929594dad8efc1ce428cad3f9973013c4d91759cdfce50a0da6b92,PodSandboxId:e648124d4d705c3ed22d1e53880b27aa172b6d6f3b701aaf40d04875aad07cbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727730519535964178,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e026db1de1b360d400383807119e0f42,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d3fa8dfa-8b6a-4b04-af84-296b6d8b2ed0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.099445592Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c82212a-a23a-4fee-926b-5ad2319e5540 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.099681888Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:5e43a2b073e8e71a41629c76bb48e26530faecb4599fd944b8ef02bbb9c3f0dd,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-5mhkh,Uid:470424ec-bb66-4d62-904d-0d4ad93fa5bf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727730815078889660,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-5mhkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 470424ec-bb66-4d62-904d-0d4ad93fa5bf,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T21:13:34.764771615Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d9540a05389856c5ab80763ded59faa352e8d4ff1a56f9942d299d7d9a60b1c7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a07a5a12-7420-4b57-b79d-982f4bb48232,N
amespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727730814856443063,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a07a5a12-7420-4b57-b79d-982f4bb48232,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"vol
umes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-30T21:13:34.548297301Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9dda41bfa3440fa3236f74a67cf60d09f954cf82d0411255da69f2d0ed0fda2a,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-gt5tt,Uid:165faaf0-866c-4097-9bdb-ed58fe8d7395,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727730813792576042,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-gt5tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165faaf0-866c-4097-9bdb-ed58fe8d7395,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T21:13:33.475877271Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:17ab0462720101799c02aa044ce3ba13798e980661c2333061d221355749afeb,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-sgsbn,Uid:c97fdb50-c6a0-4ef8
-8c01-ea45ed18b72a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727730813755761606,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-sgsbn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c97fdb50-c6a0-4ef8-8c01-ea45ed18b72a,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T21:13:33.443302493Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8104984489a3da34604fa4aed4c224abe1ee3d1b218ba5ce5367b3352fbc7b52,Metadata:&PodSandboxMetadata{Name:kube-proxy-glbsg,Uid:f68e378f-ce0f-4603-bd8e-93334f04f7a7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727730813688896238,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-glbsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f68e378f-ce0f-4603-bd8e-93334f04f7a7,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T21:13:33.375320858Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6f4729ac569b3abc1e02350ad9d2c41ce5359cbeb2774c905243e1ed0d277402,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-256103,Uid:405f938f252475a964680a5d44e32173,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727730802768784689,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 405f938f252475a964680a5d44e32173,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 405f938f252475a964680a5d44e32173,kubernetes.io/config.seen: 2024-09-30T21:13:22.303739001Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1064ddbe5f838121ecf09f4533a68bd2e9fe23ddd8e1f6e8f50f2c158a18dd5c,Metadata:&PodSandboxMetadata{Name:kube-controlle
r-manager-embed-certs-256103,Uid:60cb914f0d7e2bbaf31e86346736a6dd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727730802751190774,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60cb914f0d7e2bbaf31e86346736a6dd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 60cb914f0d7e2bbaf31e86346736a6dd,kubernetes.io/config.seen: 2024-09-30T21:13:22.303737986Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f0ad3931b0ae76b62980f7e56571ac517f34d9d5b713ab6942a306b61c3a26d7,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-256103,Uid:e026db1de1b360d400383807119e0f42,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727730802746983986,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver
-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e026db1de1b360d400383807119e0f42,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.90:8443,kubernetes.io/config.hash: e026db1de1b360d400383807119e0f42,kubernetes.io/config.seen: 2024-09-30T21:13:22.303736753Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:93f9864dd86bff6d1c24e45c20a6ad995151ba9050eb36db50b15a6f7536fff2,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-256103,Uid:66878a53ff8e421affd026377e49581a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727730802743777569,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66878a53ff8e421affd026377e49581a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39
.90:2379,kubernetes.io/config.hash: 66878a53ff8e421affd026377e49581a,kubernetes.io/config.seen: 2024-09-30T21:13:22.303733089Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=0c82212a-a23a-4fee-926b-5ad2319e5540 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.100429470Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e83c9b2e-6f11-40f6-8b67-7eae069bb2a2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.100489253Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e83c9b2e-6f11-40f6-8b67-7eae069bb2a2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.100672155Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d60ed05d46e64cca1db86f00ccacb45d5a95bb26b27d30f7aca439b8cc1cf701,PodSandboxId:d9540a05389856c5ab80763ded59faa352e8d4ff1a56f9942d299d7d9a60b1c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727730815045416311,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a07a5a12-7420-4b57-b79d-982f4bb48232,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd980ef64f5ee55e937c5a15c5227d17d60838f77fa47ac594729f27a9fd8d7,PodSandboxId:9dda41bfa3440fa3236f74a67cf60d09f954cf82d0411255da69f2d0ed0fda2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730814497122204,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gt5tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165faaf0-866c-4097-9bdb-ed58fe8d7395,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:230b9e029d92388fe72b759827e782e4da254c9ace35ca3d3e86be33515cc837,PodSandboxId:17ab0462720101799c02aa044ce3ba13798e980661c2333061d221355749afeb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730814424548516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sgsbn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
97fdb50-c6a0-4ef8-8c01-ea45ed18b72a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b79ac99620cffe88eed23aa8ba0c4f0efba98458aa23a19a8def96edb1a7631f,PodSandboxId:8104984489a3da34604fa4aed4c224abe1ee3d1b218ba5ce5367b3352fbc7b52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727730813952552871,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-glbsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f68e378f-ce0f-4603-bd8e-93334f04f7a7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499a029ecee201160037c5b7802545475ebf57529e8e9145d39aab98a685b790,PodSandboxId:1064ddbe5f838121ecf09f4533a68bd2e9fe23ddd8e1f6e8f50f2c158a18dd5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727730803002119690
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60cb914f0d7e2bbaf31e86346736a6dd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0566d21c749204134a258e8d8ac79e812d7fedb46e3c443b4403df983b45074e,PodSandboxId:6f4729ac569b3abc1e02350ad9d2c41ce5359cbeb2774c905243e1ed0d277402,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727730802979
265963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 405f938f252475a964680a5d44e32173,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e37fa68f2a1951969df50ca55fe27f8a723f04cebab7a4758236d5733c0760cf,PodSandboxId:f0ad3931b0ae76b62980f7e56571ac517f34d9d5b713ab6942a306b61c3a26d7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727730802943041907,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e026db1de1b360d400383807119e0f42,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47e92ecb8a0c3aac853211a7abd5c609e2bb75bd75908851c0c3713a3b66f3d0,PodSandboxId:93f9864dd86bff6d1c24e45c20a6ad995151ba9050eb36db50b15a6f7536fff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727730802901379791,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66878a53ff8e421affd026377e49581a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e83c9b2e-6f11-40f6-8b67-7eae069bb2a2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.105867486Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=901f58c1-9e19-416a-9325-43bbad3b8b86 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.105933906Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=901f58c1-9e19-416a-9325-43bbad3b8b86 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.107098288Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=54fb3fd0-8ff4-4669-ba65-307accd9bcc1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.107472616Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731726107453085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54fb3fd0-8ff4-4669-ba65-307accd9bcc1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.108036412Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b5d098c-100e-45e4-9ae5-14033c324242 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.108096239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b5d098c-100e-45e4-9ae5-14033c324242 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:28:46 embed-certs-256103 crio[700]: time="2024-09-30 21:28:46.108273722Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d60ed05d46e64cca1db86f00ccacb45d5a95bb26b27d30f7aca439b8cc1cf701,PodSandboxId:d9540a05389856c5ab80763ded59faa352e8d4ff1a56f9942d299d7d9a60b1c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727730815045416311,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a07a5a12-7420-4b57-b79d-982f4bb48232,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd980ef64f5ee55e937c5a15c5227d17d60838f77fa47ac594729f27a9fd8d7,PodSandboxId:9dda41bfa3440fa3236f74a67cf60d09f954cf82d0411255da69f2d0ed0fda2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730814497122204,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gt5tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165faaf0-866c-4097-9bdb-ed58fe8d7395,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:230b9e029d92388fe72b759827e782e4da254c9ace35ca3d3e86be33515cc837,PodSandboxId:17ab0462720101799c02aa044ce3ba13798e980661c2333061d221355749afeb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727730814424548516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sgsbn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
97fdb50-c6a0-4ef8-8c01-ea45ed18b72a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b79ac99620cffe88eed23aa8ba0c4f0efba98458aa23a19a8def96edb1a7631f,PodSandboxId:8104984489a3da34604fa4aed4c224abe1ee3d1b218ba5ce5367b3352fbc7b52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727730813952552871,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-glbsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f68e378f-ce0f-4603-bd8e-93334f04f7a7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499a029ecee201160037c5b7802545475ebf57529e8e9145d39aab98a685b790,PodSandboxId:1064ddbe5f838121ecf09f4533a68bd2e9fe23ddd8e1f6e8f50f2c158a18dd5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727730803002119690
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60cb914f0d7e2bbaf31e86346736a6dd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0566d21c749204134a258e8d8ac79e812d7fedb46e3c443b4403df983b45074e,PodSandboxId:6f4729ac569b3abc1e02350ad9d2c41ce5359cbeb2774c905243e1ed0d277402,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727730802979
265963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 405f938f252475a964680a5d44e32173,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e37fa68f2a1951969df50ca55fe27f8a723f04cebab7a4758236d5733c0760cf,PodSandboxId:f0ad3931b0ae76b62980f7e56571ac517f34d9d5b713ab6942a306b61c3a26d7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727730802943041907,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e026db1de1b360d400383807119e0f42,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47e92ecb8a0c3aac853211a7abd5c609e2bb75bd75908851c0c3713a3b66f3d0,PodSandboxId:93f9864dd86bff6d1c24e45c20a6ad995151ba9050eb36db50b15a6f7536fff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727730802901379791,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66878a53ff8e421affd026377e49581a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c849c74a929594dad8efc1ce428cad3f9973013c4d91759cdfce50a0da6b92,PodSandboxId:e648124d4d705c3ed22d1e53880b27aa172b6d6f3b701aaf40d04875aad07cbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727730519535964178,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-256103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e026db1de1b360d400383807119e0f42,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b5d098c-100e-45e4-9ae5-14033c324242 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d60ed05d46e64       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   d9540a0538985       storage-provisioner
	4bd980ef64f5e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   9dda41bfa3440       coredns-7c65d6cfc9-gt5tt
	230b9e029d923       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   17ab046272010       coredns-7c65d6cfc9-sgsbn
	b79ac99620cff       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   15 minutes ago      Running             kube-proxy                0                   8104984489a3d       kube-proxy-glbsg
	499a029ecee20       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   15 minutes ago      Running             kube-controller-manager   2                   1064ddbe5f838       kube-controller-manager-embed-certs-256103
	0566d21c74920       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   15 minutes ago      Running             kube-scheduler            2                   6f4729ac569b3       kube-scheduler-embed-certs-256103
	e37fa68f2a195       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   15 minutes ago      Running             kube-apiserver            2                   f0ad3931b0ae7       kube-apiserver-embed-certs-256103
	47e92ecb8a0c3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   93f9864dd86bf       etcd-embed-certs-256103
	c7c849c74a929       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   20 minutes ago      Exited              kube-apiserver            1                   e648124d4d705       kube-apiserver-embed-certs-256103
	
	
	==> coredns [230b9e029d92388fe72b759827e782e4da254c9ace35ca3d3e86be33515cc837] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [4bd980ef64f5ee55e937c5a15c5227d17d60838f77fa47ac594729f27a9fd8d7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-256103
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-256103
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022
	                    minikube.k8s.io/name=embed-certs-256103
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T21_13_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 21:13:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-256103
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 21:28:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 21:23:51 +0000   Mon, 30 Sep 2024 21:13:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 21:23:51 +0000   Mon, 30 Sep 2024 21:13:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 21:23:51 +0000   Mon, 30 Sep 2024 21:13:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 21:23:51 +0000   Mon, 30 Sep 2024 21:13:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.90
	  Hostname:    embed-certs-256103
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 069094f552e54029b7b56481eecb511b
	  System UUID:                069094f5-52e5-4029-b7b5-6481eecb511b
	  Boot ID:                    6b70f5e5-835e-4ab7-b9c6-cdf339ee44dc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-gt5tt                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7c65d6cfc9-sgsbn                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-256103                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-256103             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-256103    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-glbsg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-256103             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-6867b74b74-5mhkh               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-256103 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node embed-certs-256103 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node embed-certs-256103 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node embed-certs-256103 event: Registered Node embed-certs-256103 in Controller
	
	
	==> dmesg <==
	[  +0.053525] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042751] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.143984] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.968643] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.569015] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.123461] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.068083] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054663] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.196500] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.111848] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.269488] systemd-fstab-generator[692]: Ignoring "noauto" option for root device
	[  +4.058827] systemd-fstab-generator[783]: Ignoring "noauto" option for root device
	[  +1.866348] systemd-fstab-generator[903]: Ignoring "noauto" option for root device
	[  +0.080396] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.531169] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.299145] kauditd_printk_skb: 85 callbacks suppressed
	[Sep30 21:13] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.179514] systemd-fstab-generator[2556]: Ignoring "noauto" option for root device
	[  +4.574300] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.474913] systemd-fstab-generator[2879]: Ignoring "noauto" option for root device
	[  +5.363641] systemd-fstab-generator[2989]: Ignoring "noauto" option for root device
	[  +0.116529] kauditd_printk_skb: 14 callbacks suppressed
	[  +9.311974] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [47e92ecb8a0c3aac853211a7abd5c609e2bb75bd75908851c0c3713a3b66f3d0] <==
	{"level":"info","ts":"2024-09-30T21:13:23.780197Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8d381aaacda0b9bd","local-member-attributes":"{Name:embed-certs-256103 ClientURLs:[https://192.168.39.90:2379]}","request-path":"/0/members/8d381aaacda0b9bd/attributes","cluster-id":"8cf3a1558a63fa9e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T21:13:23.780282Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T21:13:23.780373Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T21:13:23.783390Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T21:13:23.784883Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T21:13:23.799381Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T21:13:23.799419Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-30T21:13:23.799514Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8cf3a1558a63fa9e","local-member-id":"8d381aaacda0b9bd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T21:13:23.802981Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T21:13:23.803760Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.90:2379"}
	{"level":"info","ts":"2024-09-30T21:13:23.805349Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T21:13:23.805614Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T21:13:23.816610Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-30T21:23:23.907377Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":690}
	{"level":"info","ts":"2024-09-30T21:23:23.918300Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":690,"took":"10.516793ms","hash":548279665,"current-db-size-bytes":2379776,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2379776,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-09-30T21:23:23.918373Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":548279665,"revision":690,"compact-revision":-1}
	{"level":"warn","ts":"2024-09-30T21:28:03.174674Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.789399ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T21:28:03.174937Z","caller":"traceutil/trace.go:171","msg":"trace[1524377266] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1158; }","duration":"190.110009ms","start":"2024-09-30T21:28:02.984794Z","end":"2024-09-30T21:28:03.174904Z","steps":["trace[1524377266] 'range keys from in-memory index tree'  (duration: 189.774585ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T21:28:03.175256Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.665686ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13384014491723889493 > lease_revoke:<id:39bd9244c7e1faf5>","response":"size:29"}
	{"level":"info","ts":"2024-09-30T21:28:03.970036Z","caller":"traceutil/trace.go:171","msg":"trace[1992068484] transaction","detail":"{read_only:false; response_revision:1159; number_of_response:1; }","duration":"276.406584ms","start":"2024-09-30T21:28:03.693607Z","end":"2024-09-30T21:28:03.970014Z","steps":["trace[1992068484] 'process raft request'  (duration: 276.296738ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T21:28:04.128291Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.911195ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T21:28:04.128444Z","caller":"traceutil/trace.go:171","msg":"trace[367439867] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1159; }","duration":"143.112739ms","start":"2024-09-30T21:28:03.985317Z","end":"2024-09-30T21:28:04.128430Z","steps":["trace[367439867] 'range keys from in-memory index tree'  (duration: 142.896218ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T21:28:23.914928Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":932}
	{"level":"info","ts":"2024-09-30T21:28:23.919328Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":932,"took":"3.561811ms","hash":1019717444,"current-db-size-bytes":2379776,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1605632,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-30T21:28:23.919482Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1019717444,"revision":932,"compact-revision":690}
	
	
	==> kernel <==
	 21:28:46 up 20 min,  0 users,  load average: 0.19, 0.22, 0.18
	Linux embed-certs-256103 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c7c849c74a929594dad8efc1ce428cad3f9973013c4d91759cdfce50a0da6b92] <==
	W0930 21:13:19.481084       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.542030       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.571452       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.574854       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.626591       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.741079       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.745515       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.775677       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.775990       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.785400       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.799269       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.811074       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.849230       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.854691       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:19.889900       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:20.039620       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:20.099688       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:20.103199       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:20.112709       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:20.211231       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:20.307726       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:20.330453       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:20.390497       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:20.481076       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 21:13:20.489947       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e37fa68f2a1951969df50ca55fe27f8a723f04cebab7a4758236d5733c0760cf] <==
	I0930 21:24:26.528626       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0930 21:24:26.528680       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0930 21:26:26.529548       1 handler_proxy.go:99] no RequestInfo found in the context
	W0930 21:26:26.529734       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:26:26.529947       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0930 21:26:26.530001       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0930 21:26:26.532085       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0930 21:26:26.532166       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0930 21:28:25.530527       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:28:25.530935       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0930 21:28:26.533111       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:28:26.533244       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0930 21:28:26.533111       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 21:28:26.533397       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0930 21:28:26.534537       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0930 21:28:26.534582       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [499a029ecee201160037c5b7802545475ebf57529e8e9145d39aab98a685b790] <==
	E0930 21:23:32.649752       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:23:33.117366       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0930 21:23:51.374522       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-256103"
	E0930 21:24:02.657551       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:24:03.125032       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:24:32.664219       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:24:33.133539       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0930 21:24:45.116536       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="186.029µs"
	I0930 21:25:00.116833       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="68.042µs"
	E0930 21:25:02.670304       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:25:03.141034       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:25:32.677250       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:25:33.148270       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:26:02.684210       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:26:03.157360       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:26:32.691133       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:26:33.166486       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:27:02.697328       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:27:03.174787       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:27:32.704562       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:27:33.185069       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:28:02.711767       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:28:03.195912       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0930 21:28:32.717748       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0930 21:28:33.205373       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [b79ac99620cffe88eed23aa8ba0c4f0efba98458aa23a19a8def96edb1a7631f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 21:13:34.692949       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 21:13:34.739604       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.90"]
	E0930 21:13:34.739712       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 21:13:34.964541       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 21:13:34.964597       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 21:13:34.964626       1 server_linux.go:169] "Using iptables Proxier"
	I0930 21:13:34.969048       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 21:13:34.969368       1 server.go:483] "Version info" version="v1.31.1"
	I0930 21:13:34.969410       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 21:13:34.971670       1 config.go:199] "Starting service config controller"
	I0930 21:13:34.971757       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 21:13:34.971856       1 config.go:105] "Starting endpoint slice config controller"
	I0930 21:13:34.971879       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 21:13:34.972423       1 config.go:328] "Starting node config controller"
	I0930 21:13:34.975846       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 21:13:35.072454       1 shared_informer.go:320] Caches are synced for service config
	I0930 21:13:35.072550       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 21:13:35.076301       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0566d21c749204134a258e8d8ac79e812d7fedb46e3c443b4403df983b45074e] <==
	W0930 21:13:25.628460       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0930 21:13:25.631122       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 21:13:25.633862       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0930 21:13:25.633943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0930 21:13:25.636114       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 21:13:25.636211       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0930 21:13:26.539515       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0930 21:13:26.539715       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 21:13:26.556968       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0930 21:13:26.557013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 21:13:26.580495       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0930 21:13:26.581015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 21:13:26.606968       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0930 21:13:26.607019       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 21:13:26.621661       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0930 21:13:26.621734       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 21:13:26.690633       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0930 21:13:26.690677       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 21:13:26.806531       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0930 21:13:26.806601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 21:13:26.889148       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0930 21:13:26.889204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 21:13:26.912912       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0930 21:13:26.912957       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0930 21:13:27.325934       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 21:27:32 embed-certs-256103 kubelet[2886]: E0930 21:27:32.100150    2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5mhkh" podUID="470424ec-bb66-4d62-904d-0d4ad93fa5bf"
	Sep 30 21:27:38 embed-certs-256103 kubelet[2886]: E0930 21:27:38.335989    2886 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731658335615617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:27:38 embed-certs-256103 kubelet[2886]: E0930 21:27:38.336060    2886 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731658335615617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:27:44 embed-certs-256103 kubelet[2886]: E0930 21:27:44.100773    2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5mhkh" podUID="470424ec-bb66-4d62-904d-0d4ad93fa5bf"
	Sep 30 21:27:48 embed-certs-256103 kubelet[2886]: E0930 21:27:48.338772    2886 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731668337993041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:27:48 embed-certs-256103 kubelet[2886]: E0930 21:27:48.338876    2886 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731668337993041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:27:57 embed-certs-256103 kubelet[2886]: E0930 21:27:57.101081    2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5mhkh" podUID="470424ec-bb66-4d62-904d-0d4ad93fa5bf"
	Sep 30 21:27:58 embed-certs-256103 kubelet[2886]: E0930 21:27:58.341482    2886 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731678340605069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:27:58 embed-certs-256103 kubelet[2886]: E0930 21:27:58.341910    2886 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731678340605069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:28:08 embed-certs-256103 kubelet[2886]: E0930 21:28:08.101585    2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5mhkh" podUID="470424ec-bb66-4d62-904d-0d4ad93fa5bf"
	Sep 30 21:28:08 embed-certs-256103 kubelet[2886]: E0930 21:28:08.343553    2886 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731688342940549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:28:08 embed-certs-256103 kubelet[2886]: E0930 21:28:08.343665    2886 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731688342940549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:28:18 embed-certs-256103 kubelet[2886]: E0930 21:28:18.345752    2886 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731698345474786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:28:18 embed-certs-256103 kubelet[2886]: E0930 21:28:18.345777    2886 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731698345474786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:28:23 embed-certs-256103 kubelet[2886]: E0930 21:28:23.100475    2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5mhkh" podUID="470424ec-bb66-4d62-904d-0d4ad93fa5bf"
	Sep 30 21:28:28 embed-certs-256103 kubelet[2886]: E0930 21:28:28.123279    2886 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 21:28:28 embed-certs-256103 kubelet[2886]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 21:28:28 embed-certs-256103 kubelet[2886]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 21:28:28 embed-certs-256103 kubelet[2886]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 21:28:28 embed-certs-256103 kubelet[2886]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 21:28:28 embed-certs-256103 kubelet[2886]: E0930 21:28:28.347550    2886 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731708347292124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:28:28 embed-certs-256103 kubelet[2886]: E0930 21:28:28.347685    2886 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731708347292124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:28:37 embed-certs-256103 kubelet[2886]: E0930 21:28:37.100697    2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5mhkh" podUID="470424ec-bb66-4d62-904d-0d4ad93fa5bf"
	Sep 30 21:28:38 embed-certs-256103 kubelet[2886]: E0930 21:28:38.352595    2886 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731718349015018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 21:28:38 embed-certs-256103 kubelet[2886]: E0930 21:28:38.353346    2886 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731718349015018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [d60ed05d46e64cca1db86f00ccacb45d5a95bb26b27d30f7aca439b8cc1cf701] <==
	I0930 21:13:35.169001       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0930 21:13:35.203240       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0930 21:13:35.203310       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0930 21:13:35.239188       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0930 21:13:35.239352       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-256103_8a7d20c6-199a-4fca-a63b-d33200502e8e!
	I0930 21:13:35.244638       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"94d1c1b3-3132-464e-ae13-9d6b20a67810", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-256103_8a7d20c6-199a-4fca-a63b-d33200502e8e became leader
	I0930 21:13:35.339911       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-256103_8a7d20c6-199a-4fca-a63b-d33200502e8e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-256103 -n embed-certs-256103
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-256103 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-5mhkh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-256103 describe pod metrics-server-6867b74b74-5mhkh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-256103 describe pod metrics-server-6867b74b74-5mhkh: exit status 1 (66.285888ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-5mhkh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-256103 describe pod metrics-server-6867b74b74-5mhkh: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (359.94s)
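Given the kubelet and kube-apiserver logs above (metrics-server stuck in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4, and v1beta1.metrics.k8s.io repeatedly answering 503), a minimal manual-triage sketch might look like the commands below. This is illustrative only: it assumes the embed-certs-256103 context is still reachable, that the addon Deployment is named metrics-server, and that its pods carry the upstream k8s-app=metrics-server label.

	# Check whether the metrics-server pod ever becomes Ready or stays in ImagePullBackOff
	kubectl --context embed-certs-256103 -n kube-system get pods -l k8s-app=metrics-server -o wide

	# Inspect the Deployment's image reference and rollout conditions
	kubectl --context embed-certs-256103 -n kube-system describe deployment metrics-server

	# Confirm whether the aggregated metrics API ever turns Available (the apiserver log shows it serving 503s)
	kubectl --context embed-certs-256103 get apiservice v1beta1.metrics.k8s.io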

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (127.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
E0930 21:25:52.286340   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kindnet-207733/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
E0930 21:25:55.311110   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
E0930 21:26:32.007118   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
E0930 21:26:34.590590   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/calico-207733/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
E0930 21:26:55.759692   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/custom-flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.159:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-621406 -n old-k8s-version-621406
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-621406 -n old-k8s-version-621406: exit status 2 (219.013773ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-621406" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-621406 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-621406 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.107µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-621406 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
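For manual follow-up, the check that timed out above is just a label-selector pod list against the profile's kubeconfig context; a minimal equivalent command, assuming the apiserver on 192.168.72.159:8443 becomes reachable again, would be:

	kubectl --context old-k8s-version-621406 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

Here -l matches the k8s-app=kubernetes-dashboard selector the test helper polls, and the context name is the minikube profile used throughout this test.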
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-621406 -n old-k8s-version-621406
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-621406 -n old-k8s-version-621406: exit status 2 (219.062374ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-621406 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-621406 logs -n 25: (1.618299523s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-207733 sudo                                 | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo                                 | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo                                 | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo find                            | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-207733 sudo crio                            | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-207733                                      | flannel-207733               | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-741890 | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 20:58 UTC |
	|         | disable-driver-mounts-741890                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 20:58 UTC | 30 Sep 24 21:00 UTC |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-256103            | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-997816             | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-997816                                   | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-291511  | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC | 30 Sep 24 21:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:00 UTC |                     |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-621406        | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-256103                 | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-256103                                  | embed-certs-256103           | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC | 30 Sep 24 21:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-997816                  | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-997816                                   | no-preload-997816            | jenkins | v1.34.0 | 30 Sep 24 21:02 UTC | 30 Sep 24 21:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-291511       | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-291511 | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:12 UTC |
	|         | default-k8s-diff-port-291511                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-621406                              | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:03 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-621406             | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC | 30 Sep 24 21:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-621406                              | old-k8s-version-621406       | jenkins | v1.34.0 | 30 Sep 24 21:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 21:03:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 21:03:42.750102   73900 out.go:345] Setting OutFile to fd 1 ...
	I0930 21:03:42.750367   73900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:03:42.750377   73900 out.go:358] Setting ErrFile to fd 2...
	I0930 21:03:42.750383   73900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 21:03:42.750578   73900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 21:03:42.751109   73900 out.go:352] Setting JSON to false
	I0930 21:03:42.752040   73900 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6366,"bootTime":1727723857,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 21:03:42.752140   73900 start.go:139] virtualization: kvm guest
	I0930 21:03:42.754146   73900 out.go:177] * [old-k8s-version-621406] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 21:03:42.755446   73900 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 21:03:42.755456   73900 notify.go:220] Checking for updates...
	I0930 21:03:42.758261   73900 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 21:03:42.759566   73900 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:03:42.760907   73900 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 21:03:42.762342   73900 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 21:03:42.763561   73900 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 21:03:42.765356   73900 config.go:182] Loaded profile config "old-k8s-version-621406": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0930 21:03:42.765773   73900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:03:42.765822   73900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:03:42.780605   73900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45071
	I0930 21:03:42.781022   73900 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:03:42.781550   73900 main.go:141] libmachine: Using API Version  1
	I0930 21:03:42.781583   73900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:03:42.781912   73900 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:03:42.782160   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:03:42.784603   73900 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0930 21:03:42.785760   73900 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 21:03:42.786115   73900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:03:42.786156   73900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:03:42.800937   73900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37359
	I0930 21:03:42.801409   73900 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:03:42.801882   73900 main.go:141] libmachine: Using API Version  1
	I0930 21:03:42.801905   73900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:03:42.802216   73900 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:03:42.802397   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:03:42.838423   73900 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 21:03:42.839832   73900 start.go:297] selected driver: kvm2
	I0930 21:03:42.839847   73900 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:03:42.839953   73900 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 21:03:42.840605   73900 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 21:03:42.840667   73900 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 21:03:42.856119   73900 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 21:03:42.856550   73900 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:03:42.856580   73900 cni.go:84] Creating CNI manager for ""
	I0930 21:03:42.856630   73900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:03:42.856665   73900 start.go:340] cluster config:
	{Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:03:42.856778   73900 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 21:03:42.858732   73900 out.go:177] * Starting "old-k8s-version-621406" primary control-plane node in "old-k8s-version-621406" cluster
	I0930 21:03:42.859876   73900 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 21:03:42.859912   73900 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0930 21:03:42.859929   73900 cache.go:56] Caching tarball of preloaded images
	I0930 21:03:42.860020   73900 preload.go:172] Found /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 21:03:42.860031   73900 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0930 21:03:42.860153   73900 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/config.json ...
	I0930 21:03:42.860340   73900 start.go:360] acquireMachinesLock for old-k8s-version-621406: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 21:03:44.619810   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:03:47.691872   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:03:53.771838   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:03:56.843848   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:02.923822   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:05.995871   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:12.075814   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:15.147854   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:21.227790   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:24.299842   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:30.379801   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:33.451787   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:39.531808   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:42.603838   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:48.683904   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:51.755939   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:04:57.835834   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:00.907789   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:06.987875   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:10.059892   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:16.139832   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:19.211908   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:25.291812   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:28.363915   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:34.443827   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:37.515928   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:43.595824   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:46.667934   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:52.747851   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:05:55.819883   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:01.899789   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:04.971946   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:11.051812   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:14.123833   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:20.203805   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:23.275875   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:29.355806   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:32.427931   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:38.507837   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:41.579909   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:47.659786   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:50.731827   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:56.811833   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:06:59.883878   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:07:05.963833   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:07:09.035828   73256 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.90:22: connect: no route to host
	I0930 21:07:12.040058   73375 start.go:364] duration metric: took 4m26.951572628s to acquireMachinesLock for "no-preload-997816"
	I0930 21:07:12.040115   73375 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:07:12.040126   73375 fix.go:54] fixHost starting: 
	I0930 21:07:12.040448   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:12.040485   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:12.057054   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37473
	I0930 21:07:12.057624   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:12.058143   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:12.058173   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:12.058523   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:12.058739   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:12.058873   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:12.060479   73375 fix.go:112] recreateIfNeeded on no-preload-997816: state=Stopped err=<nil>
	I0930 21:07:12.060499   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	W0930 21:07:12.060640   73375 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:07:12.062653   73375 out.go:177] * Restarting existing kvm2 VM for "no-preload-997816" ...
	I0930 21:07:12.037683   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:07:12.037732   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:07:12.038031   73256 buildroot.go:166] provisioning hostname "embed-certs-256103"
	I0930 21:07:12.038055   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:07:12.038234   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:07:12.039910   73256 machine.go:96] duration metric: took 4m37.42208497s to provisionDockerMachine
	I0930 21:07:12.039954   73256 fix.go:56] duration metric: took 4m37.444804798s for fixHost
	I0930 21:07:12.039962   73256 start.go:83] releasing machines lock for "embed-certs-256103", held for 4m37.444833727s
	W0930 21:07:12.039989   73256 start.go:714] error starting host: provision: host is not running
	W0930 21:07:12.040104   73256 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0930 21:07:12.040116   73256 start.go:729] Will try again in 5 seconds ...
	I0930 21:07:12.063941   73375 main.go:141] libmachine: (no-preload-997816) Calling .Start
	I0930 21:07:12.064167   73375 main.go:141] libmachine: (no-preload-997816) Ensuring networks are active...
	I0930 21:07:12.065080   73375 main.go:141] libmachine: (no-preload-997816) Ensuring network default is active
	I0930 21:07:12.065489   73375 main.go:141] libmachine: (no-preload-997816) Ensuring network mk-no-preload-997816 is active
	I0930 21:07:12.065993   73375 main.go:141] libmachine: (no-preload-997816) Getting domain xml...
	I0930 21:07:12.066923   73375 main.go:141] libmachine: (no-preload-997816) Creating domain...
	I0930 21:07:13.297091   73375 main.go:141] libmachine: (no-preload-997816) Waiting to get IP...
	I0930 21:07:13.297965   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:13.298386   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:13.298473   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:13.298370   74631 retry.go:31] will retry after 312.032565ms: waiting for machine to come up
	I0930 21:07:13.612088   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:13.612583   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:13.612607   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:13.612519   74631 retry.go:31] will retry after 292.985742ms: waiting for machine to come up
	I0930 21:07:13.907355   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:13.907794   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:13.907817   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:13.907754   74631 retry.go:31] will retry after 451.618632ms: waiting for machine to come up
	I0930 21:07:14.361536   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:14.361990   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:14.362054   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:14.361947   74631 retry.go:31] will retry after 599.246635ms: waiting for machine to come up
	I0930 21:07:14.962861   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:14.963341   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:14.963369   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:14.963294   74631 retry.go:31] will retry after 748.726096ms: waiting for machine to come up
	I0930 21:07:17.040758   73256 start.go:360] acquireMachinesLock for embed-certs-256103: {Name:mk0cb470c08a5d9f26fdc9dfd10f07b5493d04dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 21:07:15.713258   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:15.713576   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:15.713601   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:15.713525   74631 retry.go:31] will retry after 907.199669ms: waiting for machine to come up
	I0930 21:07:16.622784   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:16.623275   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:16.623307   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:16.623211   74631 retry.go:31] will retry after 744.978665ms: waiting for machine to come up
	I0930 21:07:17.369735   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:17.370206   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:17.370231   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:17.370154   74631 retry.go:31] will retry after 1.238609703s: waiting for machine to come up
	I0930 21:07:18.610618   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:18.610967   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:18.610989   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:18.610928   74631 retry.go:31] will retry after 1.354775356s: waiting for machine to come up
	I0930 21:07:19.967473   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:19.967892   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:19.967916   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:19.967851   74631 retry.go:31] will retry after 2.26449082s: waiting for machine to come up
	I0930 21:07:22.234066   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:22.234514   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:22.234536   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:22.234474   74631 retry.go:31] will retry after 2.728158374s: waiting for machine to come up
	I0930 21:07:24.966375   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:24.966759   73375 main.go:141] libmachine: (no-preload-997816) DBG | unable to find current IP address of domain no-preload-997816 in network mk-no-preload-997816
	I0930 21:07:24.966782   73375 main.go:141] libmachine: (no-preload-997816) DBG | I0930 21:07:24.966724   74631 retry.go:31] will retry after 3.119117729s: waiting for machine to come up
	I0930 21:07:29.336238   73707 start.go:364] duration metric: took 3m58.92874513s to acquireMachinesLock for "default-k8s-diff-port-291511"
	I0930 21:07:29.336327   73707 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:07:29.336347   73707 fix.go:54] fixHost starting: 
	I0930 21:07:29.336726   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:29.336779   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:29.354404   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45095
	I0930 21:07:29.354848   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:29.355331   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:07:29.355352   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:29.355882   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:29.356081   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:29.356249   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:07:29.358109   73707 fix.go:112] recreateIfNeeded on default-k8s-diff-port-291511: state=Stopped err=<nil>
	I0930 21:07:29.358155   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	W0930 21:07:29.358336   73707 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:07:29.361072   73707 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-291511" ...
	I0930 21:07:28.087153   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.087604   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has current primary IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.087636   73375 main.go:141] libmachine: (no-preload-997816) Found IP for machine: 192.168.61.93
	I0930 21:07:28.087644   73375 main.go:141] libmachine: (no-preload-997816) Reserving static IP address...
	I0930 21:07:28.088047   73375 main.go:141] libmachine: (no-preload-997816) Reserved static IP address: 192.168.61.93
	I0930 21:07:28.088068   73375 main.go:141] libmachine: (no-preload-997816) Waiting for SSH to be available...
	I0930 21:07:28.088090   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "no-preload-997816", mac: "52:54:00:cb:3d:73", ip: "192.168.61.93"} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.088158   73375 main.go:141] libmachine: (no-preload-997816) DBG | skip adding static IP to network mk-no-preload-997816 - found existing host DHCP lease matching {name: "no-preload-997816", mac: "52:54:00:cb:3d:73", ip: "192.168.61.93"}
	I0930 21:07:28.088181   73375 main.go:141] libmachine: (no-preload-997816) DBG | Getting to WaitForSSH function...
	I0930 21:07:28.090195   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.090522   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.090547   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.090722   73375 main.go:141] libmachine: (no-preload-997816) DBG | Using SSH client type: external
	I0930 21:07:28.090739   73375 main.go:141] libmachine: (no-preload-997816) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa (-rw-------)
	I0930 21:07:28.090767   73375 main.go:141] libmachine: (no-preload-997816) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.93 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:07:28.090787   73375 main.go:141] libmachine: (no-preload-997816) DBG | About to run SSH command:
	I0930 21:07:28.090801   73375 main.go:141] libmachine: (no-preload-997816) DBG | exit 0
	I0930 21:07:28.211669   73375 main.go:141] libmachine: (no-preload-997816) DBG | SSH cmd err, output: <nil>: 
	I0930 21:07:28.212073   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetConfigRaw
	I0930 21:07:28.212714   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetIP
	I0930 21:07:28.215442   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.215934   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.215951   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.216186   73375 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/config.json ...
	I0930 21:07:28.216370   73375 machine.go:93] provisionDockerMachine start ...
	I0930 21:07:28.216386   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:28.216575   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.218963   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.219423   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.219455   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.219604   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.219770   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.219948   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.220057   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.220252   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:28.220441   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:28.220452   73375 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:07:28.315814   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:07:28.315853   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetMachineName
	I0930 21:07:28.316131   73375 buildroot.go:166] provisioning hostname "no-preload-997816"
	I0930 21:07:28.316161   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetMachineName
	I0930 21:07:28.316372   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.319253   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.319506   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.319548   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.319711   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.319903   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.320057   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.320182   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.320383   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:28.320592   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:28.320606   73375 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-997816 && echo "no-preload-997816" | sudo tee /etc/hostname
	I0930 21:07:28.433652   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-997816
	
	I0930 21:07:28.433686   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.436989   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.437350   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.437389   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.437611   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.437784   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.437957   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.438075   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.438267   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:28.438487   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:28.438512   73375 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-997816' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-997816/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-997816' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:07:28.544056   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:07:28.544088   73375 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:07:28.544112   73375 buildroot.go:174] setting up certificates
	I0930 21:07:28.544122   73375 provision.go:84] configureAuth start
	I0930 21:07:28.544135   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetMachineName
	I0930 21:07:28.544418   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetIP
	I0930 21:07:28.546960   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.547363   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.547384   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.547570   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.549918   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.550325   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.550353   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.550535   73375 provision.go:143] copyHostCerts
	I0930 21:07:28.550612   73375 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:07:28.550627   73375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:07:28.550711   73375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:07:28.550804   73375 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:07:28.550812   73375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:07:28.550837   73375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:07:28.550893   73375 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:07:28.550900   73375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:07:28.550920   73375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:07:28.550967   73375 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.no-preload-997816 san=[127.0.0.1 192.168.61.93 localhost minikube no-preload-997816]
	I0930 21:07:28.744306   73375 provision.go:177] copyRemoteCerts
	I0930 21:07:28.744364   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:07:28.744386   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.747024   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.747368   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.747401   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.747615   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.747813   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.747973   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.748133   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:28.825616   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0930 21:07:28.849513   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 21:07:28.872666   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:07:28.895673   73375 provision.go:87] duration metric: took 351.536833ms to configureAuth
	I0930 21:07:28.895708   73375 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:07:28.895896   73375 config.go:182] Loaded profile config "no-preload-997816": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:07:28.895975   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:28.898667   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.899067   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:28.899098   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:28.899324   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:28.899567   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.899703   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:28.899829   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:28.899946   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:28.900120   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:28.900134   73375 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:07:29.113855   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:07:29.113877   73375 machine.go:96] duration metric: took 897.495238ms to provisionDockerMachine
	I0930 21:07:29.113887   73375 start.go:293] postStartSetup for "no-preload-997816" (driver="kvm2")
	I0930 21:07:29.113897   73375 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:07:29.113921   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.114220   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:07:29.114254   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:29.117274   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.117619   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.117663   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.117816   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:29.118010   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.118159   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:29.118289   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:29.197962   73375 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:07:29.202135   73375 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:07:29.202166   73375 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:07:29.202237   73375 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:07:29.202321   73375 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:07:29.202406   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:07:29.211693   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:07:29.234503   73375 start.go:296] duration metric: took 120.601484ms for postStartSetup
	I0930 21:07:29.234582   73375 fix.go:56] duration metric: took 17.194433455s for fixHost
	I0930 21:07:29.234610   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:29.237134   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.237544   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.237574   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.237728   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:29.237912   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.238085   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.238199   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:29.238348   73375 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:29.238506   73375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0930 21:07:29.238515   73375 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:07:29.336092   73375 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730449.310327649
	
	I0930 21:07:29.336114   73375 fix.go:216] guest clock: 1727730449.310327649
	I0930 21:07:29.336123   73375 fix.go:229] Guest: 2024-09-30 21:07:29.310327649 +0000 UTC Remote: 2024-09-30 21:07:29.234588814 +0000 UTC m=+284.288095935 (delta=75.738835ms)
	I0930 21:07:29.336147   73375 fix.go:200] guest clock delta is within tolerance: 75.738835ms
	I0930 21:07:29.336153   73375 start.go:83] releasing machines lock for "no-preload-997816", held for 17.296055752s
	I0930 21:07:29.336194   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.336478   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetIP
	I0930 21:07:29.339488   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.339864   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.339909   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.340070   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.340525   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.340697   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:29.340800   73375 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:07:29.340836   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:29.340930   73375 ssh_runner.go:195] Run: cat /version.json
	I0930 21:07:29.340955   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:29.343579   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.343941   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.343976   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.344010   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.344228   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:29.344405   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.344441   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:29.344471   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:29.344543   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:29.344616   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:29.344689   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:29.344784   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:29.344966   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:29.345105   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:29.420949   73375 ssh_runner.go:195] Run: systemctl --version
	I0930 21:07:29.465854   73375 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:07:29.616360   73375 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:07:29.624522   73375 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:07:29.624604   73375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:07:29.642176   73375 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 21:07:29.642202   73375 start.go:495] detecting cgroup driver to use...
	I0930 21:07:29.642279   73375 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:07:29.657878   73375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:07:29.674555   73375 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:07:29.674614   73375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:07:29.690953   73375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:07:29.705425   73375 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:07:29.814602   73375 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:07:29.957009   73375 docker.go:233] disabling docker service ...
	I0930 21:07:29.957091   73375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:07:29.971419   73375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:07:29.362775   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Start
	I0930 21:07:29.363023   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Ensuring networks are active...
	I0930 21:07:29.364071   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Ensuring network default is active
	I0930 21:07:29.364456   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Ensuring network mk-default-k8s-diff-port-291511 is active
	I0930 21:07:29.364940   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Getting domain xml...
	I0930 21:07:29.365759   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Creating domain...
	I0930 21:07:29.987509   73375 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:07:30.112952   73375 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:07:30.239945   73375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:07:30.253298   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:07:30.271687   73375 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 21:07:30.271768   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.282267   73375 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:07:30.282339   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.292776   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.303893   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.315002   73375 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:07:30.326410   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.336951   73375 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.356016   73375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:30.367847   73375 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:07:30.378650   73375 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:07:30.378703   73375 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:07:30.391768   73375 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 21:07:30.401887   73375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:30.534771   73375 ssh_runner.go:195] Run: sudo systemctl restart crio
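For reference, the CRI-O preparation recorded above (crictl endpoint, pause image, cgroup driver, sysctls, restart) amounts to roughly the following steps on the guest. This is a condensed sketch using the paths and values that appear in the log, not an authoritative minikube procedure:

	# Point crictl at the CRI-O socket.
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# Pin the pause image and select the cgroupfs cgroup manager in the CRI-O drop-in.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# Allow unprivileged low ports inside pods (the log first ensures a default_sysctls block exists).
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	# Load br_netfilter and enable IP forwarding, then apply the new configuration.
	sudo modprobe br_netfilter
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	sudo systemctl daemon-reload && sudo systemctl restart crio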
	I0930 21:07:30.622017   73375 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:07:30.622087   73375 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:07:30.627221   73375 start.go:563] Will wait 60s for crictl version
	I0930 21:07:30.627294   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:30.633071   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:07:30.675743   73375 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 21:07:30.675830   73375 ssh_runner.go:195] Run: crio --version
	I0930 21:07:30.703470   73375 ssh_runner.go:195] Run: crio --version
	I0930 21:07:30.732424   73375 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 21:07:30.733714   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetIP
	I0930 21:07:30.737016   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:30.737380   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:30.737421   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:30.737690   73375 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0930 21:07:30.741714   73375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
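The /etc/hosts one-liner above is an idempotent update: it strips any existing host.minikube.internal entry, appends the current gateway IP, and copies the result back over /etc/hosts. The same pattern written out, with the hostname and IP taken from the log:

	# Rebuild /etc/hosts so it contains exactly one host.minikube.internal entry.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.61.1\thost.minikube.internal\n'; } > /tmp/hosts.new
	sudo cp /tmp/hosts.new /etc/hosts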
	I0930 21:07:30.754767   73375 kubeadm.go:883] updating cluster {Name:no-preload-997816 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-997816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:07:30.754892   73375 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 21:07:30.754941   73375 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:07:30.794489   73375 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 21:07:30.794516   73375 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0930 21:07:30.794605   73375 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:30.794624   73375 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:30.794653   73375 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:30.794694   73375 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:30.794733   73375 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:30.794691   73375 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:30.794822   73375 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:30.794836   73375 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0930 21:07:30.796508   73375 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:30.796521   73375 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:30.796538   73375 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:30.796543   73375 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:30.796610   73375 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:30.796616   73375 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:30.796611   73375 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0930 21:07:30.796665   73375 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.018683   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0930 21:07:31.028097   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.117252   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.131998   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.136871   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.140418   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.170883   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.171059   73375 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0930 21:07:31.171098   73375 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.171142   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.172908   73375 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0930 21:07:31.172951   73375 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.172994   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.242489   73375 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0930 21:07:31.242541   73375 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.242609   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.246685   73375 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0930 21:07:31.246731   73375 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.246758   73375 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0930 21:07:31.246778   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.246794   73375 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.246837   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.270923   73375 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0930 21:07:31.270971   73375 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.271024   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.271030   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:31.271100   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.271109   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.271207   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.271269   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.387993   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.388011   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.388044   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.388091   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.388150   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.388230   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.523098   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0930 21:07:31.523156   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0930 21:07:31.523300   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0930 21:07:31.523344   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.523467   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0930 21:07:31.623696   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0930 21:07:31.623759   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 21:07:31.623778   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0930 21:07:31.623794   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0930 21:07:31.623869   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0930 21:07:31.632927   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0930 21:07:31.633014   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0930 21:07:31.633117   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0930 21:07:31.633206   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0930 21:07:31.633269   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0930 21:07:31.648925   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0930 21:07:31.648945   73375 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0930 21:07:31.648983   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0930 21:07:31.676886   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0930 21:07:31.676925   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0930 21:07:31.709210   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0930 21:07:31.709287   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0930 21:07:31.709331   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0930 21:07:31.709394   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0930 21:07:31.709330   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0930 21:07:32.112418   73375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:33.634620   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.985614953s)
	I0930 21:07:33.634656   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0930 21:07:33.634702   73375 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (1.925342294s)
	I0930 21:07:33.634716   73375 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0930 21:07:33.634731   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0930 21:07:33.634771   73375 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.925359685s)
	I0930 21:07:33.634779   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0930 21:07:33.634782   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0930 21:07:33.634853   73375 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.522405881s)
	I0930 21:07:33.634891   73375 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0930 21:07:33.634913   73375 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:33.634961   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:07:30.643828   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting to get IP...
	I0930 21:07:30.644936   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:30.645382   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:30.645484   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:30.645381   74769 retry.go:31] will retry after 216.832119ms: waiting for machine to come up
	I0930 21:07:30.863953   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:30.864583   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:30.864614   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:30.864518   74769 retry.go:31] will retry after 280.448443ms: waiting for machine to come up
	I0930 21:07:31.147184   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.147792   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.147826   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:31.147728   74769 retry.go:31] will retry after 345.517763ms: waiting for machine to come up
	I0930 21:07:31.495391   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.495819   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.495841   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:31.495786   74769 retry.go:31] will retry after 457.679924ms: waiting for machine to come up
	I0930 21:07:31.955479   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.955943   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:31.955974   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:31.955897   74769 retry.go:31] will retry after 562.95605ms: waiting for machine to come up
	I0930 21:07:32.520890   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:32.521339   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:32.521368   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:32.521285   74769 retry.go:31] will retry after 743.560182ms: waiting for machine to come up
	I0930 21:07:33.266407   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:33.266914   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:33.266941   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:33.266853   74769 retry.go:31] will retry after 947.444427ms: waiting for machine to come up
	I0930 21:07:34.216195   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:34.216705   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:34.216731   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:34.216659   74769 retry.go:31] will retry after 1.186059526s: waiting for machine to come up
	I0930 21:07:35.714633   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.079826486s)
	I0930 21:07:35.714667   73375 ssh_runner.go:235] Completed: which crictl: (2.079690884s)
	I0930 21:07:35.714721   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:35.714670   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0930 21:07:35.714786   73375 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0930 21:07:35.714821   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0930 21:07:35.753242   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:39.088354   73375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.335055656s)
	I0930 21:07:39.088395   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.373547177s)
	I0930 21:07:39.088422   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0930 21:07:39.088458   73375 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0930 21:07:39.088536   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0930 21:07:39.088459   73375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:35.404773   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:35.405334   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:35.405359   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:35.405225   74769 retry.go:31] will retry after 1.575803783s: waiting for machine to come up
	I0930 21:07:36.983196   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:36.983730   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:36.983759   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:36.983677   74769 retry.go:31] will retry after 2.020561586s: waiting for machine to come up
	I0930 21:07:39.006915   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:39.007304   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:39.007334   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:39.007269   74769 retry.go:31] will retry after 2.801421878s: waiting for machine to come up
	I0930 21:07:41.074012   73375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.985398095s)
	I0930 21:07:41.074061   73375 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0930 21:07:41.074154   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.985588774s)
	I0930 21:07:41.074183   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0930 21:07:41.074202   73375 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0930 21:07:41.074244   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0930 21:07:41.074166   73375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0930 21:07:42.972016   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.897745882s)
	I0930 21:07:42.972055   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0930 21:07:42.972083   73375 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.8977868s)
	I0930 21:07:42.972110   73375 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0930 21:07:42.972086   73375 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0930 21:07:42.972155   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0930 21:07:44.835190   73375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.863005436s)
	I0930 21:07:44.835237   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0930 21:07:44.835263   73375 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0930 21:07:44.835334   73375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0930 21:07:41.810719   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:41.811099   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:41.811117   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:41.811050   74769 retry.go:31] will retry after 2.703489988s: waiting for machine to come up
	I0930 21:07:44.515949   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:44.516329   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | unable to find current IP address of domain default-k8s-diff-port-291511 in network mk-default-k8s-diff-port-291511
	I0930 21:07:44.516356   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | I0930 21:07:44.516276   74769 retry.go:31] will retry after 4.001267434s: waiting for machine to come up
	I0930 21:07:49.889033   73900 start.go:364] duration metric: took 4m7.028659379s to acquireMachinesLock for "old-k8s-version-621406"
	I0930 21:07:49.889104   73900 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:07:49.889111   73900 fix.go:54] fixHost starting: 
	I0930 21:07:49.889542   73900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:49.889600   73900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:49.906767   73900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43385
	I0930 21:07:49.907283   73900 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:49.907856   73900 main.go:141] libmachine: Using API Version  1
	I0930 21:07:49.907889   73900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:49.908203   73900 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:49.908397   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:07:49.908542   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetState
	I0930 21:07:49.910270   73900 fix.go:112] recreateIfNeeded on old-k8s-version-621406: state=Stopped err=<nil>
	I0930 21:07:49.910306   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	W0930 21:07:49.910441   73900 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:07:49.912646   73900 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-621406" ...
	I0930 21:07:45.483728   73375 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0930 21:07:45.483778   73375 cache_images.go:123] Successfully loaded all cached images
	I0930 21:07:45.483785   73375 cache_images.go:92] duration metric: took 14.689240439s to LoadCachedImages
	I0930 21:07:45.483799   73375 kubeadm.go:934] updating node { 192.168.61.93 8443 v1.31.1 crio true true} ...
	I0930 21:07:45.483898   73375 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-997816 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.93
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-997816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
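The kubelet flags above end up in a systemd drop-in that minikube copies to the guest a few steps later (see the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf below). A plausible reconstruction of that drop-in, assembled from the unit fragment printed above; this is a hypothetical sketch, not the literal file minikube transfers:

	# Hypothetical reconstruction of the kubelet drop-in; content taken from the log above.
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<-'EOF'
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-997816 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.93

	[Install]
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart kubelet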
	I0930 21:07:45.483977   73375 ssh_runner.go:195] Run: crio config
	I0930 21:07:45.529537   73375 cni.go:84] Creating CNI manager for ""
	I0930 21:07:45.529558   73375 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:07:45.529567   73375 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:07:45.529591   73375 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.93 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-997816 NodeName:no-preload-997816 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.93"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.93 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 21:07:45.529713   73375 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.93
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-997816"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.93
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.93"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 21:07:45.529775   73375 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 21:07:45.540251   73375 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:07:45.540323   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:07:45.549622   73375 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0930 21:07:45.565425   73375 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:07:45.580646   73375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0930 21:07:45.596216   73375 ssh_runner.go:195] Run: grep 192.168.61.93	control-plane.minikube.internal$ /etc/hosts
	I0930 21:07:45.604940   73375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.93	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:07:45.620809   73375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:45.751327   73375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:07:45.768664   73375 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816 for IP: 192.168.61.93
	I0930 21:07:45.768687   73375 certs.go:194] generating shared ca certs ...
	I0930 21:07:45.768702   73375 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:07:45.768896   73375 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:07:45.768953   73375 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:07:45.768967   73375 certs.go:256] generating profile certs ...
	I0930 21:07:45.769081   73375 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/client.key
	I0930 21:07:45.769188   73375 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/apiserver.key.c7192a03
	I0930 21:07:45.769251   73375 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/proxy-client.key
	I0930 21:07:45.769422   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:07:45.769468   73375 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:07:45.769483   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:07:45.769527   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:07:45.769569   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:07:45.769603   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:07:45.769672   73375 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:07:45.770679   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:07:45.809391   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:07:45.837624   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:07:45.878472   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:07:45.909163   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0930 21:07:45.950655   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 21:07:45.974391   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:07:45.997258   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/no-preload-997816/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 21:07:46.019976   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:07:46.042828   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:07:46.066625   73375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:07:46.089639   73375 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:07:46.106202   73375 ssh_runner.go:195] Run: openssl version
	I0930 21:07:46.111810   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:07:46.122379   73375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:07:46.126659   73375 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:07:46.126699   73375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:07:46.132363   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 21:07:46.143074   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:07:46.154060   73375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:07:46.158542   73375 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:07:46.158602   73375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:07:46.164210   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:07:46.175160   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:07:46.186326   73375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:46.190782   73375 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:46.190856   73375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:46.196356   73375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 21:07:46.206957   73375 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:07:46.211650   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:07:46.217398   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:07:46.223566   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:07:46.230204   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:07:46.236404   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:07:46.242282   73375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
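Each of the openssl invocations above uses -checkend 86400, which exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now and exits non-zero otherwise, so the caller can tell from the exit status whether a certificate is about to expire. For example, using one of the paths from the log:

	# Succeeds only if the cert will not expire within the next 24 hours.
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo 'valid for >= 24h' || echo 'expires within 24h'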
	I0930 21:07:46.248591   73375 kubeadm.go:392] StartCluster: {Name:no-preload-997816 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-997816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:07:46.248686   73375 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:07:46.248731   73375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:07:46.292355   73375 cri.go:89] found id: ""
	I0930 21:07:46.292435   73375 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:07:46.303578   73375 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:07:46.303598   73375 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:07:46.303668   73375 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:07:46.314544   73375 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:07:46.315643   73375 kubeconfig.go:125] found "no-preload-997816" server: "https://192.168.61.93:8443"
	I0930 21:07:46.318243   73375 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:07:46.329751   73375 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.93
	I0930 21:07:46.329781   73375 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:07:46.329791   73375 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:07:46.329837   73375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:07:46.364302   73375 cri.go:89] found id: ""
	I0930 21:07:46.364392   73375 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:07:46.384616   73375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:07:46.395855   73375 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:07:46.395875   73375 kubeadm.go:157] found existing configuration files:
	
	I0930 21:07:46.395915   73375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:07:46.405860   73375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:07:46.405918   73375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:07:46.416618   73375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:07:46.426654   73375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:07:46.426712   73375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:07:46.435880   73375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:07:46.446273   73375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:07:46.446346   73375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:07:46.457099   73375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:07:46.467322   73375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:07:46.467386   73375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:07:46.477809   73375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:07:46.489024   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:46.605127   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:47.509287   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:47.708716   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:47.780830   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:47.883843   73375 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:07:47.883940   73375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:48.384688   73375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:48.884008   73375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:48.925804   73375 api_server.go:72] duration metric: took 1.041960261s to wait for apiserver process to appear ...
	I0930 21:07:48.925833   73375 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:07:48.925857   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
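From here the output interleaves three parallel minikube start runs; the numeric ID after each timestamp ties a line to its profile (73375 = no-preload-997816, 73707 = default-k8s-diff-port-291511, 73900 = old-k8s-version-621406). To follow a single profile, the combined log can simply be filtered on that column (the file name below is illustrative, not part of the report):

    grep ' 73707 ' minikube_start.log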
	I0930 21:07:48.521282   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.521838   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Found IP for machine: 192.168.50.2
	I0930 21:07:48.521864   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Reserving static IP address...
	I0930 21:07:48.521876   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has current primary IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.522306   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Reserved static IP address: 192.168.50.2
	I0930 21:07:48.522349   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-291511", mac: "52:54:00:27:46:45", ip: "192.168.50.2"} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.522361   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Waiting for SSH to be available...
	I0930 21:07:48.522401   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | skip adding static IP to network mk-default-k8s-diff-port-291511 - found existing host DHCP lease matching {name: "default-k8s-diff-port-291511", mac: "52:54:00:27:46:45", ip: "192.168.50.2"}
	I0930 21:07:48.522427   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Getting to WaitForSSH function...
	I0930 21:07:48.525211   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.525641   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.525667   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.525827   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Using SSH client type: external
	I0930 21:07:48.525854   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa (-rw-------)
	I0930 21:07:48.525883   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:07:48.525900   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | About to run SSH command:
	I0930 21:07:48.525913   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | exit 0
	I0930 21:07:48.655656   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | SSH cmd err, output: <nil>: 
	I0930 21:07:48.656045   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetConfigRaw
	I0930 21:07:48.656789   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetIP
	I0930 21:07:48.659902   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.660358   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.660395   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.660586   73707 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/config.json ...
	I0930 21:07:48.660842   73707 machine.go:93] provisionDockerMachine start ...
	I0930 21:07:48.660866   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:48.661063   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:48.663782   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.664138   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.664165   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.664318   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:48.664567   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.664733   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.664868   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:48.665036   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:48.665283   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:48.665315   73707 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:07:48.776382   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:07:48.776414   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetMachineName
	I0930 21:07:48.776676   73707 buildroot.go:166] provisioning hostname "default-k8s-diff-port-291511"
	I0930 21:07:48.776711   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetMachineName
	I0930 21:07:48.776913   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:48.779952   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.780470   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.780516   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.780594   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:48.780773   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.780925   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.781080   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:48.781253   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:48.781457   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:48.781473   73707 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-291511 && echo "default-k8s-diff-port-291511" | sudo tee /etc/hostname
	I0930 21:07:48.913633   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-291511
	
	I0930 21:07:48.913724   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:48.916869   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.917280   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:48.917319   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:48.917501   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:48.917715   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.917882   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:48.918117   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:48.918296   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:48.918533   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:48.918562   73707 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-291511' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-291511/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-291511' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:07:49.048106   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:07:49.048141   73707 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:07:49.048182   73707 buildroot.go:174] setting up certificates
	I0930 21:07:49.048198   73707 provision.go:84] configureAuth start
	I0930 21:07:49.048212   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetMachineName
	I0930 21:07:49.048498   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetIP
	I0930 21:07:49.051299   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.051665   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.051702   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.051837   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.054211   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.054512   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.054540   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.054691   73707 provision.go:143] copyHostCerts
	I0930 21:07:49.054774   73707 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:07:49.054789   73707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:07:49.054866   73707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:07:49.054982   73707 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:07:49.054994   73707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:07:49.055021   73707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:07:49.055097   73707 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:07:49.055106   73707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:07:49.055130   73707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:07:49.055189   73707 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-291511 san=[127.0.0.1 192.168.50.2 default-k8s-diff-port-291511 localhost minikube]
	I0930 21:07:49.239713   73707 provision.go:177] copyRemoteCerts
	I0930 21:07:49.239771   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:07:49.239796   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.242146   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.242468   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.242500   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.242663   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.242834   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.242982   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.243200   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:07:49.329405   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:07:49.358036   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0930 21:07:49.385742   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 21:07:49.409436   73707 provision.go:87] duration metric: took 361.22398ms to configureAuth
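The configureAuth step above regenerates the machine server certificate with the SANs listed at provision.go:117 and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A quick way to confirm what landed there (a sketch to run inside the VM, not something the test itself executes):

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'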
	I0930 21:07:49.409493   73707 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:07:49.409696   73707 config.go:182] Loaded profile config "default-k8s-diff-port-291511": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:07:49.409798   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.412572   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.412935   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.412975   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.413266   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.413476   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.413680   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.413821   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.414009   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:49.414199   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:49.414223   73707 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:07:49.635490   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:07:49.635553   73707 machine.go:96] duration metric: took 974.696002ms to provisionDockerMachine
	I0930 21:07:49.635567   73707 start.go:293] postStartSetup for "default-k8s-diff-port-291511" (driver="kvm2")
	I0930 21:07:49.635580   73707 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:07:49.635603   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.635954   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:07:49.635989   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.638867   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.639304   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.639340   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.639413   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.639631   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.639837   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.639995   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:07:49.728224   73707 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:07:49.732558   73707 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:07:49.732590   73707 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:07:49.732679   73707 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:07:49.732769   73707 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:07:49.732869   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:07:49.742783   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:07:49.766585   73707 start.go:296] duration metric: took 131.002562ms for postStartSetup
	I0930 21:07:49.766629   73707 fix.go:56] duration metric: took 20.430290493s for fixHost
	I0930 21:07:49.766652   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.769724   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.770143   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.770172   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.770461   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.770708   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.770872   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.771099   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.771240   73707 main.go:141] libmachine: Using SSH client type: native
	I0930 21:07:49.771616   73707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I0930 21:07:49.771636   73707 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:07:49.888863   73707 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730469.865719956
	
	I0930 21:07:49.888889   73707 fix.go:216] guest clock: 1727730469.865719956
	I0930 21:07:49.888900   73707 fix.go:229] Guest: 2024-09-30 21:07:49.865719956 +0000 UTC Remote: 2024-09-30 21:07:49.76663417 +0000 UTC m=+259.507652750 (delta=99.085786ms)
	I0930 21:07:49.888943   73707 fix.go:200] guest clock delta is within tolerance: 99.085786ms
	I0930 21:07:49.888950   73707 start.go:83] releasing machines lock for "default-k8s-diff-port-291511", held for 20.552679126s
	I0930 21:07:49.888982   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.889242   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetIP
	I0930 21:07:49.892424   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.892817   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.892854   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.893030   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.893601   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.893780   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:07:49.893852   73707 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:07:49.893932   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.893934   73707 ssh_runner.go:195] Run: cat /version.json
	I0930 21:07:49.893985   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:07:49.896733   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.896843   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.897130   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.897179   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.897216   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:49.897233   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:49.897471   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.897478   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:07:49.897679   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.897686   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:07:49.897825   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.897834   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:07:49.897954   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:07:49.898097   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:07:50.022951   73707 ssh_runner.go:195] Run: systemctl --version
	I0930 21:07:50.029177   73707 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:07:50.186430   73707 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:07:50.193205   73707 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:07:50.193277   73707 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:07:50.211330   73707 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 21:07:50.211365   73707 start.go:495] detecting cgroup driver to use...
	I0930 21:07:50.211430   73707 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:07:50.227255   73707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:07:50.241404   73707 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:07:50.241468   73707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:07:50.257879   73707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:07:50.274595   73707 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:07:50.394354   73707 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:07:50.567503   73707 docker.go:233] disabling docker service ...
	I0930 21:07:50.567582   73707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:07:50.584390   73707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:07:50.600920   73707 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:07:50.742682   73707 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:07:50.882835   73707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:07:50.898340   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:07:50.919395   73707 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 21:07:50.919464   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.930773   73707 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:07:50.930846   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.941870   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.952633   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.964281   73707 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:07:50.977410   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:50.988423   73707 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:51.016091   73707 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:07:51.027473   73707 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:07:51.037470   73707 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:07:51.037537   73707 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:07:51.056841   73707 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 21:07:51.068163   73707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:51.205357   73707 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 21:07:51.305327   73707 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:07:51.305410   73707 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:07:51.311384   73707 start.go:563] Will wait 60s for crictl version
	I0930 21:07:51.311448   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:07:51.315965   73707 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:07:51.369329   73707 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 21:07:51.369417   73707 ssh_runner.go:195] Run: crio --version
	I0930 21:07:51.399897   73707 ssh_runner.go:195] Run: crio --version
	I0930 21:07:51.431075   73707 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
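The runtime preparation above points crictl at the CRI-O socket via /etc/crictl.yaml, patches /etc/crio/crio.conf.d/02-crio.conf (pause image registry.k8s.io/pause:3.10, cgroupfs cgroup manager, conmon_cgroup, net.ipv4.ip_unprivileged_port_start=0) and restarts crio. The same sanity check the test performs can be repeated by hand on the guest (a sketch; expected values taken from the log above):

    sudo cat /etc/crictl.yaml     # runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo crictl version           # RuntimeName: cri-o, RuntimeVersion: 1.29.1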
	I0930 21:07:49.914747   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .Start
	I0930 21:07:49.914948   73900 main.go:141] libmachine: (old-k8s-version-621406) Ensuring networks are active...
	I0930 21:07:49.915796   73900 main.go:141] libmachine: (old-k8s-version-621406) Ensuring network default is active
	I0930 21:07:49.916225   73900 main.go:141] libmachine: (old-k8s-version-621406) Ensuring network mk-old-k8s-version-621406 is active
	I0930 21:07:49.916890   73900 main.go:141] libmachine: (old-k8s-version-621406) Getting domain xml...
	I0930 21:07:49.917688   73900 main.go:141] libmachine: (old-k8s-version-621406) Creating domain...
	I0930 21:07:51.277867   73900 main.go:141] libmachine: (old-k8s-version-621406) Waiting to get IP...
	I0930 21:07:51.279001   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:51.279451   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:51.279552   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:51.279437   74917 retry.go:31] will retry after 307.582619ms: waiting for machine to come up
	I0930 21:07:51.589030   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:51.589414   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:51.589445   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:51.589368   74917 retry.go:31] will retry after 370.683214ms: waiting for machine to come up
	I0930 21:07:51.961914   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:51.962474   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:51.962511   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:51.962415   74917 retry.go:31] will retry after 428.703419ms: waiting for machine to come up
	I0930 21:07:52.393154   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:52.393682   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:52.393750   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:52.393673   74917 retry.go:31] will retry after 514.254023ms: waiting for machine to come up
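The 73900 lines above restart the old-k8s-version-621406 libvirt domain and poll its DHCP lease (matched by MAC 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406) with increasing back-off until an address is assigned. The lease table the retry loop is waiting on can be inspected directly on the host (a manual check, not part of the test run):

    sudo virsh net-dhcp-leases mk-old-k8s-version-621406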
	I0930 21:07:52.334804   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:07:52.334846   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:07:52.334863   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:52.377601   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:07:52.377632   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:07:52.426784   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:52.473771   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:07:52.473811   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:07:52.926391   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:52.945122   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:07:52.945154   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:07:53.426295   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:53.434429   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:07:53.434464   73375 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:07:53.926642   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:07:53.931501   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 200:
	ok
	I0930 21:07:53.940069   73375 api_server.go:141] control plane version: v1.31.1
	I0930 21:07:53.940104   73375 api_server.go:131] duration metric: took 5.014262318s to wait for apiserver health ...
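The healthz progression above is the expected shape for a control-plane restart: the first probes come back 403 for the anonymous user, /healthz then returns 500 while individual poststarthooks (crd-informer-synced, rbac/bootstrap-roles, bootstrap-controller, ...) are still reported as failed, and the wait completes once the endpoint returns 200. An equivalent manual probe against this profile's endpoint (IP and port taken from the log; -k skips certificate verification):

    curl -k 'https://192.168.61.93:8443/healthz?verbose'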
	I0930 21:07:53.940115   73375 cni.go:84] Creating CNI manager for ""
	I0930 21:07:53.940123   73375 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:07:53.941879   73375 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 21:07:53.943335   73375 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:07:53.959585   73375 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 21:07:53.996310   73375 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:07:54.010070   73375 system_pods.go:59] 8 kube-system pods found
	I0930 21:07:54.010129   73375 system_pods.go:61] "coredns-7c65d6cfc9-jg8ph" [46ba2867-485a-4b67-af4b-4de2c607d172] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:07:54.010142   73375 system_pods.go:61] "etcd-no-preload-997816" [1def50bb-1f1b-4d25-b797-38d5b782a674] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0930 21:07:54.010157   73375 system_pods.go:61] "kube-apiserver-no-preload-997816" [67313588-adcb-4d3f-ba8a-4e7a1ea5127b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0930 21:07:54.010174   73375 system_pods.go:61] "kube-controller-manager-no-preload-997816" [b471888b-d4e6-4768-a246-f234ffcbf1c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0930 21:07:54.010186   73375 system_pods.go:61] "kube-proxy-klcv8" [133bcd7f-667d-4969-b063-d33e2c8eed0f] Running
	I0930 21:07:54.010200   73375 system_pods.go:61] "kube-scheduler-no-preload-997816" [130a7a05-0889-4562-afc6-bee3ba4970a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0930 21:07:54.010212   73375 system_pods.go:61] "metrics-server-6867b74b74-c2wpn" [2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:07:54.010223   73375 system_pods.go:61] "storage-provisioner" [01617edf-b831-48d3-9002-279b64f6389c] Running
	I0930 21:07:54.010232   73375 system_pods.go:74] duration metric: took 13.897885ms to wait for pod list to return data ...
	I0930 21:07:54.010244   73375 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:07:54.019651   73375 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:07:54.019683   73375 node_conditions.go:123] node cpu capacity is 2
	I0930 21:07:54.019697   73375 node_conditions.go:105] duration metric: took 9.446744ms to run NodePressure ...
	I0930 21:07:54.019719   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:54.314348   73375 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0930 21:07:54.319583   73375 kubeadm.go:739] kubelet initialised
	I0930 21:07:54.319613   73375 kubeadm.go:740] duration metric: took 5.232567ms waiting for restarted kubelet to initialise ...
	I0930 21:07:54.319625   73375 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:07:54.326866   73375 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.333592   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.333628   73375 pod_ready.go:82] duration metric: took 6.72431ms for pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.333640   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.333651   73375 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.340155   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "etcd-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.340194   73375 pod_ready.go:82] duration metric: took 6.533127ms for pod "etcd-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.340208   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "etcd-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.340216   73375 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.346494   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "kube-apiserver-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.346530   73375 pod_ready.go:82] duration metric: took 6.304143ms for pod "kube-apiserver-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.346542   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "kube-apiserver-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.346551   73375 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.403699   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.403731   73375 pod_ready.go:82] duration metric: took 57.168471ms for pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.403743   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.403752   73375 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-klcv8" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:54.800372   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "kube-proxy-klcv8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.800410   73375 pod_ready.go:82] duration metric: took 396.646883ms for pod "kube-proxy-klcv8" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:54.800423   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "kube-proxy-klcv8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:54.800432   73375 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-997816" in "kube-system" namespace to be "Ready" ...
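The pod_ready.go entries above repeat one pattern per control-plane pod: look up the pod, inspect its Ready condition, and skip the wait with a warning while the hosting node still reports Ready=False. A rough client-go sketch of that per-pod check follows; the helper name, kubeconfig path, and the specific pod looked up are illustrative assumptions, not minikube's implementation.

// podready: sketch of checking a pod's Ready condition, mirroring the pod_ready.go waits above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig path as written to the log; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19736-7672/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-997816", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s ready: %v\n", pod.Name, isPodReady(pod))
}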
	I0930 21:07:51.432761   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetIP
	I0930 21:07:51.436278   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:51.436659   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:07:51.436700   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:07:51.436931   73707 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0930 21:07:51.441356   73707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:07:51.454358   73707 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-291511 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-291511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:07:51.454484   73707 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 21:07:51.454547   73707 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:07:51.502072   73707 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 21:07:51.502143   73707 ssh_runner.go:195] Run: which lz4
	I0930 21:07:51.506458   73707 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 21:07:51.510723   73707 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 21:07:51.510756   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 21:07:52.792488   73707 crio.go:462] duration metric: took 1.286075452s to copy over tarball
	I0930 21:07:52.792580   73707 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 21:07:55.207282   73707 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.414661305s)
	I0930 21:07:55.207314   73707 crio.go:469] duration metric: took 2.414793514s to extract the tarball
	I0930 21:07:55.207321   73707 ssh_runner.go:146] rm: /preloaded.tar.lz4
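For context on the preload step above: a roughly 388 MB lz4 tarball of cached container images is scp'd onto the VM and unpacked into /var before crictl is queried again. The sketch below simply re-runs the exact tar invocation from the log via os/exec; it assumes lz4 is installed on the guest and is only an illustration, not minikube's ssh_runner code.

// preloadextract: sketch that runs the logged tar extraction command.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the logged command; preserves extended attributes and capabilities.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
		return
	}
	fmt.Println("preloaded images extracted into /var")
}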
	I0930 21:07:55.244001   73707 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:07:55.287097   73707 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 21:07:55.287124   73707 cache_images.go:84] Images are preloaded, skipping loading
	I0930 21:07:55.287133   73707 kubeadm.go:934] updating node { 192.168.50.2 8444 v1.31.1 crio true true} ...
	I0930 21:07:55.287277   73707 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-291511 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-291511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 21:07:55.287384   73707 ssh_runner.go:195] Run: crio config
	I0930 21:07:55.200512   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "kube-scheduler-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:55.200559   73375 pod_ready.go:82] duration metric: took 400.11341ms for pod "kube-scheduler-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:55.200569   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "kube-scheduler-no-preload-997816" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:55.200577   73375 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace to be "Ready" ...
	I0930 21:07:55.601008   73375 pod_ready.go:98] node "no-preload-997816" hosting pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:55.601042   73375 pod_ready.go:82] duration metric: took 400.453601ms for pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace to be "Ready" ...
	E0930 21:07:55.601055   73375 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-997816" hosting pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:55.601065   73375 pod_ready.go:39] duration metric: took 1.281429189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:07:55.601086   73375 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 21:07:55.617767   73375 ops.go:34] apiserver oom_adj: -16
	I0930 21:07:55.617791   73375 kubeadm.go:597] duration metric: took 9.314187459s to restartPrimaryControlPlane
	I0930 21:07:55.617803   73375 kubeadm.go:394] duration metric: took 9.369220314s to StartCluster
	I0930 21:07:55.617824   73375 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:07:55.617913   73375 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:07:55.619455   73375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:07:55.619760   73375 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 21:07:55.619842   73375 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 21:07:55.619959   73375 addons.go:69] Setting storage-provisioner=true in profile "no-preload-997816"
	I0930 21:07:55.619984   73375 addons.go:234] Setting addon storage-provisioner=true in "no-preload-997816"
	I0930 21:07:55.619974   73375 addons.go:69] Setting default-storageclass=true in profile "no-preload-997816"
	I0930 21:07:55.620003   73375 addons.go:69] Setting metrics-server=true in profile "no-preload-997816"
	I0930 21:07:55.620009   73375 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-997816"
	I0930 21:07:55.620020   73375 addons.go:234] Setting addon metrics-server=true in "no-preload-997816"
	W0930 21:07:55.620031   73375 addons.go:243] addon metrics-server should already be in state true
	I0930 21:07:55.620050   73375 config.go:182] Loaded profile config "no-preload-997816": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:07:55.620061   73375 host.go:66] Checking if "no-preload-997816" exists ...
	W0930 21:07:55.619994   73375 addons.go:243] addon storage-provisioner should already be in state true
	I0930 21:07:55.620124   73375 host.go:66] Checking if "no-preload-997816" exists ...
	I0930 21:07:55.620420   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.620459   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.620494   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.620535   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.620593   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.620634   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.621682   73375 out.go:177] * Verifying Kubernetes components...
	I0930 21:07:55.623102   73375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:55.643690   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35581
	I0930 21:07:55.643895   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35545
	I0930 21:07:55.644411   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.644553   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.644968   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.644981   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.645072   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.645078   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.645314   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.645502   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.645732   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:55.645777   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.645812   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.649244   73375 addons.go:234] Setting addon default-storageclass=true in "no-preload-997816"
	W0930 21:07:55.649262   73375 addons.go:243] addon default-storageclass should already be in state true
	I0930 21:07:55.649283   73375 host.go:66] Checking if "no-preload-997816" exists ...
	I0930 21:07:55.649524   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.649548   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.671077   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42635
	I0930 21:07:55.671558   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.672193   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.672212   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.672505   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45163
	I0930 21:07:55.672736   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.672808   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44481
	I0930 21:07:55.673354   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.673396   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.673920   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.673926   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.674528   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.674545   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.674974   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.675624   73375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:07:55.675658   73375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:07:55.676078   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.676095   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.676547   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.676724   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:55.679115   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:55.681410   73375 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:07:55.688953   73375 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:07:55.688981   73375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 21:07:55.689015   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:55.693338   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.693996   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:55.694023   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.694212   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:55.694344   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:55.694444   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:55.694545   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:55.696037   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46075
	I0930 21:07:55.696535   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.697185   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.697207   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.697567   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.697772   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:55.699797   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:55.700998   73375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I0930 21:07:55.701429   73375 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:07:55.702094   73375 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0930 21:07:52.909622   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:52.910169   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:52.910202   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:52.910132   74917 retry.go:31] will retry after 605.019848ms: waiting for machine to come up
	I0930 21:07:53.517276   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:53.517911   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:53.517943   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:53.517858   74917 retry.go:31] will retry after 856.018614ms: waiting for machine to come up
	I0930 21:07:54.376343   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:54.376838   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:54.376862   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:54.376794   74917 retry.go:31] will retry after 740.749778ms: waiting for machine to come up
	I0930 21:07:55.119090   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:55.119631   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:55.119660   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:55.119583   74917 retry.go:31] will retry after 1.444139076s: waiting for machine to come up
	I0930 21:07:56.566261   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:56.566744   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:56.566771   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:56.566695   74917 retry.go:31] will retry after 1.681362023s: waiting for machine to come up
	I0930 21:07:55.703687   73375 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 21:07:55.703709   73375 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 21:07:55.703736   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:55.703788   73375 main.go:141] libmachine: Using API Version  1
	I0930 21:07:55.703816   73375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:07:55.704295   73375 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:07:55.704553   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetState
	I0930 21:07:55.707029   73375 main.go:141] libmachine: (no-preload-997816) Calling .DriverName
	I0930 21:07:55.707365   73375 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 21:07:55.707385   73375 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 21:07:55.707408   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHHostname
	I0930 21:07:55.708091   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.708606   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:55.708629   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.709024   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:55.709237   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:55.709388   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:55.709573   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:55.711123   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.711607   73375 main.go:141] libmachine: (no-preload-997816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3d:73", ip: ""} in network mk-no-preload-997816: {Iface:virbr2 ExpiryTime:2024-09-30 22:07:22 +0000 UTC Type:0 Mac:52:54:00:cb:3d:73 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-997816 Clientid:01:52:54:00:cb:3d:73}
	I0930 21:07:55.711631   73375 main.go:141] libmachine: (no-preload-997816) DBG | domain no-preload-997816 has defined IP address 192.168.61.93 and MAC address 52:54:00:cb:3d:73 in network mk-no-preload-997816
	I0930 21:07:55.711987   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHPort
	I0930 21:07:55.712178   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHKeyPath
	I0930 21:07:55.712318   73375 main.go:141] libmachine: (no-preload-997816) Calling .GetSSHUsername
	I0930 21:07:55.712469   73375 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/no-preload-997816/id_rsa Username:docker}
	I0930 21:07:55.888447   73375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:07:55.912060   73375 node_ready.go:35] waiting up to 6m0s for node "no-preload-997816" to be "Ready" ...
	I0930 21:07:56.010903   73375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 21:07:56.012576   73375 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 21:07:56.012601   73375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0930 21:07:56.038592   73375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:07:56.055481   73375 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 21:07:56.055513   73375 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 21:07:56.131820   73375 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:07:56.131844   73375 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 21:07:56.213605   73375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:07:57.078385   73375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.067447636s)
	I0930 21:07:57.078439   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:57.078451   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:57.078770   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:57.078823   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:57.078836   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:57.078845   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:57.078793   73375 main.go:141] libmachine: (no-preload-997816) DBG | Closing plugin on server side
	I0930 21:07:57.079118   73375 main.go:141] libmachine: (no-preload-997816) DBG | Closing plugin on server side
	I0930 21:07:57.079149   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:57.079157   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:57.672706   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:57.672737   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:57.673053   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:57.673072   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:58.301165   73375 node_ready.go:53] node "no-preload-997816" has status "Ready":"False"
	I0930 21:07:59.072488   73375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.858837368s)
	I0930 21:07:59.072565   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:59.072582   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:59.072921   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:59.072986   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:59.073029   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:59.073038   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:59.073221   73375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.034599023s)
	I0930 21:07:59.073271   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:59.073344   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:59.073383   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:59.073397   73375 addons.go:475] Verifying addon metrics-server=true in "no-preload-997816"
	I0930 21:07:59.073347   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:59.073754   73375 main.go:141] libmachine: (no-preload-997816) DBG | Closing plugin on server side
	I0930 21:07:59.073804   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:59.073819   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:59.073834   73375 main.go:141] libmachine: Making call to close driver server
	I0930 21:07:59.073846   73375 main.go:141] libmachine: (no-preload-997816) Calling .Close
	I0930 21:07:59.075323   73375 main.go:141] libmachine: (no-preload-997816) DBG | Closing plugin on server side
	I0930 21:07:59.075329   73375 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:07:59.075353   73375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:07:59.077687   73375 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0930 21:07:59.079278   73375 addons.go:510] duration metric: took 3.459453938s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0930 21:07:55.346656   73707 cni.go:84] Creating CNI manager for ""
	I0930 21:07:55.346679   73707 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:07:55.346688   73707 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:07:55.346718   73707 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.2 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-291511 NodeName:default-k8s-diff-port-291511 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 21:07:55.346847   73707 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-291511"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 21:07:55.346903   73707 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 21:07:55.356645   73707 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:07:55.356708   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:07:55.366457   73707 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0930 21:07:55.384639   73707 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:07:55.403208   73707 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0930 21:07:55.421878   73707 ssh_runner.go:195] Run: grep 192.168.50.2	control-plane.minikube.internal$ /etc/hosts
	I0930 21:07:55.425803   73707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
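The one-liner above makes the control-plane.minikube.internal mapping idempotent: filter out any existing entry for that hostname, append the current IP, and copy the result back over /etc/hosts. A small Go equivalent is sketched below; it is illustrative only and assumes it runs as root on the guest, so it writes /etc/hosts directly instead of going through a temp file and sudo cp.

// hostsentry: sketch of idempotently pinning control-plane.minikube.internal in /etc/hosts.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.50.2\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any stale mapping for the same hostname, mirroring the grep -v in the logged command.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}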
	I0930 21:07:55.439370   73707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:07:55.553575   73707 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:07:55.570754   73707 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511 for IP: 192.168.50.2
	I0930 21:07:55.570787   73707 certs.go:194] generating shared ca certs ...
	I0930 21:07:55.570808   73707 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:07:55.571011   73707 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:07:55.571067   73707 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:07:55.571083   73707 certs.go:256] generating profile certs ...
	I0930 21:07:55.571178   73707 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/client.key
	I0930 21:07:55.571270   73707 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/apiserver.key.2e3224d9
	I0930 21:07:55.571326   73707 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/proxy-client.key
	I0930 21:07:55.571464   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:07:55.571510   73707 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:07:55.571522   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:07:55.571587   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:07:55.571627   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:07:55.571655   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:07:55.571719   73707 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:07:55.572367   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:07:55.606278   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:07:55.645629   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:07:55.690514   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:07:55.737445   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0930 21:07:55.773656   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 21:07:55.804015   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:07:55.830210   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/default-k8s-diff-port-291511/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 21:07:55.857601   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:07:55.887765   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:07:55.922053   73707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:07:55.951040   73707 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:07:55.969579   73707 ssh_runner.go:195] Run: openssl version
	I0930 21:07:55.975576   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:07:55.987255   73707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:07:55.993657   73707 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:07:55.993723   73707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:07:56.001878   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:07:56.017528   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:07:56.030398   73707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:56.035552   73707 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:56.035625   73707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:07:56.043878   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 21:07:56.055384   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:07:56.066808   73707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:07:56.073099   73707 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:07:56.073164   73707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:07:56.081343   73707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
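The openssl/ln pairs above place each CA file under /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject hash (for example 51391683.0), which is how OpenSSL-based clients locate trusted CAs. Below is a hedged Go sketch of those two steps, shelling out to openssl for the hash just as the logged commands do; the function name is an assumption, the paths are taken from the log, and it presumes openssl is on PATH and the process may write to /etc/ssl/certs.

// cahash: sketch of the "openssl x509 -hash" + "ln -fs" pattern seen in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates /etc/ssl/certs/<subject-hash>.0 pointing at certPath.
func linkBySubjectHash(certPath string) error {
	// openssl prints the subject-name hash used for /etc/ssl/certs lookups.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // replicate ln -f: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}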
	I0930 21:07:56.096669   73707 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:07:56.102635   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:07:56.110805   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:07:56.118533   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:07:56.125800   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:07:56.133985   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:07:56.142109   73707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
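Each "openssl x509 -checkend 86400" call above only asks whether the certificate will still be valid 24 hours from now; a non-zero exit triggers regeneration. The same check can be done in pure Go with crypto/x509, as sketched below (the helper name is an assumption and the cert path is one of those listed in the log).

// checkend: Go equivalent of "openssl x509 -noout -in <cert> -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate's NotAfter falls inside the next d.
func expiresWithin(certPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("expires within 24h: %v\n", soon)
}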
	I0930 21:07:56.150433   73707 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-291511 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-291511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:07:56.150538   73707 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:07:56.150608   73707 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:07:56.197936   73707 cri.go:89] found id: ""
	I0930 21:07:56.198016   73707 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:07:56.208133   73707 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:07:56.208155   73707 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:07:56.208204   73707 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:07:56.218880   73707 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:07:56.220322   73707 kubeconfig.go:125] found "default-k8s-diff-port-291511" server: "https://192.168.50.2:8444"
	I0930 21:07:56.223557   73707 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:07:56.233844   73707 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.2
	I0930 21:07:56.233876   73707 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:07:56.233889   73707 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:07:56.233970   73707 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:07:56.280042   73707 cri.go:89] found id: ""
	I0930 21:07:56.280129   73707 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:07:56.304291   73707 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:07:56.317987   73707 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:07:56.318012   73707 kubeadm.go:157] found existing configuration files:
	
	I0930 21:07:56.318076   73707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0930 21:07:56.331377   73707 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:07:56.331448   73707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:07:56.342380   73707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0930 21:07:56.354949   73707 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:07:56.355030   73707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:07:56.368385   73707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0930 21:07:56.378798   73707 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:07:56.378883   73707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:07:56.390167   73707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0930 21:07:56.400338   73707 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:07:56.400413   73707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:07:56.410735   73707 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:07:56.426910   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:56.557126   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:57.682738   73707 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.125574645s)
	I0930 21:07:57.682777   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:57.908684   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:57.983925   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:07:58.088822   73707 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:07:58.088930   73707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:58.589565   73707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:59.089483   73707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:07:59.110240   73707 api_server.go:72] duration metric: took 1.021416929s to wait for apiserver process to appear ...
	I0930 21:07:59.110279   73707 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:07:59.110328   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:07:59.110843   73707 api_server.go:269] stopped: https://192.168.50.2:8444/healthz: Get "https://192.168.50.2:8444/healthz": dial tcp 192.168.50.2:8444: connect: connection refused
	I0930 21:07:59.611045   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:07:58.250468   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:07:58.251041   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:07:58.251062   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:07:58.250979   74917 retry.go:31] will retry after 2.260492343s: waiting for machine to come up
	I0930 21:08:00.513613   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:00.514129   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:08:00.514194   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:08:00.514117   74917 retry.go:31] will retry after 2.449694064s: waiting for machine to come up
	I0930 21:08:02.200888   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:08:02.200918   73707 api_server.go:103] status: https://192.168.50.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:08:02.200930   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:08:02.240477   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:08:02.240513   73707 api_server.go:103] status: https://192.168.50.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:08:02.611111   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:08:02.615548   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:02.615578   73707 api_server.go:103] status: https://192.168.50.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:03.111216   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:08:03.118078   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:03.118102   73707 api_server.go:103] status: https://192.168.50.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:03.610614   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:08:03.615203   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 200:
	ok
	I0930 21:08:03.621652   73707 api_server.go:141] control plane version: v1.31.1
	I0930 21:08:03.621680   73707 api_server.go:131] duration metric: took 4.511393989s to wait for apiserver health ...
	I0930 21:08:03.621689   73707 cni.go:84] Creating CNI manager for ""
	I0930 21:08:03.621694   73707 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:03.624026   73707 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 21:08:00.416356   73375 node_ready.go:53] node "no-preload-997816" has status "Ready":"False"
	I0930 21:08:02.416469   73375 node_ready.go:53] node "no-preload-997816" has status "Ready":"False"
	I0930 21:08:02.916643   73375 node_ready.go:49] node "no-preload-997816" has status "Ready":"True"
	I0930 21:08:02.916668   73375 node_ready.go:38] duration metric: took 7.004576501s for node "no-preload-997816" to be "Ready" ...
	I0930 21:08:02.916679   73375 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:02.922833   73375 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:02.928873   73375 pod_ready.go:93] pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:02.928895   73375 pod_ready.go:82] duration metric: took 6.034388ms for pod "coredns-7c65d6cfc9-jg8ph" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:02.928904   73375 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.934668   73375 pod_ready.go:103] pod "etcd-no-preload-997816" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:03.625416   73707 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:08:03.640241   73707 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 21:08:03.664231   73707 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:08:03.679372   73707 system_pods.go:59] 8 kube-system pods found
	I0930 21:08:03.679409   73707 system_pods.go:61] "coredns-7c65d6cfc9-hdjjq" [5672cd58-4d3f-409e-b279-f4027fe09aea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:08:03.679425   73707 system_pods.go:61] "etcd-default-k8s-diff-port-291511" [228b61a2-a110-4029-96e5-950e44f5290f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0930 21:08:03.679435   73707 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-291511" [a6991ee1-6c61-49b5-adb5-fb6175386bfe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0930 21:08:03.679447   73707 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-291511" [4ba3f2a2-ac38-4483-bbd0-f21d934d97d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0930 21:08:03.679456   73707 system_pods.go:61] "kube-proxy-kwp22" [87e5295f-3aaa-4222-a61a-942354f79f9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0930 21:08:03.679466   73707 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-291511" [b03fc09c-ddee-4593-9be5-8117892932f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0930 21:08:03.679472   73707 system_pods.go:61] "metrics-server-6867b74b74-txb2j" [6f0ec8d2-5528-4f70-807c-42cbabae23bb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:08:03.679482   73707 system_pods.go:61] "storage-provisioner" [32053345-1ff9-45b1-aa70-e746926b305d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0930 21:08:03.679490   73707 system_pods.go:74] duration metric: took 15.234407ms to wait for pod list to return data ...
	I0930 21:08:03.679509   73707 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:08:03.698332   73707 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:08:03.698363   73707 node_conditions.go:123] node cpu capacity is 2
	I0930 21:08:03.698374   73707 node_conditions.go:105] duration metric: took 18.857709ms to run NodePressure ...
	I0930 21:08:03.698394   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:03.968643   73707 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0930 21:08:03.974075   73707 kubeadm.go:739] kubelet initialised
	I0930 21:08:03.974098   73707 kubeadm.go:740] duration metric: took 5.424573ms waiting for restarted kubelet to initialise ...
	I0930 21:08:03.974105   73707 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:03.982157   73707 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:03.989298   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:03.989329   73707 pod_ready.go:82] duration metric: took 7.140381ms for pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:03.989338   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:03.989345   73707 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:03.995739   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:03.995773   73707 pod_ready.go:82] duration metric: took 6.418854ms for pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:03.995787   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:03.995797   73707 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.002071   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.002093   73707 pod_ready.go:82] duration metric: took 6.287919ms for pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:04.002104   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.002110   73707 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.071732   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.071760   73707 pod_ready.go:82] duration metric: took 69.643681ms for pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:04.071771   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.071777   73707 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kwp22" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.468580   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "kube-proxy-kwp22" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.468605   73707 pod_ready.go:82] duration metric: took 396.820558ms for pod "kube-proxy-kwp22" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:04.468614   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "kube-proxy-kwp22" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.468620   73707 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:04.868042   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.868067   73707 pod_ready.go:82] duration metric: took 399.438278ms for pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:04.868078   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:04.868085   73707 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:05.267893   73707 pod_ready.go:98] node "default-k8s-diff-port-291511" hosting pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:05.267925   73707 pod_ready.go:82] duration metric: took 399.831615ms for pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:05.267937   73707 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-291511" hosting pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:05.267945   73707 pod_ready.go:39] duration metric: took 1.293832472s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:05.267960   73707 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 21:08:05.282162   73707 ops.go:34] apiserver oom_adj: -16
	I0930 21:08:05.282188   73707 kubeadm.go:597] duration metric: took 9.074027172s to restartPrimaryControlPlane
	I0930 21:08:05.282199   73707 kubeadm.go:394] duration metric: took 9.131777336s to StartCluster
	I0930 21:08:05.282216   73707 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:05.282338   73707 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:08:05.283862   73707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:05.284135   73707 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 21:08:05.284201   73707 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 21:08:05.284287   73707 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-291511"
	I0930 21:08:05.284305   73707 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-291511"
	W0930 21:08:05.284313   73707 addons.go:243] addon storage-provisioner should already be in state true
	I0930 21:08:05.284340   73707 host.go:66] Checking if "default-k8s-diff-port-291511" exists ...
	I0930 21:08:05.284339   73707 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-291511"
	I0930 21:08:05.284385   73707 config.go:182] Loaded profile config "default-k8s-diff-port-291511": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:08:05.284399   73707 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-291511"
	I0930 21:08:05.284359   73707 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-291511"
	I0930 21:08:05.284432   73707 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-291511"
	W0930 21:08:05.284448   73707 addons.go:243] addon metrics-server should already be in state true
	I0930 21:08:05.284486   73707 host.go:66] Checking if "default-k8s-diff-port-291511" exists ...
	I0930 21:08:05.284739   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.284760   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.284784   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.284794   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.284890   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.284931   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.286020   73707 out.go:177] * Verifying Kubernetes components...
	I0930 21:08:05.287268   73707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:05.302045   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39289
	I0930 21:08:05.302587   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.303190   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.303219   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.303631   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.304213   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.304258   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.304484   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41129
	I0930 21:08:05.304676   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39211
	I0930 21:08:05.304884   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.305175   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.305353   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.305377   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.305642   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.305660   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.305724   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.305933   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:08:05.306016   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.306580   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.306623   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.309757   73707 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-291511"
	W0930 21:08:05.309778   73707 addons.go:243] addon default-storageclass should already be in state true
	I0930 21:08:05.309805   73707 host.go:66] Checking if "default-k8s-diff-port-291511" exists ...
	I0930 21:08:05.310163   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.310208   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.320335   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43189
	I0930 21:08:05.320928   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.321496   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.321520   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.321922   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.322082   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:08:05.324111   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:08:05.325867   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42389
	I0930 21:08:05.325879   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37397
	I0930 21:08:05.326252   73707 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0930 21:08:05.326337   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.326280   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.326847   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.326862   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.326982   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.326999   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.327239   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.327313   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.327467   73707 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 21:08:05.327485   73707 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 21:08:05.327507   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:08:05.327597   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:08:05.327778   73707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:05.327806   73707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:05.329862   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:08:05.331454   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.331654   73707 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:05.331959   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:08:05.331996   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.332184   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:08:05.332355   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:08:05.332577   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:08:05.332699   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:08:05.332956   73707 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:08:05.332972   73707 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 21:08:05.332990   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:08:05.336234   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.336634   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:08:05.336661   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.336885   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:08:05.337134   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:08:05.337271   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:08:05.337447   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:08:05.345334   73707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34613
	I0930 21:08:05.345908   73707 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:05.346393   73707 main.go:141] libmachine: Using API Version  1
	I0930 21:08:05.346424   73707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:05.346749   73707 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:05.346887   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetState
	I0930 21:08:05.348836   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .DriverName
	I0930 21:08:05.349033   73707 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 21:08:05.349048   73707 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 21:08:05.349067   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHHostname
	I0930 21:08:05.351835   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.352222   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:46:45", ip: ""} in network mk-default-k8s-diff-port-291511: {Iface:virbr3 ExpiryTime:2024-09-30 22:07:40 +0000 UTC Type:0 Mac:52:54:00:27:46:45 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:default-k8s-diff-port-291511 Clientid:01:52:54:00:27:46:45}
	I0930 21:08:05.352277   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | domain default-k8s-diff-port-291511 has defined IP address 192.168.50.2 and MAC address 52:54:00:27:46:45 in network mk-default-k8s-diff-port-291511
	I0930 21:08:05.352401   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHPort
	I0930 21:08:05.352644   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHKeyPath
	I0930 21:08:05.352786   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .GetSSHUsername
	I0930 21:08:05.352886   73707 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/default-k8s-diff-port-291511/id_rsa Username:docker}
	I0930 21:08:05.475274   73707 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:08:05.496035   73707 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-291511" to be "Ready" ...
	I0930 21:08:05.564715   73707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:08:05.574981   73707 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 21:08:05.575006   73707 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0930 21:08:05.613799   73707 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 21:08:05.613822   73707 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 21:08:05.618503   73707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 21:08:05.689563   73707 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:08:05.689588   73707 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 21:08:05.769327   73707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:08:06.831657   73707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.266911261s)
	I0930 21:08:06.831717   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.831727   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.831735   73707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.213199657s)
	I0930 21:08:06.831780   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.831797   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.832054   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.832071   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.832079   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.832086   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.832146   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.832164   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.832182   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.832195   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.832203   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.832291   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.832305   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.832316   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.832477   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.832483   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.832512   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.838509   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.838534   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.838786   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.838801   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.838806   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.956747   73707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.187373699s)
	I0930 21:08:06.956803   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.956819   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.957097   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.958516   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.958531   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.958542   73707 main.go:141] libmachine: Making call to close driver server
	I0930 21:08:06.958548   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) Calling .Close
	I0930 21:08:06.958842   73707 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:08:06.958863   73707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:08:06.958873   73707 main.go:141] libmachine: (default-k8s-diff-port-291511) DBG | Closing plugin on server side
	I0930 21:08:06.958875   73707 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-291511"
	I0930 21:08:06.961299   73707 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0930 21:08:02.965767   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:02.966135   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:08:02.966157   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:08:02.966086   74917 retry.go:31] will retry after 2.951226221s: waiting for machine to come up
	I0930 21:08:05.919389   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:05.919894   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | unable to find current IP address of domain old-k8s-version-621406 in network mk-old-k8s-version-621406
	I0930 21:08:05.919937   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | I0930 21:08:05.919827   74917 retry.go:31] will retry after 2.747969391s: waiting for machine to come up
	I0930 21:08:09.916514   73256 start.go:364] duration metric: took 52.875691449s to acquireMachinesLock for "embed-certs-256103"
	I0930 21:08:09.916583   73256 start.go:96] Skipping create...Using existing machine configuration
	I0930 21:08:09.916592   73256 fix.go:54] fixHost starting: 
	I0930 21:08:09.916972   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:08:09.917000   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:08:09.935009   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42043
	I0930 21:08:09.935493   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:08:09.936052   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:08:09.936073   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:08:09.936443   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:08:09.936617   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:09.936762   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:08:09.938608   73256 fix.go:112] recreateIfNeeded on embed-certs-256103: state=Stopped err=<nil>
	I0930 21:08:09.938639   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	W0930 21:08:09.938811   73256 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 21:08:09.940789   73256 out.go:177] * Restarting existing kvm2 VM for "embed-certs-256103" ...
	I0930 21:08:05.936626   73375 pod_ready.go:93] pod "etcd-no-preload-997816" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:05.936660   73375 pod_ready.go:82] duration metric: took 3.007747597s for pod "etcd-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:05.936674   73375 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:05.942154   73375 pod_ready.go:93] pod "kube-apiserver-no-preload-997816" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:05.942196   73375 pod_ready.go:82] duration metric: took 5.502965ms for pod "kube-apiserver-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:05.942209   73375 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.949366   73375 pod_ready.go:93] pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:06.949402   73375 pod_ready.go:82] duration metric: took 1.007183809s for pod "kube-controller-manager-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.949413   73375 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-klcv8" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.955060   73375 pod_ready.go:93] pod "kube-proxy-klcv8" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:06.955088   73375 pod_ready.go:82] duration metric: took 5.667172ms for pod "kube-proxy-klcv8" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.955100   73375 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.961684   73375 pod_ready.go:93] pod "kube-scheduler-no-preload-997816" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:06.961706   73375 pod_ready.go:82] duration metric: took 6.597856ms for pod "kube-scheduler-no-preload-997816" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:06.961718   73375 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:08.967525   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:06.962594   73707 addons.go:510] duration metric: took 1.678396512s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0930 21:08:07.499805   73707 node_ready.go:53] node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:09.500771   73707 node_ready.go:53] node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:08.671179   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.671686   73900 main.go:141] libmachine: (old-k8s-version-621406) Found IP for machine: 192.168.72.159
	I0930 21:08:08.671711   73900 main.go:141] libmachine: (old-k8s-version-621406) Reserving static IP address...
	I0930 21:08:08.671729   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has current primary IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.672178   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "old-k8s-version-621406", mac: "52:54:00:9b:e3:ab", ip: "192.168.72.159"} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.672220   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | skip adding static IP to network mk-old-k8s-version-621406 - found existing host DHCP lease matching {name: "old-k8s-version-621406", mac: "52:54:00:9b:e3:ab", ip: "192.168.72.159"}
	I0930 21:08:08.672231   73900 main.go:141] libmachine: (old-k8s-version-621406) Reserved static IP address: 192.168.72.159
	I0930 21:08:08.672246   73900 main.go:141] libmachine: (old-k8s-version-621406) Waiting for SSH to be available...
	I0930 21:08:08.672254   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | Getting to WaitForSSH function...
	I0930 21:08:08.674566   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.674931   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.674969   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.675128   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | Using SSH client type: external
	I0930 21:08:08.675170   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa (-rw-------)
	I0930 21:08:08.675212   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:08:08.675229   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | About to run SSH command:
	I0930 21:08:08.675244   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | exit 0
	I0930 21:08:08.799368   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | SSH cmd err, output: <nil>: 
	I0930 21:08:08.799751   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetConfigRaw
	I0930 21:08:08.800421   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:08.803151   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.803596   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.803620   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.803922   73900 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/config.json ...
	I0930 21:08:08.804195   73900 machine.go:93] provisionDockerMachine start ...
	I0930 21:08:08.804246   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:08.804502   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:08.806822   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.807240   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.807284   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.807521   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:08.807735   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.807890   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.808077   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:08.808239   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:08.808480   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:08.808493   73900 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:08:08.912058   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:08:08.912135   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 21:08:08.912407   73900 buildroot.go:166] provisioning hostname "old-k8s-version-621406"
	I0930 21:08:08.912432   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 21:08:08.912662   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:08.915366   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.915722   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:08.915750   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:08.915892   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:08.916107   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.916330   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:08.916492   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:08.916673   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:08.916932   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:08.916957   73900 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-621406 && echo "old-k8s-version-621406" | sudo tee /etc/hostname
	I0930 21:08:09.034260   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-621406
	
	I0930 21:08:09.034296   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.037149   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.037509   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.037538   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.037799   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.037986   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.038163   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.038327   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.038473   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:09.038695   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:09.038714   73900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-621406' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-621406/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-621406' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:08:09.152190   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:08:09.152228   73900 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:08:09.152255   73900 buildroot.go:174] setting up certificates
	I0930 21:08:09.152275   73900 provision.go:84] configureAuth start
	I0930 21:08:09.152288   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetMachineName
	I0930 21:08:09.152577   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:09.155203   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.155589   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.155620   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.155783   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.157964   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.158362   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.158392   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.158520   73900 provision.go:143] copyHostCerts
	I0930 21:08:09.158592   73900 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:08:09.158605   73900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:08:09.158704   73900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:08:09.158851   73900 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:08:09.158864   73900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:08:09.158895   73900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:08:09.158970   73900 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:08:09.158977   73900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:08:09.158996   73900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:08:09.159054   73900 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-621406 san=[127.0.0.1 192.168.72.159 localhost minikube old-k8s-version-621406]
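The provisioner above issues a server certificate whose SANs cover the loopback address, the VM IP, and the machine's hostnames. For reference, a minimal, self-contained sketch of producing such a certificate with Go's standard crypto/x509 package is shown below; it is self-signed for brevity, the file names and SAN values are only illustrative (taken from the log line above), and it is not minikube's actual provision.go code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// Template with the same kind of SANs the log shows
	// (loopback, VM IP, hostnames). Values are illustrative.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-621406"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors the CertExpiration seen in the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.159")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-621406"},
	}

	// Self-signed for brevity; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}

	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
}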
	I0930 21:08:09.301267   73900 provision.go:177] copyRemoteCerts
	I0930 21:08:09.301322   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:08:09.301349   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.304344   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.304766   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.304796   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.304998   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.305187   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.305321   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.305439   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:09.390851   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0930 21:08:09.415712   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 21:08:09.439567   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:08:09.463427   73900 provision.go:87] duration metric: took 311.139024ms to configureAuth
	I0930 21:08:09.463459   73900 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:08:09.463713   73900 config.go:182] Loaded profile config "old-k8s-version-621406": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0930 21:08:09.463809   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.466757   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.467129   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.467160   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.467326   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.467513   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.467694   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.467843   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.468004   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:09.468175   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:09.468190   73900 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:08:09.684657   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:08:09.684684   73900 machine.go:96] duration metric: took 880.473418ms to provisionDockerMachine
	I0930 21:08:09.684698   73900 start.go:293] postStartSetup for "old-k8s-version-621406" (driver="kvm2")
	I0930 21:08:09.684709   73900 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:08:09.684730   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.685075   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:08:09.685114   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.688051   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.688517   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.688542   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.688725   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.688928   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.689070   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.689265   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:09.770572   73900 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:08:09.775149   73900 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:08:09.775181   73900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:08:09.775268   73900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:08:09.775364   73900 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:08:09.775453   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:08:09.784753   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:09.807989   73900 start.go:296] duration metric: took 123.276522ms for postStartSetup
	I0930 21:08:09.808033   73900 fix.go:56] duration metric: took 19.918922935s for fixHost
	I0930 21:08:09.808053   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.811242   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.811656   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.811692   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.811852   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.812064   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.812239   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.812380   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.812522   73900 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:09.812704   73900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0930 21:08:09.812719   73900 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:08:09.916349   73900 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730489.889323893
	
	I0930 21:08:09.916376   73900 fix.go:216] guest clock: 1727730489.889323893
	I0930 21:08:09.916384   73900 fix.go:229] Guest: 2024-09-30 21:08:09.889323893 +0000 UTC Remote: 2024-09-30 21:08:09.808037625 +0000 UTC m=+267.093327666 (delta=81.286268ms)
	I0930 21:08:09.916403   73900 fix.go:200] guest clock delta is within tolerance: 81.286268ms
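The fix step above reads the guest clock over SSH with "date +%s.%N" and compares it with the host clock, only resynchronizing when the delta exceeds a tolerance. A small stand-alone sketch of that comparison using only the standard library; the 2s tolerance is an assumption for illustration, not minikube's actual threshold, and the sample values are the ones from the log lines above (delta ≈ 81ms).

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the output of `date +%s.%N` from the guest and
// returns how far the guest clock is from the given host time.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	// Guest and host timestamps copied from the log lines above.
	d, err := clockDelta("1727730489.889323893", time.Unix(1727730489, 808037625))
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed threshold, not minikube's actual value
	fmt.Printf("delta=%v within tolerance=%v\n", d, math.Abs(float64(d)) <= float64(tolerance))
}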
	I0930 21:08:09.916408   73900 start.go:83] releasing machines lock for "old-k8s-version-621406", held for 20.027328296s
	I0930 21:08:09.916440   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.916766   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:09.919729   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.920070   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.920105   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.920238   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.920831   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.921050   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .DriverName
	I0930 21:08:09.921182   73900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:08:09.921235   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.921328   73900 ssh_runner.go:195] Run: cat /version.json
	I0930 21:08:09.921351   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHHostname
	I0930 21:08:09.924258   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.924650   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.924695   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.924722   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.924805   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.924986   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.925170   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:09.925176   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.925206   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:09.925341   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:09.925405   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHPort
	I0930 21:08:09.925534   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHKeyPath
	I0930 21:08:09.925698   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetSSHUsername
	I0930 21:08:09.925829   73900 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/old-k8s-version-621406/id_rsa Username:docker}
	I0930 21:08:10.043500   73900 ssh_runner.go:195] Run: systemctl --version
	I0930 21:08:10.051029   73900 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:08:10.199844   73900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:08:10.206433   73900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:08:10.206519   73900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:08:10.223346   73900 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 21:08:10.223375   73900 start.go:495] detecting cgroup driver to use...
	I0930 21:08:10.223449   73900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:08:10.241056   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:08:10.257197   73900 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:08:10.257261   73900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:08:10.271847   73900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:08:10.287465   73900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:08:10.419248   73900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:08:10.583440   73900 docker.go:233] disabling docker service ...
	I0930 21:08:10.583518   73900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:08:10.599561   73900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:08:10.613321   73900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:08:10.763071   73900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:08:10.891222   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:08:10.906985   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:08:10.927838   73900 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0930 21:08:10.927911   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.940002   73900 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:08:10.940084   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.953143   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:10.965922   73900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
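The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image, switch CRI-O to the cgroupfs cgroup manager, and re-add conmon_cgroup. A rough local equivalent of the two substitution cases using Go's regexp package; paths and values are copied from the log, and this is only a sketch, not minikube's crio.go.

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}

	// Same effect as the sed expressions in the log: replace whatever
	// pause_image / cgroup_manager lines exist with the desired values.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(conf, data, 0644); err != nil {
		panic(err)
	}
}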
	I0930 21:08:10.985782   73900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:08:11.001825   73900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:08:11.015777   73900 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:08:11.015835   73900 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:08:11.034821   73900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 21:08:11.049855   73900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:11.203755   73900 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 21:08:11.312949   73900 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:08:11.313060   73900 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:08:11.319280   73900 start.go:563] Will wait 60s for crictl version
	I0930 21:08:11.319355   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:11.323826   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:08:11.374934   73900 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 21:08:11.375023   73900 ssh_runner.go:195] Run: crio --version
	I0930 21:08:11.415466   73900 ssh_runner.go:195] Run: crio --version
	I0930 21:08:11.449622   73900 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0930 21:08:11.450773   73900 main.go:141] libmachine: (old-k8s-version-621406) Calling .GetIP
	I0930 21:08:11.454019   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:11.454504   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:e3:ab", ip: ""} in network mk-old-k8s-version-621406: {Iface:virbr4 ExpiryTime:2024-09-30 22:08:01 +0000 UTC Type:0 Mac:52:54:00:9b:e3:ab Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:old-k8s-version-621406 Clientid:01:52:54:00:9b:e3:ab}
	I0930 21:08:11.454534   73900 main.go:141] libmachine: (old-k8s-version-621406) DBG | domain old-k8s-version-621406 has defined IP address 192.168.72.159 and MAC address 52:54:00:9b:e3:ab in network mk-old-k8s-version-621406
	I0930 21:08:11.454807   73900 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0930 21:08:11.459034   73900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:08:11.473162   73900 kubeadm.go:883] updating cluster {Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:08:11.473294   73900 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 21:08:11.473367   73900 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:11.518200   73900 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0930 21:08:11.518275   73900 ssh_runner.go:195] Run: which lz4
	I0930 21:08:11.522442   73900 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 21:08:11.526704   73900 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 21:08:11.526752   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0930 21:08:09.942356   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Start
	I0930 21:08:09.942591   73256 main.go:141] libmachine: (embed-certs-256103) Ensuring networks are active...
	I0930 21:08:09.943619   73256 main.go:141] libmachine: (embed-certs-256103) Ensuring network default is active
	I0930 21:08:09.944145   73256 main.go:141] libmachine: (embed-certs-256103) Ensuring network mk-embed-certs-256103 is active
	I0930 21:08:09.944659   73256 main.go:141] libmachine: (embed-certs-256103) Getting domain xml...
	I0930 21:08:09.945567   73256 main.go:141] libmachine: (embed-certs-256103) Creating domain...
	I0930 21:08:11.376075   73256 main.go:141] libmachine: (embed-certs-256103) Waiting to get IP...
	I0930 21:08:11.377049   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:11.377588   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:11.377687   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:11.377579   75193 retry.go:31] will retry after 219.057799ms: waiting for machine to come up
	I0930 21:08:11.598062   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:11.598531   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:11.598568   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:11.598491   75193 retry.go:31] will retry after 288.150233ms: waiting for machine to come up
	I0930 21:08:11.887894   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:11.888719   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:11.888749   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:11.888678   75193 retry.go:31] will retry after 422.70153ms: waiting for machine to come up
	I0930 21:08:12.313280   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:12.313761   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:12.313790   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:12.313728   75193 retry.go:31] will retry after 403.507934ms: waiting for machine to come up
	I0930 21:08:12.719305   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:12.719705   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:12.719740   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:12.719683   75193 retry.go:31] will retry after 616.261723ms: waiting for machine to come up
	I0930 21:08:13.337223   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:13.337759   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:13.337809   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:13.337727   75193 retry.go:31] will retry after 715.496762ms: waiting for machine to come up
	I0930 21:08:14.054455   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:14.055118   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:14.055155   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:14.055041   75193 retry.go:31] will retry after 1.12512788s: waiting for machine to come up
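While the embed-certs-256103 VM boots, retry.go keeps polling for an IP address with growing, jittered delays (219ms, 288ms, 422ms, ... 1.12s above). A simplified stand-alone sketch of that retry pattern; the backoff constants and jitter formula are made up for illustration, and this is not minikube's retry package.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or attempts are exhausted, sleeping
// an exponentially growing, jittered delay between tries.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// base * 2^i, plus up to 50% random jitter (illustrative values).
		delay := base << uint(i)
		delay += time.Duration(rand.Int63n(int64(delay)/2 + 1))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	tries := 0
	err := retry(8, 200*time.Millisecond, func() error {
		tries++
		if tries < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil // pretend the VM finally reported an IP
	})
	fmt.Println("done:", err)
}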
	I0930 21:08:10.970621   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:13.468795   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:11.501276   73707 node_ready.go:53] node "default-k8s-diff-port-291511" has status "Ready":"False"
	I0930 21:08:12.501748   73707 node_ready.go:49] node "default-k8s-diff-port-291511" has status "Ready":"True"
	I0930 21:08:12.501784   73707 node_ready.go:38] duration metric: took 7.005705696s for node "default-k8s-diff-port-291511" to be "Ready" ...
	I0930 21:08:12.501797   73707 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:12.510080   73707 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:12.518496   73707 pod_ready.go:93] pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:12.518522   73707 pod_ready.go:82] duration metric: took 8.414761ms for pod "coredns-7c65d6cfc9-hdjjq" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:12.518535   73707 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:14.526615   73707 pod_ready.go:93] pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:14.526653   73707 pod_ready.go:82] duration metric: took 2.00810944s for pod "etcd-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:14.526666   73707 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:14.533536   73707 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:14.533574   73707 pod_ready.go:82] duration metric: took 6.898769ms for pod "kube-apiserver-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:14.533596   73707 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.043003   73707 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:15.043034   73707 pod_ready.go:82] duration metric: took 509.429109ms for pod "kube-controller-manager-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.043048   73707 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kwp22" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.049645   73707 pod_ready.go:93] pod "kube-proxy-kwp22" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:15.049676   73707 pod_ready.go:82] duration metric: took 6.618441ms for pod "kube-proxy-kwp22" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.049688   73707 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
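pod_ready.go above repeatedly checks each system-critical pod's Ready condition under a 6m0s budget. A generic, dependency-free sketch of that wait loop follows; checkReady is a hypothetical stand-in for the real API-server query, and this is not minikube's pod_ready.go.

package main

import (
	"context"
	"fmt"
	"time"
)

// waitReady polls check every interval until it reports true or the
// context deadline (the "extra waiting up to 6m0s" budget) expires.
func waitReady(ctx context.Context, interval time.Duration, check func() (bool, error)) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		ready, err := check()
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for pod to be Ready: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	start := time.Now()
	// checkReady is a placeholder for querying the pod's Ready condition.
	checkReady := func() (bool, error) { return time.Since(start) > 2*time.Second, nil }

	if err := waitReady(ctx, 500*time.Millisecond, checkReady); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}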
	I0930 21:08:13.134916   73900 crio.go:462] duration metric: took 1.612498859s to copy over tarball
	I0930 21:08:13.135038   73900 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 21:08:16.170053   73900 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.034985922s)
	I0930 21:08:16.170080   73900 crio.go:469] duration metric: took 3.035125251s to extract the tarball
	I0930 21:08:16.170088   73900 ssh_runner.go:146] rm: /preloaded.tar.lz4
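The preload path above copies a ~473 MB .tar.lz4 onto the VM, extracts it into /var with tar and lz4, then deletes it. A minimal sketch of driving the same extraction with os/exec (it assumes tar, lz4 and sudo are available on the target; this is not minikube's ssh_runner).

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks a cri-o preload tarball into dest, preserving
// extended attributes, mirroring the command shown in the log.
func extractPreload(tarball, dest string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("tar failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		panic(err)
	}
}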
	I0930 21:08:16.213559   73900 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:16.249853   73900 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0930 21:08:16.249876   73900 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0930 21:08:16.249943   73900 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:16.249970   73900 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.249987   73900 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.250030   73900 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0930 21:08:16.250031   73900 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.250047   73900 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.250049   73900 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.250083   73900 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.251750   73900 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0930 21:08:16.251771   73900 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.251768   73900 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:16.251750   73900 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.251832   73900 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.251854   73900 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.251891   73900 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.252031   73900 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.456847   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.468006   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0930 21:08:16.516253   73900 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0930 21:08:16.516294   73900 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.516336   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.524699   73900 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0930 21:08:16.524743   73900 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0930 21:08:16.524787   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.525738   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.529669   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 21:08:16.561946   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.569090   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.570589   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.571007   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.581971   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.587609   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.630323   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 21:08:16.711058   73900 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0930 21:08:16.711124   73900 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.711190   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.749473   73900 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0930 21:08:16.749521   73900 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.749585   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.769974   73900 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0930 21:08:16.770016   73900 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.770050   73900 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0930 21:08:16.770075   73900 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0930 21:08:16.770087   73900 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.770104   73900 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.770142   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.770160   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 21:08:16.770064   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.770144   73900 ssh_runner.go:195] Run: which crictl
	I0930 21:08:16.788241   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.788292   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 21:08:16.788294   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.788339   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.847727   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0930 21:08:16.847798   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.847894   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 21:08:16.938964   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:16.939000   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:16.939053   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0930 21:08:16.939090   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:16.965556   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:16.965620   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 21:08:17.020497   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 21:08:17.074893   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 21:08:17.074950   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 21:08:17.090437   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 21:08:17.090489   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0930 21:08:17.090437   73900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 21:08:17.174117   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0930 21:08:17.174183   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0930 21:08:17.185553   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0930 21:08:17.185619   73900 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0930 21:08:17.506064   73900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:08:17.650598   73900 cache_images.go:92] duration metric: took 1.400704992s to LoadCachedImages
	W0930 21:08:17.650695   73900 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19736-7672/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
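In the cache_images flow above, an image "needs transfer" when `podman image inspect` on the VM does not return the expected ID; the stale tag is then removed with crictl and the image is loaded from the local cache directory, which fails here because kube-proxy_v1.20.0 was never cached. A hedged sketch of the presence check only, shelling out to podman the way the log does; it is a simplification, not minikube's cache_images.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether image is missing from the container
// runtime or present under a different ID than expected.
func needsTransfer(image, expectedID string) (bool, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		// podman exits non-zero when the image does not exist locally;
		// other failure modes are ignored in this sketch.
		return true, nil
	}
	return strings.TrimSpace(string(out)) != expectedID, nil
}

func main() {
	// Image name and expected hash copied from the log lines above.
	ok, err := needsTransfer("registry.k8s.io/kube-proxy:v1.20.0",
		"10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc")
	if err != nil {
		panic(err)
	}
	fmt.Println("needs transfer:", ok)
}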
	I0930 21:08:17.650710   73900 kubeadm.go:934] updating node { 192.168.72.159 8443 v1.20.0 crio true true} ...
	I0930 21:08:17.650834   73900 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-621406 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 21:08:17.650922   73900 ssh_runner.go:195] Run: crio config
	I0930 21:08:17.710096   73900 cni.go:84] Creating CNI manager for ""
	I0930 21:08:17.710124   73900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:17.710139   73900 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:08:17.710164   73900 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.159 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-621406 NodeName:old-k8s-version-621406 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0930 21:08:17.710349   73900 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-621406"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 21:08:17.710425   73900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0930 21:08:17.721028   73900 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:08:17.721111   73900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:08:17.731462   73900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0930 21:08:17.749715   73900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:08:15.182186   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:15.182722   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:15.182751   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:15.182673   75193 retry.go:31] will retry after 1.385891549s: waiting for machine to come up
	I0930 21:08:16.569882   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:16.570365   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:16.570386   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:16.570309   75193 retry.go:31] will retry after 1.417579481s: waiting for machine to come up
	I0930 21:08:17.989161   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:17.989876   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:17.989905   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:17.989818   75193 retry.go:31] will retry after 1.981651916s: waiting for machine to come up
	I0930 21:08:15.471221   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:17.969140   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:19.969688   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:15.300639   73707 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:15.300666   73707 pod_ready.go:82] duration metric: took 250.968899ms for pod "kube-scheduler-default-k8s-diff-port-291511" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:15.300679   73707 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:17.349449   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:19.809813   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:17.767565   73900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0930 21:08:17.786411   73900 ssh_runner.go:195] Run: grep 192.168.72.159	control-plane.minikube.internal$ /etc/hosts
	I0930 21:08:17.790338   73900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:08:17.803957   73900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:17.948898   73900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:08:17.969102   73900 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406 for IP: 192.168.72.159
	I0930 21:08:17.969133   73900 certs.go:194] generating shared ca certs ...
	I0930 21:08:17.969150   73900 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:17.969338   73900 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:08:17.969387   73900 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:08:17.969400   73900 certs.go:256] generating profile certs ...
	I0930 21:08:17.969543   73900 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/client.key
	I0930 21:08:17.969621   73900 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.key.f3dc5056
	I0930 21:08:17.969674   73900 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.key
	I0930 21:08:17.969833   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:08:17.969875   73900 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:08:17.969886   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:08:17.969926   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:08:17.969961   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:08:17.969999   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:08:17.970055   73900 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:17.970794   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:08:18.007954   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:08:18.041538   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:08:18.077886   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:08:18.118644   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0930 21:08:18.151418   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 21:08:18.199572   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:08:18.235795   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/old-k8s-version-621406/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 21:08:18.272729   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:08:18.298727   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:08:18.324074   73900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:08:18.351209   73900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:08:18.372245   73900 ssh_runner.go:195] Run: openssl version
	I0930 21:08:18.380047   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:08:18.395332   73900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:08:18.401407   73900 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:08:18.401479   73900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:08:18.407744   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 21:08:18.422801   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:08:18.437946   73900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:08:18.443864   73900 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:08:18.443938   73900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:08:18.451554   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:08:18.466856   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:08:18.479324   73900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:18.484321   73900 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:18.484383   73900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:18.490341   73900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 21:08:18.503117   73900 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:08:18.507986   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:08:18.514974   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:08:18.522140   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:08:18.529366   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:08:18.536056   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:08:18.542787   73900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0930 21:08:18.550311   73900 kubeadm.go:392] StartCluster: {Name:old-k8s-version-621406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-621406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:08:18.550431   73900 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:08:18.550498   73900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:18.593041   73900 cri.go:89] found id: ""
	I0930 21:08:18.593116   73900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:08:18.603410   73900 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:08:18.603432   73900 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:08:18.603479   73900 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:08:18.614635   73900 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:08:18.615758   73900 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-621406" does not appear in /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:08:18.616488   73900 kubeconfig.go:62] /home/jenkins/minikube-integration/19736-7672/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-621406" cluster setting kubeconfig missing "old-k8s-version-621406" context setting]
	I0930 21:08:18.617394   73900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:18.644144   73900 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:08:18.655764   73900 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.159
	I0930 21:08:18.655806   73900 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:08:18.655819   73900 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:08:18.655877   73900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:18.699283   73900 cri.go:89] found id: ""
	I0930 21:08:18.699376   73900 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:08:18.715248   73900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:08:18.724905   73900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:08:18.724945   73900 kubeadm.go:157] found existing configuration files:
	
	I0930 21:08:18.724990   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:08:18.735611   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:08:18.735682   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:08:18.745604   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:08:18.755199   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:08:18.755261   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:08:18.765450   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:08:18.775187   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:08:18.775268   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:08:18.788080   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:08:18.800668   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:08:18.800727   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:08:18.814084   73900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:08:18.823785   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:18.961698   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.495418   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.713653   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.812667   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:19.921314   73900 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:08:19.921414   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:20.422349   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:20.922222   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:21.422364   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:21.921493   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:22.421640   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:19.973478   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:19.973916   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:19.973946   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:19.973868   75193 retry.go:31] will retry after 2.33355272s: waiting for machine to come up
	I0930 21:08:22.308828   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:22.309471   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:22.309498   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:22.309367   75193 retry.go:31] will retry after 3.484225075s: waiting for machine to come up
	I0930 21:08:21.970954   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:24.467778   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:22.310464   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:24.806425   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:22.922418   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:23.421851   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:23.921502   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:24.422346   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:24.922000   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:25.422290   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:25.922213   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:26.422100   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:26.922239   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:27.421729   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:25.795265   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:25.795755   73256 main.go:141] libmachine: (embed-certs-256103) DBG | unable to find current IP address of domain embed-certs-256103 in network mk-embed-certs-256103
	I0930 21:08:25.795781   73256 main.go:141] libmachine: (embed-certs-256103) DBG | I0930 21:08:25.795707   75193 retry.go:31] will retry after 2.983975719s: waiting for machine to come up
	I0930 21:08:28.780767   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.781201   73256 main.go:141] libmachine: (embed-certs-256103) Found IP for machine: 192.168.39.90
	I0930 21:08:28.781223   73256 main.go:141] libmachine: (embed-certs-256103) Reserving static IP address...
	I0930 21:08:28.781237   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has current primary IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.781655   73256 main.go:141] libmachine: (embed-certs-256103) Reserved static IP address: 192.168.39.90
	I0930 21:08:28.781679   73256 main.go:141] libmachine: (embed-certs-256103) Waiting for SSH to be available...
	I0930 21:08:28.781697   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "embed-certs-256103", mac: "52:54:00:7a:01:01", ip: "192.168.39.90"} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:28.781724   73256 main.go:141] libmachine: (embed-certs-256103) DBG | skip adding static IP to network mk-embed-certs-256103 - found existing host DHCP lease matching {name: "embed-certs-256103", mac: "52:54:00:7a:01:01", ip: "192.168.39.90"}
	I0930 21:08:28.781735   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Getting to WaitForSSH function...
	I0930 21:08:28.784310   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.784703   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:28.784737   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.784861   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Using SSH client type: external
	I0930 21:08:28.784899   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa (-rw-------)
	I0930 21:08:28.784933   73256 main.go:141] libmachine: (embed-certs-256103) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 21:08:28.784953   73256 main.go:141] libmachine: (embed-certs-256103) DBG | About to run SSH command:
	I0930 21:08:28.784970   73256 main.go:141] libmachine: (embed-certs-256103) DBG | exit 0
	I0930 21:08:28.911300   73256 main.go:141] libmachine: (embed-certs-256103) DBG | SSH cmd err, output: <nil>: 
	I0930 21:08:28.911716   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetConfigRaw
	I0930 21:08:28.912335   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetIP
	I0930 21:08:28.914861   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.915283   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:28.915304   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.915620   73256 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/config.json ...
	I0930 21:08:28.915874   73256 machine.go:93] provisionDockerMachine start ...
	I0930 21:08:28.915902   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:28.916117   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:28.918357   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.918661   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:28.918696   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:28.918813   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:28.918992   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:28.919143   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:28.919296   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:28.919472   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:28.919680   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:28.919691   73256 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 21:08:29.032537   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 21:08:29.032579   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:08:29.032830   73256 buildroot.go:166] provisioning hostname "embed-certs-256103"
	I0930 21:08:29.032857   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:08:29.033039   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.035951   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.036403   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.036435   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.036598   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:29.036795   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.037002   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.037175   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:29.037339   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:29.037538   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:29.037556   73256 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-256103 && echo "embed-certs-256103" | sudo tee /etc/hostname
	I0930 21:08:29.163250   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-256103
	
	I0930 21:08:29.163278   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.165937   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.166260   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.166296   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.166529   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:29.166722   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.166913   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.167055   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:29.167223   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:29.167454   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:29.167477   73256 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-256103' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-256103/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-256103' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 21:08:29.288197   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 21:08:29.288236   73256 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-7672/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-7672/.minikube}
	I0930 21:08:29.288292   73256 buildroot.go:174] setting up certificates
	I0930 21:08:29.288307   73256 provision.go:84] configureAuth start
	I0930 21:08:29.288322   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetMachineName
	I0930 21:08:29.288589   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetIP
	I0930 21:08:29.291598   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.292026   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.292059   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.292247   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.294760   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.295144   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.295169   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.295421   73256 provision.go:143] copyHostCerts
	I0930 21:08:29.295497   73256 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem, removing ...
	I0930 21:08:29.295510   73256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem
	I0930 21:08:29.295614   73256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/key.pem (1675 bytes)
	I0930 21:08:29.295743   73256 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem, removing ...
	I0930 21:08:29.295754   73256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem
	I0930 21:08:29.295782   73256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/ca.pem (1082 bytes)
	I0930 21:08:29.295855   73256 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem, removing ...
	I0930 21:08:29.295864   73256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem
	I0930 21:08:29.295886   73256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-7672/.minikube/cert.pem (1123 bytes)
	I0930 21:08:29.295948   73256 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem org=jenkins.embed-certs-256103 san=[127.0.0.1 192.168.39.90 embed-certs-256103 localhost minikube]
	I0930 21:08:26.468058   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:28.468510   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:26.808360   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:29.307500   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:29.742069   73256 provision.go:177] copyRemoteCerts
	I0930 21:08:29.742134   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 21:08:29.742156   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.745411   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.745805   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.745835   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.746023   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:29.746215   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.746351   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:29.746557   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:08:29.833888   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 21:08:29.857756   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0930 21:08:29.883087   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 21:08:29.905795   73256 provision.go:87] duration metric: took 617.470984ms to configureAuth
	I0930 21:08:29.905831   73256 buildroot.go:189] setting minikube options for container-runtime
	I0930 21:08:29.906028   73256 config.go:182] Loaded profile config "embed-certs-256103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:08:29.906098   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:29.908911   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.909307   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:29.909335   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:29.909524   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:29.909711   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.909876   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:29.909996   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:29.910157   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:29.910429   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:29.910454   73256 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 21:08:30.140191   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 21:08:30.140217   73256 machine.go:96] duration metric: took 1.224326296s to provisionDockerMachine
	I0930 21:08:30.140227   73256 start.go:293] postStartSetup for "embed-certs-256103" (driver="kvm2")
	I0930 21:08:30.140237   73256 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 21:08:30.140252   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.140624   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 21:08:30.140648   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:30.143906   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.144300   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.144339   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.144498   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:30.144695   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.144846   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:30.145052   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:08:30.230069   73256 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 21:08:30.233845   73256 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 21:08:30.233868   73256 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/addons for local assets ...
	I0930 21:08:30.233948   73256 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-7672/.minikube/files for local assets ...
	I0930 21:08:30.234050   73256 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem -> 148752.pem in /etc/ssl/certs
	I0930 21:08:30.234168   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 21:08:30.243066   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:30.266197   73256 start.go:296] duration metric: took 125.955153ms for postStartSetup
	I0930 21:08:30.266234   73256 fix.go:56] duration metric: took 20.349643145s for fixHost
	I0930 21:08:30.266252   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:30.269025   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.269405   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.269433   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.269576   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:30.269784   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.269910   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.270042   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:30.270176   73256 main.go:141] libmachine: Using SSH client type: native
	I0930 21:08:30.270380   73256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0930 21:08:30.270392   73256 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 21:08:30.380023   73256 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727730510.354607586
	
	I0930 21:08:30.380057   73256 fix.go:216] guest clock: 1727730510.354607586
	I0930 21:08:30.380067   73256 fix.go:229] Guest: 2024-09-30 21:08:30.354607586 +0000 UTC Remote: 2024-09-30 21:08:30.266237543 +0000 UTC m=+355.815232104 (delta=88.370043ms)
	I0930 21:08:30.380085   73256 fix.go:200] guest clock delta is within tolerance: 88.370043ms
	I0930 21:08:30.380091   73256 start.go:83] releasing machines lock for "embed-certs-256103", held for 20.463544222s
	I0930 21:08:30.380113   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.380429   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetIP
	I0930 21:08:30.382992   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.383349   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.383369   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.383518   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.384071   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.384245   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:08:30.384310   73256 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 21:08:30.384374   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:30.384442   73256 ssh_runner.go:195] Run: cat /version.json
	I0930 21:08:30.384464   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:08:30.387098   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.387342   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.387413   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.387435   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.387633   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:30.387762   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:30.387783   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:30.387828   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.387931   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:08:30.388003   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:30.388058   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:08:30.388159   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:08:30.388208   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:08:30.388347   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:08:30.510981   73256 ssh_runner.go:195] Run: systemctl --version
	I0930 21:08:30.517215   73256 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 21:08:30.663491   73256 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 21:08:30.669568   73256 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 21:08:30.669652   73256 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 21:08:30.686640   73256 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 21:08:30.686663   73256 start.go:495] detecting cgroup driver to use...
	I0930 21:08:30.686737   73256 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 21:08:30.703718   73256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 21:08:30.718743   73256 docker.go:217] disabling cri-docker service (if available) ...
	I0930 21:08:30.718807   73256 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 21:08:30.733695   73256 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 21:08:30.748690   73256 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 21:08:30.878084   73256 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 21:08:31.040955   73256 docker.go:233] disabling docker service ...
	I0930 21:08:31.041030   73256 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 21:08:31.055212   73256 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 21:08:31.067968   73256 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 21:08:31.185043   73256 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 21:08:31.300909   73256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 21:08:31.315167   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 21:08:31.333483   73256 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 21:08:31.333537   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.343599   73256 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 21:08:31.343694   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.353739   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.363993   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.375183   73256 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 21:08:31.385478   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.395632   73256 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.412995   73256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 21:08:31.423277   73256 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 21:08:31.433183   73256 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 21:08:31.433253   73256 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 21:08:31.446796   73256 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 21:08:31.456912   73256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:31.571729   73256 ssh_runner.go:195] Run: sudo systemctl restart crio
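
The commands above prepare CRI-O before it is restarted: crictl is pointed at unix:///var/run/crio/crio.sock, and the drop-in /etc/crio/crio.conf.d/02-crio.conf is rewritten in place to pin the pause image to registry.k8s.io/pause:3.10 and switch the cgroup manager to cgroupfs. The following is a minimal Go sketch of that drop-in rewrite, assuming direct file access on the guest (minikube itself applies the sed commands over SSH through ssh_runner); the function name is illustrative, not minikube's API.

    // configureCrio rewrites the CRI-O drop-in the same way the sed commands in
    // the log do: replace the pause_image and cgroup_manager lines in place.
    // Sketch only: assumes local root access instead of minikube's ssh_runner.
    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func configureCrio(path, pauseImage, cgroupManager string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        if err := configureCrio("/etc/crio/crio.conf.d/02-crio.conf",
            "registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
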
	I0930 21:08:31.663944   73256 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 21:08:31.664019   73256 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 21:08:31.669128   73256 start.go:563] Will wait 60s for crictl version
	I0930 21:08:31.669191   73256 ssh_runner.go:195] Run: which crictl
	I0930 21:08:31.672922   73256 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 21:08:31.709488   73256 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 21:08:31.709596   73256 ssh_runner.go:195] Run: crio --version
	I0930 21:08:31.738743   73256 ssh_runner.go:195] Run: crio --version
	I0930 21:08:31.771638   73256 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 21:08:27.922374   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:28.421993   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:28.921870   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:29.421786   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:29.921804   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:30.421482   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:30.921969   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:31.422241   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:31.922148   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:32.421504   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:31.773186   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetIP
	I0930 21:08:31.776392   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:31.776770   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:08:31.776810   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:08:31.777016   73256 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 21:08:31.781212   73256 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:08:31.793839   73256 kubeadm.go:883] updating cluster {Name:embed-certs-256103 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-256103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 21:08:31.793957   73256 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 21:08:31.794015   73256 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:31.834036   73256 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 21:08:31.834094   73256 ssh_runner.go:195] Run: which lz4
	I0930 21:08:31.837877   73256 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 21:08:31.842038   73256 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 21:08:31.842073   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 21:08:33.150975   73256 crio.go:462] duration metric: took 1.313131374s to copy over tarball
	I0930 21:08:33.151080   73256 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 21:08:30.469523   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:32.469562   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:34.969818   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:31.307560   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:33.308130   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:32.921516   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:33.421576   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:33.922082   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:34.421599   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:34.922178   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:35.422199   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:35.922061   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:36.421860   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:36.921513   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:37.422162   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:35.294750   73256 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.143629494s)
	I0930 21:08:35.294785   73256 crio.go:469] duration metric: took 2.143777794s to extract the tarball
	I0930 21:08:35.294794   73256 ssh_runner.go:146] rm: /preloaded.tar.lz4
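
No preloaded images were present in the CRI-O store, so the tarball preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 is copied to the guest and unpacked into /var, after which crictl images reports everything as already available. A rough Go equivalent of the extraction step, shelling out to tar with the lz4 filter as the log command does (assumes tar, lz4 and sudo are available on the target):

    // extractPreload unpacks a minikube preload tarball into dest, mirroring the
    // "sudo tar --xattrs -I lz4 -C /var -xf /preloaded.tar.lz4" command above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func extractPreload(tarball, dest string) error {
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", dest, "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
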
	I0930 21:08:35.340151   73256 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 21:08:35.385329   73256 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 21:08:35.385359   73256 cache_images.go:84] Images are preloaded, skipping loading
	I0930 21:08:35.385366   73256 kubeadm.go:934] updating node { 192.168.39.90 8443 v1.31.1 crio true true} ...
	I0930 21:08:35.385463   73256 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-256103 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-256103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 21:08:35.385536   73256 ssh_runner.go:195] Run: crio config
	I0930 21:08:35.433043   73256 cni.go:84] Creating CNI manager for ""
	I0930 21:08:35.433072   73256 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:35.433084   73256 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 21:08:35.433113   73256 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-256103 NodeName:embed-certs-256103 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 21:08:35.433277   73256 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-256103"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 21:08:35.433348   73256 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 21:08:35.443627   73256 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 21:08:35.443713   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 21:08:35.453095   73256 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0930 21:08:35.469517   73256 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 21:08:35.486869   73256 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0930 21:08:35.504871   73256 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I0930 21:08:35.508507   73256 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 21:08:35.521994   73256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:08:35.641971   73256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:08:35.657660   73256 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103 for IP: 192.168.39.90
	I0930 21:08:35.657686   73256 certs.go:194] generating shared ca certs ...
	I0930 21:08:35.657705   73256 certs.go:226] acquiring lock for ca certs: {Name:mka13ea8107121fad4b179e8ca898b92dd330bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:08:35.657878   73256 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key
	I0930 21:08:35.657941   73256 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key
	I0930 21:08:35.657954   73256 certs.go:256] generating profile certs ...
	I0930 21:08:35.658095   73256 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/client.key
	I0930 21:08:35.658177   73256 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/apiserver.key.52e83f0c
	I0930 21:08:35.658230   73256 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/proxy-client.key
	I0930 21:08:35.658391   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem (1338 bytes)
	W0930 21:08:35.658431   73256 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875_empty.pem, impossibly tiny 0 bytes
	I0930 21:08:35.658443   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 21:08:35.658476   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/ca.pem (1082 bytes)
	I0930 21:08:35.658509   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/cert.pem (1123 bytes)
	I0930 21:08:35.658539   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/certs/key.pem (1675 bytes)
	I0930 21:08:35.658586   73256 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem (1708 bytes)
	I0930 21:08:35.659279   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 21:08:35.695254   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 21:08:35.718948   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 21:08:35.742442   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 21:08:35.765859   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0930 21:08:35.792019   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 21:08:35.822081   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 21:08:35.845840   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/embed-certs-256103/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 21:08:35.871635   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/ssl/certs/148752.pem --> /usr/share/ca-certificates/148752.pem (1708 bytes)
	I0930 21:08:35.896069   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 21:08:35.921595   73256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-7672/.minikube/certs/14875.pem --> /usr/share/ca-certificates/14875.pem (1338 bytes)
	I0930 21:08:35.946620   73256 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 21:08:35.963340   73256 ssh_runner.go:195] Run: openssl version
	I0930 21:08:35.970540   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148752.pem && ln -fs /usr/share/ca-certificates/148752.pem /etc/ssl/certs/148752.pem"
	I0930 21:08:35.982269   73256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148752.pem
	I0930 21:08:35.987494   73256 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 19:55 /usr/share/ca-certificates/148752.pem
	I0930 21:08:35.987646   73256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148752.pem
	I0930 21:08:35.994312   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148752.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 21:08:36.006173   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 21:08:36.017605   73256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:36.022126   73256 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:36.022190   73256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 21:08:36.027806   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 21:08:36.038388   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14875.pem && ln -fs /usr/share/ca-certificates/14875.pem /etc/ssl/certs/14875.pem"
	I0930 21:08:36.048818   73256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14875.pem
	I0930 21:08:36.053230   73256 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 19:55 /usr/share/ca-certificates/14875.pem
	I0930 21:08:36.053296   73256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14875.pem
	I0930 21:08:36.058713   73256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14875.pem /etc/ssl/certs/51391683.0"
	I0930 21:08:36.070806   73256 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 21:08:36.075521   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 21:08:36.081310   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 21:08:36.086935   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 21:08:36.092990   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 21:08:36.098783   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 21:08:36.104354   73256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
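
Each of the openssl x509 -noout -checkend 86400 runs above asks whether the given certificate will still be valid 24 hours from now (exit status 0 means it will not expire within that window). The same check expressed in Go with crypto/x509, as a sketch; the helper name is illustrative and the path is one of the certificates from the log:

    // certValidFor reports whether the PEM-encoded certificate at path remains
    // valid for at least d, i.e. the Go analogue of "-checkend" with d seconds.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func certValidFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block found in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // Valid for d more means NotAfter lies beyond now+d.
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("still valid in 24h:", ok)
    }
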
	I0930 21:08:36.110289   73256 kubeadm.go:392] StartCluster: {Name:embed-certs-256103 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-256103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 21:08:36.110411   73256 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 21:08:36.110495   73256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:36.153770   73256 cri.go:89] found id: ""
	I0930 21:08:36.153852   73256 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 21:08:36.164301   73256 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 21:08:36.164320   73256 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 21:08:36.164363   73256 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 21:08:36.173860   73256 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 21:08:36.174950   73256 kubeconfig.go:125] found "embed-certs-256103" server: "https://192.168.39.90:8443"
	I0930 21:08:36.177584   73256 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 21:08:36.186946   73256 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.90
	I0930 21:08:36.186984   73256 kubeadm.go:1160] stopping kube-system containers ...
	I0930 21:08:36.186998   73256 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 21:08:36.187045   73256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 21:08:36.223259   73256 cri.go:89] found id: ""
	I0930 21:08:36.223328   73256 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 21:08:36.239321   73256 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:08:36.248508   73256 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:08:36.248528   73256 kubeadm.go:157] found existing configuration files:
	
	I0930 21:08:36.248571   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:08:36.257483   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:08:36.257537   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:08:36.266792   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:08:36.275626   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:08:36.275697   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:08:36.285000   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:08:36.293923   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:08:36.293977   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:08:36.303990   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:08:36.313104   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:08:36.313158   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:08:36.322423   73256 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:08:36.332005   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:36.457666   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:37.309316   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:37.533114   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:37.602999   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:37.692027   73256 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:08:37.692117   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.192813   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.692777   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.192862   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:37.469941   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:39.506753   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:35.311295   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:37.806923   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:39.808338   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:37.921497   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.422360   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:38.922305   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.422480   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.922279   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.422089   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.922021   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:41.421727   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:41.921519   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:42.422193   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:39.692193   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.192178   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:40.209649   73256 api_server.go:72] duration metric: took 2.517618424s to wait for apiserver process to appear ...
	I0930 21:08:40.209676   73256 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:08:40.209699   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:43.034828   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:08:43.034857   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:08:43.034871   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:43.080073   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 21:08:43.080107   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 21:08:43.210448   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:43.217768   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:43.217799   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:43.710066   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:43.722379   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:43.722428   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:44.209939   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:44.219468   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 21:08:44.219500   73256 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 21:08:44.709767   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:08:44.714130   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 200:
	ok
	I0930 21:08:44.720194   73256 api_server.go:141] control plane version: v1.31.1
	I0930 21:08:44.720221   73256 api_server.go:131] duration metric: took 4.510539442s to wait for apiserver health ...
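
The healthz wait above is a plain polling loop against https://192.168.39.90:8443/healthz: the early 403 (anonymous user) and 500 (post-start hooks such as rbac/bootstrap-roles still running) responses are retried until the endpoint returns 200. A minimal Go sketch of such a loop, assuming an anonymous probe that skips TLS verification (the real minikube client is built with the cluster's CA and credentials):

    // pollHealthz retries GET url until it returns HTTP 200 or timeout elapses,
    // mirroring the retry behaviour visible in the log. Sketch only: anonymous,
    // insecure client; production code should trust the cluster CA instead.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func pollHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz answered "ok"
                }
            }
            time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
        }
        return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }

    func main() {
        if err := pollHealthz("https://192.168.39.90:8443/healthz", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }
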
	I0930 21:08:44.720230   73256 cni.go:84] Creating CNI manager for ""
	I0930 21:08:44.720236   73256 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:08:44.721740   73256 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 21:08:41.968377   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:44.469477   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:41.808473   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:43.808575   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:42.922495   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:43.422250   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:43.922413   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:44.421962   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:44.921682   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:45.422144   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:45.922206   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:46.422020   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:46.921960   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:47.422296   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:44.722947   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:08:44.733426   73256 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 21:08:44.750426   73256 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:08:44.761259   73256 system_pods.go:59] 8 kube-system pods found
	I0930 21:08:44.761303   73256 system_pods.go:61] "coredns-7c65d6cfc9-h6cl2" [548e3751-edc9-4232-87c2-2e64769ba332] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:08:44.761314   73256 system_pods.go:61] "etcd-embed-certs-256103" [6eef2e96-d4bf-4dd6-bd5c-bfb05c306182] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0930 21:08:44.761326   73256 system_pods.go:61] "kube-apiserver-embed-certs-256103" [81c02a52-aca7-4b9c-b7b1-680d27f48d40] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0930 21:08:44.761335   73256 system_pods.go:61] "kube-controller-manager-embed-certs-256103" [752f0966-7718-4523-8ba6-affd41bc956e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0930 21:08:44.761346   73256 system_pods.go:61] "kube-proxy-fqvg2" [284a63a1-d624-4bf3-8509-14ff0845f3a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0930 21:08:44.761354   73256 system_pods.go:61] "kube-scheduler-embed-certs-256103" [6158a51d-82ae-490a-96d3-c0e61a3485f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0930 21:08:44.761363   73256 system_pods.go:61] "metrics-server-6867b74b74-hkp9m" [8774a772-bb72-4419-96fd-50ca5f48a5b6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:08:44.761374   73256 system_pods.go:61] "storage-provisioner" [9649e71d-cd21-4846-bf66-1c5b469500ba] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0930 21:08:44.761385   73256 system_pods.go:74] duration metric: took 10.935916ms to wait for pod list to return data ...
	I0930 21:08:44.761397   73256 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:08:44.771745   73256 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:08:44.771777   73256 node_conditions.go:123] node cpu capacity is 2
	I0930 21:08:44.771789   73256 node_conditions.go:105] duration metric: took 10.386814ms to run NodePressure ...
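
The node_conditions lines above report the node's ephemeral-storage and CPU capacity while verifying that no pressure condition is set. A rough sketch of the same check against client-go types (fetching the Node object is assumed; this is illustrative, not minikube's actual node_conditions code):

package nodecheck

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// checkNodePressure prints the node's capacity and returns an error if any
// pressure condition is True, mirroring what the node_conditions lines verify.
func checkNodePressure(node *corev1.Node) error {
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", node.Name, cpu.String(), storage.String())

	for _, c := range node.Status.Conditions {
		switch c.Type {
		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
			if c.Status == corev1.ConditionTrue {
				return fmt.Errorf("node %s reports %s", node.Name, c.Type)
			}
		}
	}
	return nil
}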
	I0930 21:08:44.771810   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 21:08:45.064019   73256 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0930 21:08:45.070479   73256 kubeadm.go:739] kubelet initialised
	I0930 21:08:45.070508   73256 kubeadm.go:740] duration metric: took 6.461143ms waiting for restarted kubelet to initialise ...
	I0930 21:08:45.070517   73256 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:08:45.074627   73256 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-h6cl2" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.080873   73256 pod_ready.go:98] node "embed-certs-256103" hosting pod "coredns-7c65d6cfc9-h6cl2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.080897   73256 pod_ready.go:82] duration metric: took 6.244301ms for pod "coredns-7c65d6cfc9-h6cl2" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:45.080906   73256 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-256103" hosting pod "coredns-7c65d6cfc9-h6cl2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.080912   73256 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.086787   73256 pod_ready.go:98] node "embed-certs-256103" hosting pod "etcd-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.086818   73256 pod_ready.go:82] duration metric: took 5.898265ms for pod "etcd-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:45.086829   73256 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-256103" hosting pod "etcd-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.086837   73256 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.092860   73256 pod_ready.go:98] node "embed-certs-256103" hosting pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.092892   73256 pod_ready.go:82] duration metric: took 6.044766ms for pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:45.092904   73256 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-256103" hosting pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.092912   73256 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.154246   73256 pod_ready.go:98] node "embed-certs-256103" hosting pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.154271   73256 pod_ready.go:82] duration metric: took 61.348653ms for pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	E0930 21:08:45.154281   73256 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-256103" hosting pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-256103" has status "Ready":"False"
	I0930 21:08:45.154287   73256 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fqvg2" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.554606   73256 pod_ready.go:93] pod "kube-proxy-fqvg2" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:45.554630   73256 pod_ready.go:82] duration metric: took 400.335084ms for pod "kube-proxy-fqvg2" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:45.554639   73256 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:47.559998   73256 pod_ready.go:103] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
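
The pod_ready lines above wait on each system-critical pod's "Ready" condition, and bail out early with "(skipping!)" when the hosting node itself is not Ready, since pods on a NotReady node cannot become Ready. A minimal sketch of those two condition checks using client-go types (only the helpers are shown; fetching the Pod and Node objects is assumed):

package readiness

import (
	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// isNodeReady reports whether the node's Ready condition is True; when it is
// not, waiting on pods scheduled there is pointless, which is why the log
// records those waits as skipped.
func isNodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

The later metrics-server lines follow the same pattern: the wait loop keeps logging "Ready":"False" until either the condition flips or the 4m0s budget runs out.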
	I0930 21:08:46.968101   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:48.968649   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:46.307946   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:48.806624   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:47.921903   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:48.422535   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:48.921484   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:49.421909   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:49.922117   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:50.421606   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:50.921728   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:51.421600   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:51.921716   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:52.421873   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:49.561176   73256 pod_ready.go:103] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:51.562227   73256 pod_ready.go:103] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:54.060692   73256 pod_ready.go:103] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:51.467375   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:53.473247   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:50.807821   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:53.307163   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:52.922106   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:53.421968   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:53.921496   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:54.421866   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:54.921995   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:55.421476   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:55.922106   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:56.421660   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:56.922489   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:57.422291   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:54.562740   73256 pod_ready.go:93] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:08:54.562765   73256 pod_ready.go:82] duration metric: took 9.008120147s for pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:54.562775   73256 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace to be "Ready" ...
	I0930 21:08:56.570517   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:59.070065   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:55.969724   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:58.467585   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:55.807669   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:58.305837   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:08:57.921737   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:58.421968   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:58.922007   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:59.422173   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:08:59.921803   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:00.421596   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:00.922123   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:01.422186   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:01.921898   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:02.421894   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:01.070940   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:03.569053   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:00.469160   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:02.968692   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:00.308195   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:02.807474   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:04.808710   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:02.922329   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:03.421922   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:03.922360   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:04.421875   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:04.922544   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:05.421939   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:05.921693   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:06.422056   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:06.921627   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:07.422125   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:06.070166   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:08.568945   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:05.467300   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:07.469409   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:09.968053   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:07.306237   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:09.306644   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:07.921687   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:08.421694   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:08.922234   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:09.421817   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:09.921704   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:10.422030   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:10.921597   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:11.421700   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:11.922301   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:12.421567   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:10.569444   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:13.069582   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:11.970180   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:14.469440   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:11.307287   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:13.307376   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:12.922171   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:13.422423   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:13.921941   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:14.422494   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:14.922454   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:15.421776   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:15.922567   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:16.421713   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:16.922449   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:17.421644   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:15.569398   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:18.069177   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:16.968663   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:19.468171   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:15.808689   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:18.307774   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:17.922098   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:18.421993   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:18.922084   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:19.421717   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:19.922095   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:19.922178   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:19.962975   73900 cri.go:89] found id: ""
	I0930 21:09:19.963002   73900 logs.go:276] 0 containers: []
	W0930 21:09:19.963014   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:19.963020   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:19.963073   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:19.999741   73900 cri.go:89] found id: ""
	I0930 21:09:19.999769   73900 logs.go:276] 0 containers: []
	W0930 21:09:19.999777   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:19.999782   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:19.999840   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:20.035818   73900 cri.go:89] found id: ""
	I0930 21:09:20.035844   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.035856   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:20.035863   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:20.035924   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:20.072005   73900 cri.go:89] found id: ""
	I0930 21:09:20.072032   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.072042   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:20.072048   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:20.072110   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:20.108229   73900 cri.go:89] found id: ""
	I0930 21:09:20.108258   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.108314   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:20.108325   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:20.108383   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:20.141331   73900 cri.go:89] found id: ""
	I0930 21:09:20.141388   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.141398   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:20.141406   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:20.141466   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:20.175133   73900 cri.go:89] found id: ""
	I0930 21:09:20.175161   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.175169   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:20.175175   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:20.175223   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:20.210529   73900 cri.go:89] found id: ""
	I0930 21:09:20.210566   73900 logs.go:276] 0 containers: []
	W0930 21:09:20.210578   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:20.210594   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:20.210608   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:20.261055   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:20.261095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:20.274212   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:20.274239   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:20.406215   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:20.406246   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:20.406282   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:20.481758   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:20.481794   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
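
When no kube-apiserver process turns up, the log above shows the fallback diagnostic cycle: list CRI containers for each control-plane component with crictl, then collect kubelet, dmesg, describe-nodes, CRI-O and container-status output. A sketch of the container-scan half of that cycle, assuming crictl is available on the host (illustrative only, not minikube's logs.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components the log cycles through when scanning for control-plane containers.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// listContainers returns the container IDs crictl reports for a name filter,
// mirroring the `sudo crictl ps -a --quiet --name=<component>` calls above.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
	// After the scan the real code gathers broad logs, e.g.
	// `journalctl -u kubelet -n 400` and `journalctl -u crio -n 400`.
}

In this run every filter returns an empty ID list and `kubectl describe nodes` fails with "connection to the server localhost:8443 was refused", so the cycle repeats until the outer timeout expires.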
	I0930 21:09:20.069672   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:22.569421   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:21.468616   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:23.468820   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:20.309317   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:22.807149   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:24.807293   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:23.019687   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:23.033394   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:23.033450   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:23.078558   73900 cri.go:89] found id: ""
	I0930 21:09:23.078592   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.078604   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:23.078611   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:23.078673   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:23.117833   73900 cri.go:89] found id: ""
	I0930 21:09:23.117860   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.117868   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:23.117875   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:23.117931   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:23.157299   73900 cri.go:89] found id: ""
	I0930 21:09:23.157337   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.157359   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:23.157367   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:23.157438   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:23.196545   73900 cri.go:89] found id: ""
	I0930 21:09:23.196570   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.196579   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:23.196586   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:23.196644   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:23.229359   73900 cri.go:89] found id: ""
	I0930 21:09:23.229390   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.229401   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:23.229409   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:23.229471   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:23.264847   73900 cri.go:89] found id: ""
	I0930 21:09:23.264881   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.264893   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:23.264900   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:23.264962   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:23.298657   73900 cri.go:89] found id: ""
	I0930 21:09:23.298687   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.298695   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:23.298701   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:23.298750   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:23.333787   73900 cri.go:89] found id: ""
	I0930 21:09:23.333816   73900 logs.go:276] 0 containers: []
	W0930 21:09:23.333826   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:23.333836   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:23.333851   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:23.386311   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:23.386347   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:23.400096   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:23.400129   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:23.481724   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:23.481748   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:23.481780   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:23.561080   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:23.561119   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:26.122460   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:26.136409   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:26.136495   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:26.170785   73900 cri.go:89] found id: ""
	I0930 21:09:26.170818   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.170832   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:26.170866   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:26.170945   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:26.205211   73900 cri.go:89] found id: ""
	I0930 21:09:26.205265   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.205275   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:26.205281   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:26.205335   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:26.239242   73900 cri.go:89] found id: ""
	I0930 21:09:26.239276   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.239285   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:26.239291   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:26.239337   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:26.272908   73900 cri.go:89] found id: ""
	I0930 21:09:26.272932   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.272940   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:26.272946   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:26.272993   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:26.311599   73900 cri.go:89] found id: ""
	I0930 21:09:26.311625   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.311632   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:26.311639   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:26.311684   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:26.345719   73900 cri.go:89] found id: ""
	I0930 21:09:26.345746   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.345754   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:26.345760   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:26.345816   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:26.383513   73900 cri.go:89] found id: ""
	I0930 21:09:26.383562   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.383572   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:26.383578   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:26.383637   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:26.418533   73900 cri.go:89] found id: ""
	I0930 21:09:26.418565   73900 logs.go:276] 0 containers: []
	W0930 21:09:26.418574   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:26.418584   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:26.418594   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:26.456635   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:26.456660   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:26.507639   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:26.507686   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:26.521069   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:26.521095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:26.594745   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:26.594768   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:26.594781   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:24.569626   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:26.570133   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:29.069071   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:25.968851   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:27.974091   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:26.808336   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:29.308328   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:29.180142   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:29.194730   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:29.194785   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:29.234054   73900 cri.go:89] found id: ""
	I0930 21:09:29.234094   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.234103   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:29.234109   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:29.234156   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:29.280869   73900 cri.go:89] found id: ""
	I0930 21:09:29.280896   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.280907   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:29.280914   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:29.280988   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:29.348376   73900 cri.go:89] found id: ""
	I0930 21:09:29.348406   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.348417   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:29.348424   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:29.348491   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:29.404218   73900 cri.go:89] found id: ""
	I0930 21:09:29.404251   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.404261   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:29.404268   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:29.404344   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:29.449029   73900 cri.go:89] found id: ""
	I0930 21:09:29.449053   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.449061   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:29.449066   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:29.449127   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:29.484917   73900 cri.go:89] found id: ""
	I0930 21:09:29.484939   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.484948   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:29.484954   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:29.485002   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:29.517150   73900 cri.go:89] found id: ""
	I0930 21:09:29.517177   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.517185   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:29.517191   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:29.517259   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:29.550410   73900 cri.go:89] found id: ""
	I0930 21:09:29.550443   73900 logs.go:276] 0 containers: []
	W0930 21:09:29.550452   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:29.550461   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:29.550472   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:29.601757   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:29.601803   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:29.616266   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:29.616299   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:29.686206   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:29.686228   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:29.686240   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:29.761765   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:29.761810   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:32.299199   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:32.315047   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:32.315125   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:32.349784   73900 cri.go:89] found id: ""
	I0930 21:09:32.349810   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.349819   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:32.349824   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:32.349871   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:32.385887   73900 cri.go:89] found id: ""
	I0930 21:09:32.385916   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.385927   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:32.385935   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:32.385994   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:32.421746   73900 cri.go:89] found id: ""
	I0930 21:09:32.421776   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.421789   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:32.421796   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:32.421856   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:32.459361   73900 cri.go:89] found id: ""
	I0930 21:09:32.459391   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.459404   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:32.459411   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:32.459470   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:32.495919   73900 cri.go:89] found id: ""
	I0930 21:09:32.495947   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.495960   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:32.495966   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:32.496025   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:32.533626   73900 cri.go:89] found id: ""
	I0930 21:09:32.533652   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.533663   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:32.533670   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:32.533729   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:32.567577   73900 cri.go:89] found id: ""
	I0930 21:09:32.567610   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.567623   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:32.567630   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:32.567687   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:32.604949   73900 cri.go:89] found id: ""
	I0930 21:09:32.604981   73900 logs.go:276] 0 containers: []
	W0930 21:09:32.604991   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:32.605001   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:32.605014   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:32.656781   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:32.656822   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:32.670116   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:32.670144   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:32.736712   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:32.736736   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:32.736751   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:31.070228   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:33.569488   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:30.469162   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:32.469874   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:34.967596   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:31.807682   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:33.807723   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:32.813502   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:32.813556   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:35.354372   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:35.369226   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:35.369303   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:35.408374   73900 cri.go:89] found id: ""
	I0930 21:09:35.408402   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.408414   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:35.408421   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:35.408481   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:35.442390   73900 cri.go:89] found id: ""
	I0930 21:09:35.442432   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.442440   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:35.442445   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:35.442524   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:35.479624   73900 cri.go:89] found id: ""
	I0930 21:09:35.479651   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.479659   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:35.479664   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:35.479711   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:35.518580   73900 cri.go:89] found id: ""
	I0930 21:09:35.518609   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.518617   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:35.518623   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:35.518675   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:35.553547   73900 cri.go:89] found id: ""
	I0930 21:09:35.553582   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.553590   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:35.553604   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:35.553669   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:35.596444   73900 cri.go:89] found id: ""
	I0930 21:09:35.596476   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.596487   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:35.596495   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:35.596583   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:35.634232   73900 cri.go:89] found id: ""
	I0930 21:09:35.634259   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.634268   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:35.634274   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:35.634322   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:35.669637   73900 cri.go:89] found id: ""
	I0930 21:09:35.669672   73900 logs.go:276] 0 containers: []
	W0930 21:09:35.669683   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:35.669694   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:35.669706   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:35.719433   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:35.719469   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:35.733383   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:35.733415   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:35.811860   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:35.811887   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:35.811913   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:35.896206   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:35.896272   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:35.569694   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:37.570548   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:36.968789   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:38.968959   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:35.814006   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:38.306676   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:38.435999   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:38.450091   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:38.450152   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:38.489127   73900 cri.go:89] found id: ""
	I0930 21:09:38.489153   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.489161   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:38.489166   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:38.489221   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:38.520760   73900 cri.go:89] found id: ""
	I0930 21:09:38.520783   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.520792   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:38.520798   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:38.520847   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:38.556279   73900 cri.go:89] found id: ""
	I0930 21:09:38.556306   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.556315   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:38.556319   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:38.556379   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:38.590804   73900 cri.go:89] found id: ""
	I0930 21:09:38.590827   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.590834   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:38.590840   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:38.590906   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:38.624765   73900 cri.go:89] found id: ""
	I0930 21:09:38.624792   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.624800   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:38.624805   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:38.624857   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:38.660587   73900 cri.go:89] found id: ""
	I0930 21:09:38.660614   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.660625   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:38.660635   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:38.660702   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:38.693314   73900 cri.go:89] found id: ""
	I0930 21:09:38.693352   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.693362   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:38.693371   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:38.693441   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:38.729163   73900 cri.go:89] found id: ""
	I0930 21:09:38.729197   73900 logs.go:276] 0 containers: []
	W0930 21:09:38.729212   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:38.729223   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:38.729235   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:38.780787   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:38.780828   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:38.794983   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:38.795009   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:38.861886   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:38.861911   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:38.861926   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:38.936958   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:38.936994   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:41.479891   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:41.493041   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:41.493106   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:41.528855   73900 cri.go:89] found id: ""
	I0930 21:09:41.528889   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.528900   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:41.528906   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:41.528967   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:41.565193   73900 cri.go:89] found id: ""
	I0930 21:09:41.565216   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.565224   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:41.565230   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:41.565289   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:41.599503   73900 cri.go:89] found id: ""
	I0930 21:09:41.599538   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.599547   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:41.599553   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:41.599611   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:41.636623   73900 cri.go:89] found id: ""
	I0930 21:09:41.636651   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.636663   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:41.636671   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:41.636728   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:41.671727   73900 cri.go:89] found id: ""
	I0930 21:09:41.671753   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.671760   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:41.671765   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:41.671819   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:41.705499   73900 cri.go:89] found id: ""
	I0930 21:09:41.705533   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.705543   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:41.705549   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:41.705602   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:41.738262   73900 cri.go:89] found id: ""
	I0930 21:09:41.738285   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.738292   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:41.738297   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:41.738351   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:41.774232   73900 cri.go:89] found id: ""
	I0930 21:09:41.774261   73900 logs.go:276] 0 containers: []
	W0930 21:09:41.774269   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:41.774277   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:41.774288   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:41.826060   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:41.826093   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:41.839308   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:41.839335   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:41.908599   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:41.908626   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:41.908640   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:41.986337   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:41.986375   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:40.069900   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:42.070035   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:41.469908   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:43.968111   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:40.307200   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:42.308356   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:44.807663   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:44.527015   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:44.539973   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:44.540036   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:44.575985   73900 cri.go:89] found id: ""
	I0930 21:09:44.576012   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.576021   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:44.576027   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:44.576076   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:44.612693   73900 cri.go:89] found id: ""
	I0930 21:09:44.612724   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.612736   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:44.612743   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:44.612809   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:44.646515   73900 cri.go:89] found id: ""
	I0930 21:09:44.646544   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.646555   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:44.646562   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:44.646623   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:44.679980   73900 cri.go:89] found id: ""
	I0930 21:09:44.680011   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.680022   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:44.680030   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:44.680089   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:44.714078   73900 cri.go:89] found id: ""
	I0930 21:09:44.714117   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.714128   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:44.714135   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:44.714193   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:44.748491   73900 cri.go:89] found id: ""
	I0930 21:09:44.748521   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.748531   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:44.748539   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:44.748618   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:44.780902   73900 cri.go:89] found id: ""
	I0930 21:09:44.780936   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.780947   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:44.780955   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:44.781013   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:44.817944   73900 cri.go:89] found id: ""
	I0930 21:09:44.817999   73900 logs.go:276] 0 containers: []
	W0930 21:09:44.818011   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:44.818022   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:44.818038   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:44.873896   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:44.873926   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:44.887829   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:44.887858   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:44.957562   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:44.957584   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:44.957598   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:45.037892   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:45.037934   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:47.583013   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:47.595799   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:47.595870   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:47.630348   73900 cri.go:89] found id: ""
	I0930 21:09:47.630377   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.630385   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:47.630391   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:47.630444   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:47.663416   73900 cri.go:89] found id: ""
	I0930 21:09:47.663440   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.663448   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:47.663454   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:47.663500   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:47.700145   73900 cri.go:89] found id: ""
	I0930 21:09:47.700174   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.700184   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:47.700192   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:47.700253   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:47.732539   73900 cri.go:89] found id: ""
	I0930 21:09:47.732567   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.732577   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:47.732583   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:47.732637   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:44.569951   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:46.570501   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:48.574018   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:45.971063   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:48.468661   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:47.307709   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:49.806843   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:47.764470   73900 cri.go:89] found id: ""
	I0930 21:09:47.764493   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.764501   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:47.764507   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:47.764553   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:47.802365   73900 cri.go:89] found id: ""
	I0930 21:09:47.802393   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.802403   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:47.802411   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:47.802468   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:47.836504   73900 cri.go:89] found id: ""
	I0930 21:09:47.836531   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.836542   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:47.836549   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:47.836611   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:47.870315   73900 cri.go:89] found id: ""
	I0930 21:09:47.870338   73900 logs.go:276] 0 containers: []
	W0930 21:09:47.870351   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:47.870359   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:47.870370   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:47.919974   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:47.920011   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:47.934157   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:47.934190   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:48.003046   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:48.003072   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:48.003085   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:48.084947   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:48.084985   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:50.624791   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:50.638118   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:50.638196   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:50.672448   73900 cri.go:89] found id: ""
	I0930 21:09:50.672479   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.672488   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:50.672503   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:50.672557   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:50.706057   73900 cri.go:89] found id: ""
	I0930 21:09:50.706080   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.706088   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:50.706093   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:50.706142   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:50.738101   73900 cri.go:89] found id: ""
	I0930 21:09:50.738126   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.738134   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:50.738140   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:50.738207   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:50.772483   73900 cri.go:89] found id: ""
	I0930 21:09:50.772508   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.772516   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:50.772522   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:50.772581   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:50.805169   73900 cri.go:89] found id: ""
	I0930 21:09:50.805200   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.805211   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:50.805220   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:50.805276   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:50.842144   73900 cri.go:89] found id: ""
	I0930 21:09:50.842168   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.842176   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:50.842182   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:50.842236   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:50.875512   73900 cri.go:89] found id: ""
	I0930 21:09:50.875563   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.875575   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:50.875582   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:50.875643   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:50.909549   73900 cri.go:89] found id: ""
	I0930 21:09:50.909580   73900 logs.go:276] 0 containers: []
	W0930 21:09:50.909591   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:50.909599   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:50.909610   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:50.962064   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:50.962098   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:50.976979   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:50.977012   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:51.053784   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:51.053815   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:51.053833   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:51.130939   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:51.130975   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:51.069919   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:53.568708   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:50.468737   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:52.968935   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:52.306733   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:54.306875   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:53.667675   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:53.680381   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:53.680449   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:53.712759   73900 cri.go:89] found id: ""
	I0930 21:09:53.712791   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.712800   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:53.712807   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:53.712871   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:53.748958   73900 cri.go:89] found id: ""
	I0930 21:09:53.748990   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.749002   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:53.749009   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:53.749078   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:53.783243   73900 cri.go:89] found id: ""
	I0930 21:09:53.783272   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.783282   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:53.783289   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:53.783382   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:53.823848   73900 cri.go:89] found id: ""
	I0930 21:09:53.823875   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.823883   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:53.823890   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:53.823941   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:53.865607   73900 cri.go:89] found id: ""
	I0930 21:09:53.865635   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.865643   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:53.865648   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:53.865693   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:53.900888   73900 cri.go:89] found id: ""
	I0930 21:09:53.900912   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.900920   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:53.900926   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:53.900985   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:53.933688   73900 cri.go:89] found id: ""
	I0930 21:09:53.933717   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.933728   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:53.933736   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:53.933798   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:53.968702   73900 cri.go:89] found id: ""
	I0930 21:09:53.968731   73900 logs.go:276] 0 containers: []
	W0930 21:09:53.968740   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:53.968749   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:53.968760   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:54.021588   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:54.021626   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:54.036681   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:54.036719   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:54.112189   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:54.112209   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:54.112223   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:54.185028   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:54.185085   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:56.725146   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:56.739358   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:56.739421   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:56.779278   73900 cri.go:89] found id: ""
	I0930 21:09:56.779313   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.779322   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:56.779329   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:56.779377   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:56.815972   73900 cri.go:89] found id: ""
	I0930 21:09:56.816000   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.816011   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:56.816018   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:56.816084   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:56.849425   73900 cri.go:89] found id: ""
	I0930 21:09:56.849458   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.849471   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:56.849478   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:56.849542   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:56.885483   73900 cri.go:89] found id: ""
	I0930 21:09:56.885510   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.885520   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:56.885527   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:56.885586   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:56.917832   73900 cri.go:89] found id: ""
	I0930 21:09:56.917862   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.917872   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:56.917879   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:56.917932   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:09:56.951613   73900 cri.go:89] found id: ""
	I0930 21:09:56.951643   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.951654   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:09:56.951664   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:09:56.951726   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:09:56.987577   73900 cri.go:89] found id: ""
	I0930 21:09:56.987608   73900 logs.go:276] 0 containers: []
	W0930 21:09:56.987620   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:09:56.987628   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:09:56.987691   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:09:57.024871   73900 cri.go:89] found id: ""
	I0930 21:09:57.024903   73900 logs.go:276] 0 containers: []
	W0930 21:09:57.024912   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:09:57.024920   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:09:57.024935   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:09:57.038279   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:09:57.038309   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:09:57.111955   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:09:57.111985   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:09:57.111998   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:09:57.193719   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:09:57.193755   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:09:57.230058   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:09:57.230085   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:09:55.568928   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:58.069462   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:55.467583   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:57.968380   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:59.969131   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:56.807753   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:58.808055   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:09:59.780762   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:09:59.794210   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:09:59.794277   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:09:59.828258   73900 cri.go:89] found id: ""
	I0930 21:09:59.828287   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.828298   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:09:59.828306   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:09:59.828369   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:09:59.868295   73900 cri.go:89] found id: ""
	I0930 21:09:59.868331   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.868353   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:09:59.868363   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:09:59.868437   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:09:59.900298   73900 cri.go:89] found id: ""
	I0930 21:09:59.900326   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.900337   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:09:59.900343   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:09:59.900403   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:09:59.934081   73900 cri.go:89] found id: ""
	I0930 21:09:59.934108   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.934120   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:09:59.934127   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:09:59.934183   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:09:59.970564   73900 cri.go:89] found id: ""
	I0930 21:09:59.970592   73900 logs.go:276] 0 containers: []
	W0930 21:09:59.970600   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:09:59.970605   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:09:59.970652   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:00.006215   73900 cri.go:89] found id: ""
	I0930 21:10:00.006249   73900 logs.go:276] 0 containers: []
	W0930 21:10:00.006259   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:00.006270   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:00.006348   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:00.040106   73900 cri.go:89] found id: ""
	I0930 21:10:00.040135   73900 logs.go:276] 0 containers: []
	W0930 21:10:00.040144   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:00.040150   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:00.040202   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:00.079310   73900 cri.go:89] found id: ""
	I0930 21:10:00.079345   73900 logs.go:276] 0 containers: []
	W0930 21:10:00.079354   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:00.079365   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:00.079378   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:00.161243   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:00.161284   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:00.198911   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:00.198941   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:00.247697   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:00.247735   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:00.260905   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:00.260933   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:00.332502   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:00.569218   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:02.569371   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:02.468439   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:04.968585   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:00.808753   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:03.306574   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:02.833204   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:02.846807   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:02.846893   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:02.882386   73900 cri.go:89] found id: ""
	I0930 21:10:02.882420   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.882431   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:02.882439   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:02.882504   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:02.918589   73900 cri.go:89] found id: ""
	I0930 21:10:02.918617   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.918633   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:02.918642   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:02.918722   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:02.952758   73900 cri.go:89] found id: ""
	I0930 21:10:02.952789   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.952799   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:02.952806   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:02.952871   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:02.991406   73900 cri.go:89] found id: ""
	I0930 21:10:02.991439   73900 logs.go:276] 0 containers: []
	W0930 21:10:02.991448   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:02.991454   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:02.991511   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:03.030075   73900 cri.go:89] found id: ""
	I0930 21:10:03.030104   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.030112   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:03.030121   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:03.030172   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:03.063630   73900 cri.go:89] found id: ""
	I0930 21:10:03.063654   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.063662   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:03.063668   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:03.063718   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:03.098607   73900 cri.go:89] found id: ""
	I0930 21:10:03.098636   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.098644   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:03.098649   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:03.098702   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:03.133161   73900 cri.go:89] found id: ""
	I0930 21:10:03.133189   73900 logs.go:276] 0 containers: []
	W0930 21:10:03.133198   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:03.133206   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:03.133217   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:03.211046   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:03.211083   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:03.252585   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:03.252615   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:03.307019   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:03.307049   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:03.320781   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:03.320811   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:03.408645   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:05.909638   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:05.922674   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:05.922744   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:05.955264   73900 cri.go:89] found id: ""
	I0930 21:10:05.955305   73900 logs.go:276] 0 containers: []
	W0930 21:10:05.955318   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:05.955326   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:05.955378   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:05.991055   73900 cri.go:89] found id: ""
	I0930 21:10:05.991100   73900 logs.go:276] 0 containers: []
	W0930 21:10:05.991122   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:05.991130   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:05.991194   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:06.025725   73900 cri.go:89] found id: ""
	I0930 21:10:06.025755   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.025766   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:06.025773   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:06.025832   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:06.067700   73900 cri.go:89] found id: ""
	I0930 21:10:06.067726   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.067736   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:06.067743   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:06.067801   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:06.102729   73900 cri.go:89] found id: ""
	I0930 21:10:06.102760   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.102771   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:06.102784   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:06.102845   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:06.137120   73900 cri.go:89] found id: ""
	I0930 21:10:06.137148   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.137159   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:06.137164   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:06.137215   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:06.169985   73900 cri.go:89] found id: ""
	I0930 21:10:06.170014   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.170023   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:06.170029   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:06.170082   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:06.206928   73900 cri.go:89] found id: ""
	I0930 21:10:06.206951   73900 logs.go:276] 0 containers: []
	W0930 21:10:06.206959   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:06.206967   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:06.206977   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:06.258835   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:06.258870   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:06.273527   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:06.273556   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:06.351335   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:06.351359   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:06.351373   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:06.423412   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:06.423450   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:04.569756   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:07.069437   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:09.074024   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:06.969500   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:09.471298   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:05.807932   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:08.306749   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:08.968986   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:08.984075   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:08.984139   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:09.016815   73900 cri.go:89] found id: ""
	I0930 21:10:09.016847   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.016858   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:09.016864   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:09.016928   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:09.051603   73900 cri.go:89] found id: ""
	I0930 21:10:09.051626   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.051633   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:09.051639   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:09.051693   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:09.088820   73900 cri.go:89] found id: ""
	I0930 21:10:09.088856   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.088870   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:09.088884   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:09.088949   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:09.124032   73900 cri.go:89] found id: ""
	I0930 21:10:09.124064   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.124076   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:09.124083   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:09.124140   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:09.177129   73900 cri.go:89] found id: ""
	I0930 21:10:09.177161   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.177172   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:09.177178   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:09.177228   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:09.211490   73900 cri.go:89] found id: ""
	I0930 21:10:09.211513   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.211521   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:09.211540   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:09.211605   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:09.252187   73900 cri.go:89] found id: ""
	I0930 21:10:09.252211   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.252221   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:09.252229   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:09.252289   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:09.286970   73900 cri.go:89] found id: ""
	I0930 21:10:09.287004   73900 logs.go:276] 0 containers: []
	W0930 21:10:09.287012   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:09.287020   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:09.287031   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:09.369387   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:09.369410   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:09.369422   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:09.450685   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:09.450733   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:09.491302   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:09.491331   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:09.540183   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:09.540219   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:12.054793   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:12.068635   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:12.068717   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:12.103118   73900 cri.go:89] found id: ""
	I0930 21:10:12.103140   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.103149   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:12.103154   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:12.103219   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:12.137992   73900 cri.go:89] found id: ""
	I0930 21:10:12.138020   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.138031   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:12.138040   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:12.138103   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:12.175559   73900 cri.go:89] found id: ""
	I0930 21:10:12.175591   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.175609   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:12.175616   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:12.175678   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:12.209630   73900 cri.go:89] found id: ""
	I0930 21:10:12.209655   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.209666   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:12.209672   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:12.209735   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:12.245844   73900 cri.go:89] found id: ""
	I0930 21:10:12.245879   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.245891   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:12.245901   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:12.245961   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:12.280385   73900 cri.go:89] found id: ""
	I0930 21:10:12.280412   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.280420   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:12.280426   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:12.280484   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:12.315424   73900 cri.go:89] found id: ""
	I0930 21:10:12.315453   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.315463   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:12.315473   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:12.315566   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:12.349223   73900 cri.go:89] found id: ""
	I0930 21:10:12.349251   73900 logs.go:276] 0 containers: []
	W0930 21:10:12.349270   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:12.349279   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:12.349291   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:12.362360   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:12.362397   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:12.432060   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:12.432084   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:12.432101   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:12.506059   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:12.506096   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:12.541319   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:12.541348   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:11.568740   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:13.569690   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:11.968234   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:13.968634   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:10.306903   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:12.307072   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:14.807562   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:15.098852   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:15.111919   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:15.112001   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:15.149174   73900 cri.go:89] found id: ""
	I0930 21:10:15.149206   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.149216   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:15.149223   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:15.149286   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:15.187283   73900 cri.go:89] found id: ""
	I0930 21:10:15.187316   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.187326   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:15.187333   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:15.187392   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:15.223896   73900 cri.go:89] found id: ""
	I0930 21:10:15.223922   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.223933   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:15.223940   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:15.224000   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:15.260530   73900 cri.go:89] found id: ""
	I0930 21:10:15.260559   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.260567   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:15.260573   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:15.260634   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:15.296319   73900 cri.go:89] found id: ""
	I0930 21:10:15.296346   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.296357   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:15.296363   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:15.296425   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:15.333785   73900 cri.go:89] found id: ""
	I0930 21:10:15.333830   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.333843   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:15.333856   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:15.333932   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:15.368235   73900 cri.go:89] found id: ""
	I0930 21:10:15.368268   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.368280   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:15.368288   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:15.368354   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:15.408155   73900 cri.go:89] found id: ""
	I0930 21:10:15.408184   73900 logs.go:276] 0 containers: []
	W0930 21:10:15.408192   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:15.408200   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:15.408210   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:15.462018   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:15.462058   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:15.477345   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:15.477376   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:15.558398   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:15.558423   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:15.558442   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:15.662269   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:15.662311   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:15.569988   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:18.069056   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:16.467859   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:18.468764   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:17.307469   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:19.809316   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:18.199477   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:18.213235   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:18.213320   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:18.250379   73900 cri.go:89] found id: ""
	I0930 21:10:18.250409   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.250418   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:18.250424   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:18.250515   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:18.283381   73900 cri.go:89] found id: ""
	I0930 21:10:18.283407   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.283416   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:18.283422   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:18.283482   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:18.321601   73900 cri.go:89] found id: ""
	I0930 21:10:18.321635   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.321646   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:18.321659   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:18.321720   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:18.354210   73900 cri.go:89] found id: ""
	I0930 21:10:18.354242   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.354254   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:18.354262   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:18.354330   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:18.391982   73900 cri.go:89] found id: ""
	I0930 21:10:18.392019   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.392029   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:18.392035   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:18.392150   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:18.428826   73900 cri.go:89] found id: ""
	I0930 21:10:18.428851   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.428862   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:18.428870   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:18.428927   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:18.465841   73900 cri.go:89] found id: ""
	I0930 21:10:18.465868   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.465878   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:18.465887   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:18.465934   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:18.502747   73900 cri.go:89] found id: ""
	I0930 21:10:18.502775   73900 logs.go:276] 0 containers: []
	W0930 21:10:18.502783   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:18.502793   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:18.502807   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:18.558025   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:18.558064   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:18.572356   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:18.572383   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:18.642994   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:18.643020   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:18.643033   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:18.722804   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:18.722845   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:21.262790   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:21.276427   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:21.276510   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:21.323245   73900 cri.go:89] found id: ""
	I0930 21:10:21.323274   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.323284   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:21.323291   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:21.323377   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:21.381684   73900 cri.go:89] found id: ""
	I0930 21:10:21.381725   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.381736   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:21.381744   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:21.381813   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:21.428818   73900 cri.go:89] found id: ""
	I0930 21:10:21.428841   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.428849   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:21.428854   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:21.428901   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:21.462906   73900 cri.go:89] found id: ""
	I0930 21:10:21.462935   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.462944   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:21.462949   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:21.462995   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:21.502417   73900 cri.go:89] found id: ""
	I0930 21:10:21.502452   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.502464   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:21.502471   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:21.502535   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:21.540004   73900 cri.go:89] found id: ""
	I0930 21:10:21.540037   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.540048   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:21.540056   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:21.540105   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:21.574898   73900 cri.go:89] found id: ""
	I0930 21:10:21.574929   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.574937   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:21.574942   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:21.574999   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:21.609438   73900 cri.go:89] found id: ""
	I0930 21:10:21.609465   73900 logs.go:276] 0 containers: []
	W0930 21:10:21.609473   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:21.609496   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:21.609524   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:21.646651   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:21.646679   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:21.702406   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:21.702451   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:21.716226   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:21.716260   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:21.790089   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:21.790115   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:21.790128   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:20.070823   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:22.568856   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:20.968069   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:22.968208   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:22.307376   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:24.808780   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:24.368291   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:24.381517   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:24.381588   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:24.416535   73900 cri.go:89] found id: ""
	I0930 21:10:24.416559   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.416570   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:24.416577   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:24.416635   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:24.454444   73900 cri.go:89] found id: ""
	I0930 21:10:24.454472   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.454480   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:24.454485   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:24.454537   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:24.492334   73900 cri.go:89] found id: ""
	I0930 21:10:24.492359   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.492367   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:24.492373   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:24.492419   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:24.527590   73900 cri.go:89] found id: ""
	I0930 21:10:24.527622   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.527633   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:24.527642   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:24.527708   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:24.564819   73900 cri.go:89] found id: ""
	I0930 21:10:24.564844   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.564853   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:24.564858   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:24.564915   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:24.599367   73900 cri.go:89] found id: ""
	I0930 21:10:24.599390   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.599398   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:24.599403   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:24.599450   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:24.636738   73900 cri.go:89] found id: ""
	I0930 21:10:24.636767   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.636778   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:24.636785   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:24.636845   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:24.669607   73900 cri.go:89] found id: ""
	I0930 21:10:24.669640   73900 logs.go:276] 0 containers: []
	W0930 21:10:24.669651   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:24.669663   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:24.669680   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:24.722662   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:24.722696   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:24.736150   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:24.736179   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:24.812022   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:24.812053   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:24.812069   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:24.891291   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:24.891330   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:27.430595   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:27.443990   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:27.444054   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:27.480204   73900 cri.go:89] found id: ""
	I0930 21:10:27.480230   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.480237   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:27.480243   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:27.480297   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:27.516959   73900 cri.go:89] found id: ""
	I0930 21:10:27.516982   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.516989   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:27.516995   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:27.517041   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:27.549717   73900 cri.go:89] found id: ""
	I0930 21:10:27.549745   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.549758   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:27.549769   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:27.549821   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:27.584512   73900 cri.go:89] found id: ""
	I0930 21:10:27.584539   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.584549   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:27.584560   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:27.584619   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:27.623551   73900 cri.go:89] found id: ""
	I0930 21:10:27.623586   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.623603   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:27.623612   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:27.623679   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:27.662453   73900 cri.go:89] found id: ""
	I0930 21:10:27.662478   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.662486   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:27.662493   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:27.662554   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:27.695665   73900 cri.go:89] found id: ""
	I0930 21:10:27.695693   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.695701   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:27.695707   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:27.695765   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:27.729090   73900 cri.go:89] found id: ""
	I0930 21:10:27.729129   73900 logs.go:276] 0 containers: []
	W0930 21:10:27.729137   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:27.729146   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:27.729155   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:24.570129   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:26.572751   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:29.069340   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:25.468598   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:27.469443   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:29.970417   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:27.307766   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:29.806538   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:27.816186   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:27.816230   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:27.854451   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:27.854485   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:27.905674   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:27.905709   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:27.918889   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:27.918917   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:27.989739   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:30.490514   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:30.502735   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:30.502810   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:30.535874   73900 cri.go:89] found id: ""
	I0930 21:10:30.535902   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.535914   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:30.535922   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:30.535989   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:30.570603   73900 cri.go:89] found id: ""
	I0930 21:10:30.570627   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.570634   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:30.570643   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:30.570689   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:30.605225   73900 cri.go:89] found id: ""
	I0930 21:10:30.605255   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.605266   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:30.605273   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:30.605333   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:30.640810   73900 cri.go:89] found id: ""
	I0930 21:10:30.640839   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.640849   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:30.640857   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:30.640914   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:30.673101   73900 cri.go:89] found id: ""
	I0930 21:10:30.673129   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.673137   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:30.673142   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:30.673189   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:30.704332   73900 cri.go:89] found id: ""
	I0930 21:10:30.704356   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.704366   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:30.704373   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:30.704440   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:30.738463   73900 cri.go:89] found id: ""
	I0930 21:10:30.738494   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.738506   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:30.738516   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:30.738579   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:30.772115   73900 cri.go:89] found id: ""
	I0930 21:10:30.772153   73900 logs.go:276] 0 containers: []
	W0930 21:10:30.772164   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:30.772175   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:30.772193   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:30.850683   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:30.850707   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:30.850720   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:30.930674   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:30.930718   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:30.975781   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:30.975819   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:31.030566   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:31.030613   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:31.070216   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:33.568935   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:32.468224   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:34.968557   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:31.807408   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:33.807669   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:33.544354   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:33.557613   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:33.557692   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:33.594372   73900 cri.go:89] found id: ""
	I0930 21:10:33.594394   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.594401   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:33.594406   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:33.594455   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:33.632026   73900 cri.go:89] found id: ""
	I0930 21:10:33.632048   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.632056   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:33.632061   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:33.632113   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:33.666168   73900 cri.go:89] found id: ""
	I0930 21:10:33.666201   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.666213   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:33.666219   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:33.666269   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:33.697772   73900 cri.go:89] found id: ""
	I0930 21:10:33.697801   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.697810   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:33.697816   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:33.697864   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:33.732821   73900 cri.go:89] found id: ""
	I0930 21:10:33.732851   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.732862   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:33.732869   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:33.732952   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:33.770646   73900 cri.go:89] found id: ""
	I0930 21:10:33.770682   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.770693   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:33.770701   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:33.770756   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:33.804803   73900 cri.go:89] found id: ""
	I0930 21:10:33.804831   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.804842   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:33.804848   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:33.804921   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:33.838455   73900 cri.go:89] found id: ""
	I0930 21:10:33.838484   73900 logs.go:276] 0 containers: []
	W0930 21:10:33.838495   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:33.838505   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:33.838523   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:33.879785   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:33.879812   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:33.934586   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:33.934623   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:33.948250   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:33.948293   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:34.023021   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:34.023054   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:34.023069   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:36.604173   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:36.616668   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:36.616735   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:36.650716   73900 cri.go:89] found id: ""
	I0930 21:10:36.650748   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.650757   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:36.650767   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:36.650833   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:36.685705   73900 cri.go:89] found id: ""
	I0930 21:10:36.685739   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.685751   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:36.685758   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:36.685819   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:36.719895   73900 cri.go:89] found id: ""
	I0930 21:10:36.719922   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.719932   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:36.719939   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:36.720006   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:36.753123   73900 cri.go:89] found id: ""
	I0930 21:10:36.753148   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.753159   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:36.753166   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:36.753231   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:36.790023   73900 cri.go:89] found id: ""
	I0930 21:10:36.790054   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.790066   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:36.790073   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:36.790135   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:36.825280   73900 cri.go:89] found id: ""
	I0930 21:10:36.825314   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.825324   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:36.825343   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:36.825411   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:36.859028   73900 cri.go:89] found id: ""
	I0930 21:10:36.859053   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.859060   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:36.859066   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:36.859125   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:36.894952   73900 cri.go:89] found id: ""
	I0930 21:10:36.894980   73900 logs.go:276] 0 containers: []
	W0930 21:10:36.894988   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:36.894996   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:36.895010   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:36.968214   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:36.968241   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:36.968256   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:37.047866   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:37.047903   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:37.088671   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:37.088705   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:37.144014   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:37.144058   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:36.068920   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:38.069544   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:36.969475   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:39.469207   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:35.808654   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:38.306701   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:39.657874   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:39.671042   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:39.671100   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:39.706210   73900 cri.go:89] found id: ""
	I0930 21:10:39.706235   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.706243   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:39.706248   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:39.706295   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:39.743194   73900 cri.go:89] found id: ""
	I0930 21:10:39.743218   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.743226   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:39.743232   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:39.743280   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:39.780681   73900 cri.go:89] found id: ""
	I0930 21:10:39.780707   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.780715   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:39.780720   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:39.780774   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:39.815841   73900 cri.go:89] found id: ""
	I0930 21:10:39.815865   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.815874   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:39.815879   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:39.815933   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:39.849497   73900 cri.go:89] found id: ""
	I0930 21:10:39.849523   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.849534   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:39.849541   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:39.849603   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:39.883476   73900 cri.go:89] found id: ""
	I0930 21:10:39.883507   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.883519   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:39.883562   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:39.883633   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:39.918300   73900 cri.go:89] found id: ""
	I0930 21:10:39.918329   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.918338   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:39.918343   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:39.918392   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:39.955751   73900 cri.go:89] found id: ""
	I0930 21:10:39.955780   73900 logs.go:276] 0 containers: []
	W0930 21:10:39.955788   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:39.955795   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:39.955807   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:40.010994   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:40.011035   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:40.025992   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:40.026022   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:40.097709   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:40.097731   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:40.097748   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:40.176790   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:40.176824   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
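	The cycle above is the harness probing CRI-O for each control-plane container and finding none. A minimal manual reproduction of that probe, assuming shell access to the node (illustrative sketch, not part of the test output; the crictl flags are exactly those shown in the log):

	    # Check every component the log probes; an empty result for each one
	    # matches the "0 containers" lines above.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	      printf '%s: ' "$c"
	      sudo crictl ps -a --quiet --name="$c" | wc -l
	    done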
	I0930 21:10:42.713838   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:42.729806   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:42.729885   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:40.070503   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:42.568444   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:41.968357   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:44.469223   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:40.308072   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:42.807489   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:42.765449   73900 cri.go:89] found id: ""
	I0930 21:10:42.765483   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.765491   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:42.765498   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:42.765555   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:42.802556   73900 cri.go:89] found id: ""
	I0930 21:10:42.802584   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.802604   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:42.802612   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:42.802693   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:42.836537   73900 cri.go:89] found id: ""
	I0930 21:10:42.836568   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.836585   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:42.836598   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:42.836662   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:42.870475   73900 cri.go:89] found id: ""
	I0930 21:10:42.870503   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.870511   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:42.870526   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:42.870589   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:42.907061   73900 cri.go:89] found id: ""
	I0930 21:10:42.907090   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.907098   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:42.907103   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:42.907153   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:42.941607   73900 cri.go:89] found id: ""
	I0930 21:10:42.941632   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.941640   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:42.941646   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:42.941701   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:42.977073   73900 cri.go:89] found id: ""
	I0930 21:10:42.977097   73900 logs.go:276] 0 containers: []
	W0930 21:10:42.977105   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:42.977111   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:42.977159   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:43.010838   73900 cri.go:89] found id: ""
	I0930 21:10:43.010859   73900 logs.go:276] 0 containers: []
	W0930 21:10:43.010867   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:43.010875   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:43.010886   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:43.061264   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:43.061299   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:43.075917   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:43.075950   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:43.137088   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:43.137111   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:43.137126   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:43.219393   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:43.219440   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:45.761752   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:45.775864   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:45.775942   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:45.810693   73900 cri.go:89] found id: ""
	I0930 21:10:45.810724   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.810734   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:45.810740   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:45.810797   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:45.848360   73900 cri.go:89] found id: ""
	I0930 21:10:45.848399   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.848410   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:45.848418   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:45.848475   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:45.885504   73900 cri.go:89] found id: ""
	I0930 21:10:45.885550   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.885560   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:45.885565   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:45.885616   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:45.919747   73900 cri.go:89] found id: ""
	I0930 21:10:45.919776   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.919784   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:45.919789   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:45.919843   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:45.953787   73900 cri.go:89] found id: ""
	I0930 21:10:45.953820   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.953831   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:45.953839   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:45.953893   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:45.990145   73900 cri.go:89] found id: ""
	I0930 21:10:45.990174   73900 logs.go:276] 0 containers: []
	W0930 21:10:45.990184   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:45.990192   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:45.990253   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:46.023359   73900 cri.go:89] found id: ""
	I0930 21:10:46.023383   73900 logs.go:276] 0 containers: []
	W0930 21:10:46.023391   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:46.023396   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:46.023447   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:46.057460   73900 cri.go:89] found id: ""
	I0930 21:10:46.057493   73900 logs.go:276] 0 containers: []
	W0930 21:10:46.057504   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:46.057514   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:46.057533   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:46.097082   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:46.097109   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:46.147921   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:46.147960   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:46.161204   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:46.161232   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:46.224308   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:46.224336   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:46.224351   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:44.568918   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:46.569353   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:48.569656   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:46.967674   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:48.967998   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:45.306917   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:47.806333   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:49.807846   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:48.805668   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:48.818569   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:48.818663   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:48.856783   73900 cri.go:89] found id: ""
	I0930 21:10:48.856815   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.856827   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:48.856834   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:48.856896   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:48.889185   73900 cri.go:89] found id: ""
	I0930 21:10:48.889217   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.889229   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:48.889236   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:48.889306   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:48.922013   73900 cri.go:89] found id: ""
	I0930 21:10:48.922041   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.922050   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:48.922055   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:48.922107   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:48.956818   73900 cri.go:89] found id: ""
	I0930 21:10:48.956848   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.956858   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:48.956866   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:48.956929   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:48.994942   73900 cri.go:89] found id: ""
	I0930 21:10:48.994975   73900 logs.go:276] 0 containers: []
	W0930 21:10:48.994985   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:48.994991   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:48.995052   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:49.031448   73900 cri.go:89] found id: ""
	I0930 21:10:49.031479   73900 logs.go:276] 0 containers: []
	W0930 21:10:49.031491   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:49.031500   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:49.031583   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:49.066570   73900 cri.go:89] found id: ""
	I0930 21:10:49.066600   73900 logs.go:276] 0 containers: []
	W0930 21:10:49.066608   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:49.066613   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:49.066658   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:49.100952   73900 cri.go:89] found id: ""
	I0930 21:10:49.100981   73900 logs.go:276] 0 containers: []
	W0930 21:10:49.100992   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:49.101000   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:49.101010   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:49.176423   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:49.176458   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:49.212358   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:49.212387   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:49.263177   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:49.263227   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:49.275940   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:49.275969   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:49.346915   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
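	Each "describe nodes" attempt in these cycles fails with a refused connection to localhost:8443, which is consistent with the empty kube-apiserver listings: nothing is serving the API on that node. A quick confirmation from the node, assuming shell access (illustrative sketch reusing the kubectl binary and kubeconfig paths shown in the log; the /healthz probe is an assumption, any request to port 8443 would fail the same way):

	    # Both should fail while no kube-apiserver container exists.
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes
	    curl -sk https://localhost:8443/healthz || echo "apiserver not listening on 8443"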
	I0930 21:10:51.847761   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:51.860571   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:51.860646   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:51.894863   73900 cri.go:89] found id: ""
	I0930 21:10:51.894896   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.894906   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:51.894914   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:51.894978   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:51.927977   73900 cri.go:89] found id: ""
	I0930 21:10:51.928007   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.928018   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:51.928025   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:51.928083   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:51.962894   73900 cri.go:89] found id: ""
	I0930 21:10:51.962924   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.962933   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:51.962940   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:51.962999   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:51.998453   73900 cri.go:89] found id: ""
	I0930 21:10:51.998482   73900 logs.go:276] 0 containers: []
	W0930 21:10:51.998493   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:51.998500   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:51.998562   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:52.033039   73900 cri.go:89] found id: ""
	I0930 21:10:52.033066   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.033075   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:52.033080   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:52.033139   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:52.067222   73900 cri.go:89] found id: ""
	I0930 21:10:52.067254   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.067267   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:52.067274   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:52.067341   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:52.102414   73900 cri.go:89] found id: ""
	I0930 21:10:52.102439   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.102448   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:52.102453   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:52.102498   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:52.135175   73900 cri.go:89] found id: ""
	I0930 21:10:52.135204   73900 logs.go:276] 0 containers: []
	W0930 21:10:52.135214   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:52.135225   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:52.135239   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:52.185736   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:52.185779   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:52.198756   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:52.198792   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:52.264816   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:52.264847   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:52.264859   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:52.347189   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:52.347229   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:50.569765   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:53.068745   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:50.968885   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:52.970855   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:52.307245   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:54.308516   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
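	Interleaved with the 73900 cycle, the other test processes (73256, 73375, 73707) keep polling metrics-server pods in kube-system that never report Ready. The condition they wait on can be inspected directly, assuming kubectl access to those clusters and the conventional k8s-app=metrics-server label (an assumption; the label selector is not shown in the log):

	    # Prints each metrics-server pod with its Ready condition; the pod_ready
	    # lines above keep observing "False" here.
	    kubectl -n kube-system get pods -l k8s-app=metrics-server \
	      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'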
	I0930 21:10:54.887502   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:54.900067   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:54.900153   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:54.939214   73900 cri.go:89] found id: ""
	I0930 21:10:54.939241   73900 logs.go:276] 0 containers: []
	W0930 21:10:54.939249   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:54.939259   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:54.939313   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:54.973451   73900 cri.go:89] found id: ""
	I0930 21:10:54.973475   73900 logs.go:276] 0 containers: []
	W0930 21:10:54.973483   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:54.973488   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:54.973541   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:55.007815   73900 cri.go:89] found id: ""
	I0930 21:10:55.007841   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.007850   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:55.007855   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:55.007914   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:55.040861   73900 cri.go:89] found id: ""
	I0930 21:10:55.040891   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.040899   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:55.040905   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:55.040957   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:55.076053   73900 cri.go:89] found id: ""
	I0930 21:10:55.076086   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.076098   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:55.076111   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:55.076172   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:55.108768   73900 cri.go:89] found id: ""
	I0930 21:10:55.108797   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.108807   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:55.108814   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:55.108879   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:55.155283   73900 cri.go:89] found id: ""
	I0930 21:10:55.155316   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.155331   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:55.155338   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:55.155398   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:55.189370   73900 cri.go:89] found id: ""
	I0930 21:10:55.189399   73900 logs.go:276] 0 containers: []
	W0930 21:10:55.189408   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:55.189416   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:55.189432   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:55.243067   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:55.243101   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:10:55.257021   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:55.257051   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:55.329381   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:55.329408   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:55.329423   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:55.405691   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:55.405762   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:55.069901   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:57.568914   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:55.468489   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:57.977733   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:56.806381   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:58.806880   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:10:57.957380   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:10:57.971160   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:10:57.971245   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:10:58.004401   73900 cri.go:89] found id: ""
	I0930 21:10:58.004446   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.004457   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:10:58.004465   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:10:58.004524   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:10:58.038954   73900 cri.go:89] found id: ""
	I0930 21:10:58.038978   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.038986   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:10:58.038991   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:10:58.039036   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:10:58.072801   73900 cri.go:89] found id: ""
	I0930 21:10:58.072830   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.072842   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:10:58.072849   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:10:58.072909   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:10:58.104908   73900 cri.go:89] found id: ""
	I0930 21:10:58.104936   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.104946   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:10:58.104953   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:10:58.105014   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:10:58.139693   73900 cri.go:89] found id: ""
	I0930 21:10:58.139725   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.139735   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:10:58.139741   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:10:58.139795   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:10:58.174149   73900 cri.go:89] found id: ""
	I0930 21:10:58.174180   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.174192   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:10:58.174199   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:10:58.174275   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:10:58.206067   73900 cri.go:89] found id: ""
	I0930 21:10:58.206094   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.206105   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:10:58.206112   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:10:58.206167   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:10:58.240613   73900 cri.go:89] found id: ""
	I0930 21:10:58.240645   73900 logs.go:276] 0 containers: []
	W0930 21:10:58.240653   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:10:58.240661   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:10:58.240674   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:10:58.306061   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:10:58.306086   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:10:58.306100   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:10:58.386030   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:10:58.386073   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:10:58.425526   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:10:58.425562   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:10:58.483364   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:10:58.483409   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:00.998086   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:01.011934   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:01.012015   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:01.047923   73900 cri.go:89] found id: ""
	I0930 21:11:01.047951   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.047960   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:01.047966   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:01.048024   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:01.082126   73900 cri.go:89] found id: ""
	I0930 21:11:01.082159   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.082170   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:01.082176   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:01.082224   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:01.117746   73900 cri.go:89] found id: ""
	I0930 21:11:01.117775   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.117787   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:01.117794   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:01.117853   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:01.153034   73900 cri.go:89] found id: ""
	I0930 21:11:01.153059   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.153067   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:01.153072   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:01.153128   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:01.188102   73900 cri.go:89] found id: ""
	I0930 21:11:01.188125   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.188133   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:01.188139   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:01.188193   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:01.222120   73900 cri.go:89] found id: ""
	I0930 21:11:01.222147   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.222155   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:01.222161   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:01.222215   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:01.258899   73900 cri.go:89] found id: ""
	I0930 21:11:01.258929   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.258941   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:01.258949   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:01.259008   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:01.295473   73900 cri.go:89] found id: ""
	I0930 21:11:01.295504   73900 logs.go:276] 0 containers: []
	W0930 21:11:01.295512   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:01.295521   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:01.295551   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:01.349134   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:01.349181   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:01.363113   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:01.363147   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:01.436589   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:01.436609   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:01.436622   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:01.516384   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:01.516420   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:00.069406   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:02.568203   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:00.468104   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:02.968911   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:00.807318   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:03.307184   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:04.075114   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:04.089300   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:04.089375   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:04.124385   73900 cri.go:89] found id: ""
	I0930 21:11:04.124411   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.124419   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:04.124425   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:04.124491   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:04.158326   73900 cri.go:89] found id: ""
	I0930 21:11:04.158359   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.158367   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:04.158372   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:04.158419   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:04.193477   73900 cri.go:89] found id: ""
	I0930 21:11:04.193507   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.193516   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:04.193521   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:04.193577   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:04.231697   73900 cri.go:89] found id: ""
	I0930 21:11:04.231723   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.231731   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:04.231737   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:04.231805   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:04.265879   73900 cri.go:89] found id: ""
	I0930 21:11:04.265903   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.265910   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:04.265915   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:04.265960   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:04.301382   73900 cri.go:89] found id: ""
	I0930 21:11:04.301421   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.301432   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:04.301440   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:04.301505   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:04.337496   73900 cri.go:89] found id: ""
	I0930 21:11:04.337521   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.337529   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:04.337534   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:04.337584   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:04.372631   73900 cri.go:89] found id: ""
	I0930 21:11:04.372665   73900 logs.go:276] 0 containers: []
	W0930 21:11:04.372677   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:04.372700   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:04.372715   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:04.385279   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:04.385311   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:04.456700   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:04.456721   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:04.456732   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:04.537892   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:04.537933   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:04.574919   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:04.574947   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:07.128733   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:07.142625   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:07.142687   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:07.177450   73900 cri.go:89] found id: ""
	I0930 21:11:07.177475   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.177483   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:07.177488   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:07.177536   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:07.210158   73900 cri.go:89] found id: ""
	I0930 21:11:07.210184   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.210192   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:07.210197   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:07.210256   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:07.242623   73900 cri.go:89] found id: ""
	I0930 21:11:07.242648   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.242656   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:07.242661   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:07.242705   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:07.277779   73900 cri.go:89] found id: ""
	I0930 21:11:07.277810   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.277821   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:07.277827   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:07.277881   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:07.316232   73900 cri.go:89] found id: ""
	I0930 21:11:07.316257   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.316263   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:07.316269   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:07.316326   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:07.360277   73900 cri.go:89] found id: ""
	I0930 21:11:07.360311   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.360322   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:07.360329   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:07.360391   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:07.412146   73900 cri.go:89] found id: ""
	I0930 21:11:07.412171   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.412181   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:07.412187   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:07.412247   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:07.447179   73900 cri.go:89] found id: ""
	I0930 21:11:07.447209   73900 logs.go:276] 0 containers: []
	W0930 21:11:07.447217   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:07.447225   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:07.447235   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:07.496304   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:07.496340   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:07.510332   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:07.510373   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:07.581335   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:07.581375   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:07.581393   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:07.664522   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:07.664558   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:04.568787   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:07.069201   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:09.070583   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:05.468251   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:07.970913   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:05.308084   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:07.807712   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:10.201145   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:10.213605   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:10.213663   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:10.247875   73900 cri.go:89] found id: ""
	I0930 21:11:10.247904   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.247913   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:10.247918   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:10.247966   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:10.280855   73900 cri.go:89] found id: ""
	I0930 21:11:10.280889   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.280900   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:10.280907   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:10.280967   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:10.315638   73900 cri.go:89] found id: ""
	I0930 21:11:10.315661   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.315669   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:10.315675   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:10.315722   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:10.357059   73900 cri.go:89] found id: ""
	I0930 21:11:10.357086   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.357094   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:10.357100   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:10.357154   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:10.389969   73900 cri.go:89] found id: ""
	I0930 21:11:10.389997   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.390004   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:10.390009   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:10.390060   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:10.424424   73900 cri.go:89] found id: ""
	I0930 21:11:10.424454   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.424463   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:10.424469   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:10.424533   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:10.457608   73900 cri.go:89] found id: ""
	I0930 21:11:10.457638   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.457650   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:10.457657   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:10.457712   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:10.490215   73900 cri.go:89] found id: ""
	I0930 21:11:10.490244   73900 logs.go:276] 0 containers: []
	W0930 21:11:10.490253   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:10.490263   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:10.490278   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:10.554787   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:10.554814   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:10.554829   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:10.632428   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:10.632464   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:10.671018   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:10.671054   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:10.721187   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:10.721228   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:11.568643   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:13.568765   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:10.469296   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:12.968274   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:10.307487   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:12.307960   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:14.808087   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:13.234687   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:13.250680   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:13.250778   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:13.312468   73900 cri.go:89] found id: ""
	I0930 21:11:13.312499   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.312509   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:13.312516   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:13.312578   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:13.367051   73900 cri.go:89] found id: ""
	I0930 21:11:13.367073   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.367084   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:13.367091   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:13.367149   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:13.403019   73900 cri.go:89] found id: ""
	I0930 21:11:13.403055   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.403066   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:13.403074   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:13.403135   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:13.436942   73900 cri.go:89] found id: ""
	I0930 21:11:13.436967   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.436975   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:13.436981   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:13.437047   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:13.470491   73900 cri.go:89] found id: ""
	I0930 21:11:13.470515   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.470523   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:13.470528   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:13.470619   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:13.504078   73900 cri.go:89] found id: ""
	I0930 21:11:13.504112   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.504121   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:13.504127   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:13.504201   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:13.536245   73900 cri.go:89] found id: ""
	I0930 21:11:13.536271   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.536292   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:13.536297   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:13.536357   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:13.570794   73900 cri.go:89] found id: ""
	I0930 21:11:13.570817   73900 logs.go:276] 0 containers: []
	W0930 21:11:13.570827   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:13.570836   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:13.570850   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:13.647919   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:13.647941   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:13.647956   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:13.726113   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:13.726150   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:13.767916   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:13.767942   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:13.826362   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:13.826402   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:16.341252   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:16.354259   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:16.354344   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:16.388627   73900 cri.go:89] found id: ""
	I0930 21:11:16.388650   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.388658   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:16.388663   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:16.388714   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:16.424848   73900 cri.go:89] found id: ""
	I0930 21:11:16.424871   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.424878   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:16.424883   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:16.424941   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:16.460604   73900 cri.go:89] found id: ""
	I0930 21:11:16.460626   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.460635   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:16.460640   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:16.460688   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:16.495908   73900 cri.go:89] found id: ""
	I0930 21:11:16.495932   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.495940   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:16.495946   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:16.496000   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:16.531758   73900 cri.go:89] found id: ""
	I0930 21:11:16.531782   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.531790   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:16.531796   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:16.531853   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:16.566756   73900 cri.go:89] found id: ""
	I0930 21:11:16.566782   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.566792   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:16.566799   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:16.566864   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:16.601978   73900 cri.go:89] found id: ""
	I0930 21:11:16.602005   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.602012   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:16.602022   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:16.602081   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:16.636009   73900 cri.go:89] found id: ""
	I0930 21:11:16.636044   73900 logs.go:276] 0 containers: []
	W0930 21:11:16.636056   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:16.636066   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:16.636079   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:16.688750   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:16.688786   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:16.702364   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:16.702404   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:16.767119   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:16.767175   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:16.767188   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:16.842052   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:16.842095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:15.571440   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:18.068441   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:15.469030   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:17.970779   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:17.307424   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:19.807193   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:19.380570   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:19.394687   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:19.394816   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:19.427087   73900 cri.go:89] found id: ""
	I0930 21:11:19.427116   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.427124   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:19.427129   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:19.427178   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:19.461074   73900 cri.go:89] found id: ""
	I0930 21:11:19.461098   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.461108   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:19.461122   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:19.461183   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:19.494850   73900 cri.go:89] found id: ""
	I0930 21:11:19.494872   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.494880   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:19.494885   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:19.494943   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:19.533448   73900 cri.go:89] found id: ""
	I0930 21:11:19.533480   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.533493   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:19.533500   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:19.533562   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:19.569250   73900 cri.go:89] found id: ""
	I0930 21:11:19.569280   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.569291   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:19.569298   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:19.569383   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:19.603182   73900 cri.go:89] found id: ""
	I0930 21:11:19.603206   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.603213   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:19.603219   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:19.603268   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:19.637411   73900 cri.go:89] found id: ""
	I0930 21:11:19.637433   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.637441   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:19.637447   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:19.637500   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:19.672789   73900 cri.go:89] found id: ""
	I0930 21:11:19.672821   73900 logs.go:276] 0 containers: []
	W0930 21:11:19.672831   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:19.672841   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:19.672854   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:19.755002   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:19.755039   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:19.796499   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:19.796536   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:19.847235   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:19.847272   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:19.861007   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:19.861032   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:19.931214   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:22.431506   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:22.446129   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:22.446199   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:22.484093   73900 cri.go:89] found id: ""
	I0930 21:11:22.484119   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.484126   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:22.484132   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:22.484183   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:22.516949   73900 cri.go:89] found id: ""
	I0930 21:11:22.516986   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.516994   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:22.517001   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:22.517056   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:22.550848   73900 cri.go:89] found id: ""
	I0930 21:11:22.550883   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.550898   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:22.550906   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:22.550966   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:22.586459   73900 cri.go:89] found id: ""
	I0930 21:11:22.586490   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.586498   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:22.586505   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:22.586627   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:22.620538   73900 cri.go:89] found id: ""
	I0930 21:11:22.620566   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.620578   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:22.620586   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:22.620651   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:22.658256   73900 cri.go:89] found id: ""
	I0930 21:11:22.658279   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.658287   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:22.658292   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:22.658352   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:22.690316   73900 cri.go:89] found id: ""
	I0930 21:11:22.690349   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.690365   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:22.690371   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:22.690431   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:22.724234   73900 cri.go:89] found id: ""
	I0930 21:11:22.724264   73900 logs.go:276] 0 containers: []
	W0930 21:11:22.724275   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:22.724285   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:22.724299   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:20.570198   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:23.072974   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:20.468122   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:22.968686   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:22.307398   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:24.806972   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:22.777460   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:22.777503   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:22.790850   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:22.790879   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:22.866058   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:22.866079   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:22.866095   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:22.947447   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:22.947488   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:25.486733   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:25.499906   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:25.499976   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:25.533819   73900 cri.go:89] found id: ""
	I0930 21:11:25.533842   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.533850   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:25.533857   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:25.533906   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:25.568037   73900 cri.go:89] found id: ""
	I0930 21:11:25.568059   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.568066   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:25.568071   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:25.568129   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:25.601784   73900 cri.go:89] found id: ""
	I0930 21:11:25.601811   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.601819   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:25.601824   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:25.601876   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:25.638048   73900 cri.go:89] found id: ""
	I0930 21:11:25.638070   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.638078   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:25.638084   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:25.638140   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:25.669946   73900 cri.go:89] found id: ""
	I0930 21:11:25.669968   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.669976   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:25.669981   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:25.670028   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:25.701928   73900 cri.go:89] found id: ""
	I0930 21:11:25.701953   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.701961   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:25.701967   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:25.702025   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:25.744295   73900 cri.go:89] found id: ""
	I0930 21:11:25.744327   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.744335   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:25.744341   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:25.744398   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:25.780175   73900 cri.go:89] found id: ""
	I0930 21:11:25.780205   73900 logs.go:276] 0 containers: []
	W0930 21:11:25.780213   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:25.780221   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:25.780232   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:25.828774   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:25.828812   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:25.842624   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:25.842649   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:25.916408   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:25.916451   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:25.916469   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:25.997896   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:25.997932   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:25.570148   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:28.068628   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:25.467356   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:27.467782   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:29.467936   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:27.306939   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:29.807156   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:28.540994   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:28.553841   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:28.553904   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:28.588718   73900 cri.go:89] found id: ""
	I0930 21:11:28.588745   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.588754   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:28.588763   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:28.588809   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:28.636210   73900 cri.go:89] found id: ""
	I0930 21:11:28.636237   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.636245   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:28.636250   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:28.636312   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:28.668714   73900 cri.go:89] found id: ""
	I0930 21:11:28.668743   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.668751   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:28.668757   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:28.668804   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:28.700413   73900 cri.go:89] found id: ""
	I0930 21:11:28.700449   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.700462   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:28.700469   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:28.700522   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:28.733409   73900 cri.go:89] found id: ""
	I0930 21:11:28.733433   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.733441   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:28.733446   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:28.733494   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:28.766917   73900 cri.go:89] found id: ""
	I0930 21:11:28.766957   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.766970   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:28.766979   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:28.767046   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:28.801759   73900 cri.go:89] found id: ""
	I0930 21:11:28.801788   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.801798   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:28.801805   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:28.801851   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:28.840724   73900 cri.go:89] found id: ""
	I0930 21:11:28.840761   73900 logs.go:276] 0 containers: []
	W0930 21:11:28.840770   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:28.840790   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:28.840805   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:28.854426   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:28.854465   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:28.926650   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:28.926675   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:28.926690   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:29.005513   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:29.005569   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:29.047077   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:29.047102   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:31.603193   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:31.615563   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:31.615631   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:31.647656   73900 cri.go:89] found id: ""
	I0930 21:11:31.647685   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.647693   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:31.647699   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:31.647748   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:31.680004   73900 cri.go:89] found id: ""
	I0930 21:11:31.680037   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.680048   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:31.680056   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:31.680120   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:31.712562   73900 cri.go:89] found id: ""
	I0930 21:11:31.712588   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.712596   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:31.712602   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:31.712650   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:31.747692   73900 cri.go:89] found id: ""
	I0930 21:11:31.747724   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.747732   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:31.747738   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:31.747803   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:31.781441   73900 cri.go:89] found id: ""
	I0930 21:11:31.781464   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.781472   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:31.781478   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:31.781532   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:31.822227   73900 cri.go:89] found id: ""
	I0930 21:11:31.822252   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.822259   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:31.822265   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:31.822322   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:31.856531   73900 cri.go:89] found id: ""
	I0930 21:11:31.856555   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.856563   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:31.856568   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:31.856631   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:31.894562   73900 cri.go:89] found id: ""
	I0930 21:11:31.894585   73900 logs.go:276] 0 containers: []
	W0930 21:11:31.894593   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:31.894602   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:31.894618   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:31.946233   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:31.946271   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:31.960713   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:31.960744   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:32.036479   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:32.036497   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:32.036509   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:32.111442   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:32.111477   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:30.068975   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:32.069794   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:31.468374   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:33.468986   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:31.809169   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:34.307372   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:34.651545   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:34.664058   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:34.664121   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:34.697506   73900 cri.go:89] found id: ""
	I0930 21:11:34.697530   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.697539   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:34.697545   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:34.697599   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:34.730297   73900 cri.go:89] found id: ""
	I0930 21:11:34.730326   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.730334   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:34.730339   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:34.730390   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:34.762251   73900 cri.go:89] found id: ""
	I0930 21:11:34.762278   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.762286   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:34.762291   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:34.762358   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:34.803028   73900 cri.go:89] found id: ""
	I0930 21:11:34.803058   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.803068   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:34.803074   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:34.803122   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:34.840063   73900 cri.go:89] found id: ""
	I0930 21:11:34.840097   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.840110   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:34.840118   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:34.840192   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:34.878641   73900 cri.go:89] found id: ""
	I0930 21:11:34.878675   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.878686   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:34.878693   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:34.878745   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:34.910799   73900 cri.go:89] found id: ""
	I0930 21:11:34.910823   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.910830   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:34.910837   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:34.910899   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:34.947748   73900 cri.go:89] found id: ""
	I0930 21:11:34.947782   73900 logs.go:276] 0 containers: []
	W0930 21:11:34.947795   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:34.947806   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:34.947821   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:35.026490   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:35.026514   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:35.026529   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:35.115504   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:35.115559   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:35.158629   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:35.158659   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:35.211011   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:35.211052   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:37.726260   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:37.739137   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:37.739222   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:34.568166   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:36.569720   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:39.069371   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:35.968574   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:38.467872   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:36.807057   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:38.807376   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:37.779980   73900 cri.go:89] found id: ""
	I0930 21:11:37.780009   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.780018   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:37.780024   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:37.780076   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:37.813936   73900 cri.go:89] found id: ""
	I0930 21:11:37.813961   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.813969   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:37.813975   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:37.814021   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:37.851150   73900 cri.go:89] found id: ""
	I0930 21:11:37.851176   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.851186   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:37.851193   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:37.851256   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:37.891855   73900 cri.go:89] found id: ""
	I0930 21:11:37.891881   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.891889   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:37.891894   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:37.891943   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:37.929234   73900 cri.go:89] found id: ""
	I0930 21:11:37.929269   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.929281   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:37.929288   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:37.929359   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:37.962350   73900 cri.go:89] found id: ""
	I0930 21:11:37.962378   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.962386   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:37.962391   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:37.962441   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:37.996727   73900 cri.go:89] found id: ""
	I0930 21:11:37.996752   73900 logs.go:276] 0 containers: []
	W0930 21:11:37.996760   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:37.996765   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:37.996819   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:38.029959   73900 cri.go:89] found id: ""
	I0930 21:11:38.029991   73900 logs.go:276] 0 containers: []
	W0930 21:11:38.029999   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:38.030008   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:38.030019   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:38.079836   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:38.079875   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:38.093208   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:38.093236   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:38.168839   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:38.168862   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:38.168873   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:38.244747   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:38.244783   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:40.788841   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:40.802419   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:40.802491   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:40.837138   73900 cri.go:89] found id: ""
	I0930 21:11:40.837175   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.837186   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:40.837193   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:40.837255   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:40.870947   73900 cri.go:89] found id: ""
	I0930 21:11:40.870977   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.870987   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:40.870993   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:40.871040   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:40.905004   73900 cri.go:89] found id: ""
	I0930 21:11:40.905033   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.905046   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:40.905053   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:40.905104   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:40.936909   73900 cri.go:89] found id: ""
	I0930 21:11:40.936937   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.936945   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:40.936952   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:40.937015   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:40.972601   73900 cri.go:89] found id: ""
	I0930 21:11:40.972630   73900 logs.go:276] 0 containers: []
	W0930 21:11:40.972641   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:40.972646   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:40.972704   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:41.007539   73900 cri.go:89] found id: ""
	I0930 21:11:41.007583   73900 logs.go:276] 0 containers: []
	W0930 21:11:41.007594   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:41.007602   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:41.007661   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:41.042049   73900 cri.go:89] found id: ""
	I0930 21:11:41.042075   73900 logs.go:276] 0 containers: []
	W0930 21:11:41.042084   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:41.042091   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:41.042153   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:41.075313   73900 cri.go:89] found id: ""
	I0930 21:11:41.075398   73900 logs.go:276] 0 containers: []
	W0930 21:11:41.075414   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:41.075424   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:41.075440   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:41.128683   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:41.128726   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:41.142533   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:41.142560   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:41.210149   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:41.210176   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:41.210191   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:41.286547   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:41.286590   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
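The cycle that just completed repeats throughout this dump: process 73900 probes for a kube-apiserver process with pgrep, asks crictl for containers matching each control-plane component, finds none, and then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal sketch of that kind of per-component check, assuming crictl is reachable via sudo on the node (illustrative only, not minikube's cri.go implementation; component names and crictl flags are copied from the lines above):

// component_check.go - list containers (any state) for each control-plane
// component, exactly as the "Run: sudo crictl ps -a --quiet --name=..."
// lines above do. Assumes sudo and crictl exist on the node.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// --quiet prints only container IDs; an empty result is the
		// "No container was found matching ..." case in the log.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
		} else {
			fmt.Printf("%q containers: %v\n", name, ids)
		}
	}
}

With every component check coming back empty, the describe-nodes attempts further down can only fail, which is what the repeated connection-refused errors show.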
	I0930 21:11:41.070042   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:43.570819   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:40.969912   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:43.468434   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:40.808294   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:43.307628   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
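The pod_ready lines interleaved here come from other test processes (73256, 73375, 73707) polling whether their metrics-server pods have reached the Ready condition. A minimal client-go sketch of such a Ready-condition poll, assuming a reachable kubeconfig; the namespace, pod name, kubeconfig path, and the roughly four-minute deadline are taken from the log, and the 2-second interval only approximates the spacing of the entries (this is not minikube's pod_ready.go):

// pod_ready_check.go - poll a pod until its Ready condition is True or a
// deadline passes. Assumes the kubeconfig path is valid where this runs.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	const ns, pod = "kube-system", "metrics-server-6867b74b74-hkp9m" // names as seen in the log
	deadline := time.Now().Add(4 * time.Minute)                      // the log reports a ~4m wait before giving up
	for time.Now().Before(deadline) {
		p, err := client.CoreV1().Pods(ns).Get(context.TODO(), pod, metav1.GetOptions{})
		if err == nil {
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady {
					fmt.Printf("pod %q Ready=%s\n", pod, c.Status)
					if c.Status == corev1.ConditionTrue {
						return
					}
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("deadline exceeded waiting for Ready")
}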
	I0930 21:11:43.828902   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:43.842047   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:43.842127   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:43.876147   73900 cri.go:89] found id: ""
	I0930 21:11:43.876177   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.876187   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:43.876194   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:43.876287   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:43.916351   73900 cri.go:89] found id: ""
	I0930 21:11:43.916383   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.916394   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:43.916404   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:43.916457   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:43.948853   73900 cri.go:89] found id: ""
	I0930 21:11:43.948883   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.948894   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:43.948900   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:43.948967   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:43.983525   73900 cri.go:89] found id: ""
	I0930 21:11:43.983577   73900 logs.go:276] 0 containers: []
	W0930 21:11:43.983589   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:43.983597   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:43.983656   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:44.021560   73900 cri.go:89] found id: ""
	I0930 21:11:44.021594   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.021606   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:44.021614   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:44.021684   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:44.057307   73900 cri.go:89] found id: ""
	I0930 21:11:44.057342   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.057353   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:44.057361   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:44.057418   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:44.091120   73900 cri.go:89] found id: ""
	I0930 21:11:44.091145   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.091155   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:44.091162   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:44.091223   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:44.125781   73900 cri.go:89] found id: ""
	I0930 21:11:44.125808   73900 logs.go:276] 0 containers: []
	W0930 21:11:44.125817   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:44.125827   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:44.125842   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:44.138699   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:44.138726   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:44.208976   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:44.209009   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:44.209026   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:44.285552   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:44.285593   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:44.323412   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:44.323449   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:46.875210   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:46.888532   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:46.888596   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:46.921260   73900 cri.go:89] found id: ""
	I0930 21:11:46.921285   73900 logs.go:276] 0 containers: []
	W0930 21:11:46.921293   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:46.921299   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:46.921357   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:46.954645   73900 cri.go:89] found id: ""
	I0930 21:11:46.954675   73900 logs.go:276] 0 containers: []
	W0930 21:11:46.954683   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:46.954688   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:46.954749   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:46.988424   73900 cri.go:89] found id: ""
	I0930 21:11:46.988457   73900 logs.go:276] 0 containers: []
	W0930 21:11:46.988468   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:46.988475   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:46.988535   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:47.022635   73900 cri.go:89] found id: ""
	I0930 21:11:47.022664   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.022675   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:47.022682   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:47.022744   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:47.056497   73900 cri.go:89] found id: ""
	I0930 21:11:47.056523   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.056530   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:47.056536   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:47.056595   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:47.094983   73900 cri.go:89] found id: ""
	I0930 21:11:47.095011   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.095021   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:47.095028   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:47.095097   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:47.147567   73900 cri.go:89] found id: ""
	I0930 21:11:47.147595   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.147606   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:47.147613   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:47.147692   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:47.184878   73900 cri.go:89] found id: ""
	I0930 21:11:47.184908   73900 logs.go:276] 0 containers: []
	W0930 21:11:47.184919   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:47.184930   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:47.184943   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:47.258581   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:47.258615   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:47.303068   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:47.303100   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:47.358749   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:47.358789   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:47.372492   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:47.372531   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:47.443984   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:46.069421   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:48.569013   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:45.968422   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:47.968876   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:45.808341   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:48.306627   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:49.944644   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:49.958045   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:49.958124   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:49.993053   73900 cri.go:89] found id: ""
	I0930 21:11:49.993088   73900 logs.go:276] 0 containers: []
	W0930 21:11:49.993100   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:49.993107   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:49.993168   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:50.026171   73900 cri.go:89] found id: ""
	I0930 21:11:50.026197   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.026205   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:50.026210   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:50.026269   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:50.060462   73900 cri.go:89] found id: ""
	I0930 21:11:50.060492   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.060502   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:50.060509   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:50.060567   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:50.095385   73900 cri.go:89] found id: ""
	I0930 21:11:50.095414   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.095425   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:50.095432   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:50.095507   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:50.127275   73900 cri.go:89] found id: ""
	I0930 21:11:50.127300   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.127308   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:50.127318   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:50.127378   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:50.159810   73900 cri.go:89] found id: ""
	I0930 21:11:50.159836   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.159845   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:50.159850   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:50.159906   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:50.191651   73900 cri.go:89] found id: ""
	I0930 21:11:50.191684   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.191695   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:50.191702   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:50.191774   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:50.225772   73900 cri.go:89] found id: ""
	I0930 21:11:50.225799   73900 logs.go:276] 0 containers: []
	W0930 21:11:50.225809   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:50.225819   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:50.225837   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:50.310189   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:50.310223   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:50.348934   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:50.348965   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:50.400666   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:50.400703   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:50.415810   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:50.415843   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:50.483773   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
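Every describe-nodes attempt above fails with the same "connection refused" on localhost:8443, which is consistent with the crictl checks finding no kube-apiserver container: nothing is listening on the apiserver port at all, rather than a running server rejecting the request. A minimal sketch, run on the node itself, that separates those two cases (the endpoint is taken from the error text; illustrative only):

// apiserver_probe.go - distinguish "nothing listening" (connection refused,
// as in the failures above) from a reachable but unhealthy apiserver.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no kube-apiserver container running, this is the same
		// "connection refused" that kubectl reports above.
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443; a failure here would be TLS/auth, not a missing apiserver")
}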
	I0930 21:11:51.069928   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:53.070065   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:50.469516   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:52.968367   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:54.968624   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:50.307903   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:52.807610   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:52.984701   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:52.997669   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:52.997745   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:53.034012   73900 cri.go:89] found id: ""
	I0930 21:11:53.034044   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.034055   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:53.034063   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:53.034121   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:53.068192   73900 cri.go:89] found id: ""
	I0930 21:11:53.068215   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.068222   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:53.068228   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:53.068285   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:53.104683   73900 cri.go:89] found id: ""
	I0930 21:11:53.104710   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.104719   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:53.104724   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:53.104778   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:53.138713   73900 cri.go:89] found id: ""
	I0930 21:11:53.138745   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.138753   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:53.138759   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:53.138814   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:53.173955   73900 cri.go:89] found id: ""
	I0930 21:11:53.173982   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.173994   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:53.174001   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:53.174060   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:53.205942   73900 cri.go:89] found id: ""
	I0930 21:11:53.205970   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.205980   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:53.205987   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:53.206052   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:53.241739   73900 cri.go:89] found id: ""
	I0930 21:11:53.241767   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.241776   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:53.241782   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:53.241832   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:53.275328   73900 cri.go:89] found id: ""
	I0930 21:11:53.275363   73900 logs.go:276] 0 containers: []
	W0930 21:11:53.275372   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:53.275381   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:53.275397   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:53.313732   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:53.313761   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:53.364974   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:53.365011   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:53.377970   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:53.377999   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:53.445341   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:53.445370   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:53.445388   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:56.025958   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:56.038367   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:56.038434   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:56.074721   73900 cri.go:89] found id: ""
	I0930 21:11:56.074756   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.074767   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:56.074781   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:56.074846   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:56.111491   73900 cri.go:89] found id: ""
	I0930 21:11:56.111525   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.111550   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:56.111572   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:56.111626   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:56.145660   73900 cri.go:89] found id: ""
	I0930 21:11:56.145690   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.145701   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:56.145708   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:56.145769   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:56.180865   73900 cri.go:89] found id: ""
	I0930 21:11:56.180891   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.180901   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:56.180908   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:56.180971   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:56.213681   73900 cri.go:89] found id: ""
	I0930 21:11:56.213707   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.213716   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:56.213721   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:56.213772   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:56.246683   73900 cri.go:89] found id: ""
	I0930 21:11:56.246711   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.246719   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:56.246724   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:56.246774   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:56.279651   73900 cri.go:89] found id: ""
	I0930 21:11:56.279679   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.279687   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:56.279692   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:56.279746   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:56.316701   73900 cri.go:89] found id: ""
	I0930 21:11:56.316727   73900 logs.go:276] 0 containers: []
	W0930 21:11:56.316735   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:56.316743   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:56.316753   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:56.329879   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:56.329905   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:56.399919   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:56.399949   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:56.399964   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:56.480200   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:56.480237   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:56.517755   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:56.517782   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:11:55.568782   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:58.068718   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:57.468492   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:59.968123   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:55.307809   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:57.308095   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:59.807355   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:11:59.070677   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:11:59.085884   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:11:59.085956   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:11:59.119580   73900 cri.go:89] found id: ""
	I0930 21:11:59.119606   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.119615   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:11:59.119621   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:11:59.119667   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:11:59.152087   73900 cri.go:89] found id: ""
	I0930 21:11:59.152111   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.152120   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:11:59.152127   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:11:59.152172   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:11:59.186177   73900 cri.go:89] found id: ""
	I0930 21:11:59.186205   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.186213   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:11:59.186220   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:11:59.186276   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:11:59.218800   73900 cri.go:89] found id: ""
	I0930 21:11:59.218821   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.218829   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:11:59.218835   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:11:59.218893   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:11:59.254335   73900 cri.go:89] found id: ""
	I0930 21:11:59.254361   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.254372   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:11:59.254378   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:11:59.254432   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:11:59.292406   73900 cri.go:89] found id: ""
	I0930 21:11:59.292441   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.292453   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:11:59.292460   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:11:59.292522   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:11:59.333352   73900 cri.go:89] found id: ""
	I0930 21:11:59.333388   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.333399   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:11:59.333406   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:11:59.333481   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:11:59.377031   73900 cri.go:89] found id: ""
	I0930 21:11:59.377056   73900 logs.go:276] 0 containers: []
	W0930 21:11:59.377064   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:11:59.377072   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:11:59.377084   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:11:59.392626   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:11:59.392655   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:11:59.473714   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:11:59.473741   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:11:59.473754   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:11:59.548895   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:11:59.548931   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:11:59.589007   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:11:59.589039   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:02.139243   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:02.152335   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:02.152415   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:02.186942   73900 cri.go:89] found id: ""
	I0930 21:12:02.186980   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.186991   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:02.186999   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:02.187061   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:02.219738   73900 cri.go:89] found id: ""
	I0930 21:12:02.219759   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.219768   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:02.219773   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:02.219820   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:02.253667   73900 cri.go:89] found id: ""
	I0930 21:12:02.253698   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.253707   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:02.253712   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:02.253760   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:02.290078   73900 cri.go:89] found id: ""
	I0930 21:12:02.290105   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.290115   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:02.290122   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:02.290182   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:02.326408   73900 cri.go:89] found id: ""
	I0930 21:12:02.326436   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.326448   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:02.326455   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:02.326509   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:02.360608   73900 cri.go:89] found id: ""
	I0930 21:12:02.360641   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.360649   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:02.360655   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:02.360714   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:02.396140   73900 cri.go:89] found id: ""
	I0930 21:12:02.396166   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.396176   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:02.396182   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:02.396236   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:02.429905   73900 cri.go:89] found id: ""
	I0930 21:12:02.429947   73900 logs.go:276] 0 containers: []
	W0930 21:12:02.429958   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:02.429968   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:02.429986   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:02.506600   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:02.506645   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:02.549325   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:02.549354   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:02.603614   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:02.603659   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:02.618832   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:02.618859   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:02.692491   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:00.070569   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:02.569436   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:01.968240   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:04.468583   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:02.306973   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:04.308182   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:05.193131   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:05.206133   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:05.206192   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:05.238403   73900 cri.go:89] found id: ""
	I0930 21:12:05.238431   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.238439   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:05.238447   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:05.238523   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:05.271261   73900 cri.go:89] found id: ""
	I0930 21:12:05.271290   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.271303   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:05.271310   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:05.271378   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:05.307718   73900 cri.go:89] found id: ""
	I0930 21:12:05.307749   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.307760   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:05.307767   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:05.307832   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:05.341336   73900 cri.go:89] found id: ""
	I0930 21:12:05.341379   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.341390   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:05.341398   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:05.341461   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:05.374998   73900 cri.go:89] found id: ""
	I0930 21:12:05.375024   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.375032   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:05.375037   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:05.375085   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:05.410133   73900 cri.go:89] found id: ""
	I0930 21:12:05.410163   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.410174   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:05.410182   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:05.410248   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:05.446197   73900 cri.go:89] found id: ""
	I0930 21:12:05.446227   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.446238   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:05.446246   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:05.446305   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:05.480638   73900 cri.go:89] found id: ""
	I0930 21:12:05.480667   73900 logs.go:276] 0 containers: []
	W0930 21:12:05.480683   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:05.480691   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:05.480702   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:05.532473   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:05.532512   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:05.547068   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:05.547096   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:05.621444   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:05.621472   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:05.621487   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:05.707712   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:05.707767   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:05.068363   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:07.069531   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:06.969695   73375 pod_ready.go:103] pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:06.969727   73375 pod_ready.go:82] duration metric: took 4m0.008001407s for pod "metrics-server-6867b74b74-c2wpn" in "kube-system" namespace to be "Ready" ...
	E0930 21:12:06.969736   73375 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0930 21:12:06.969743   73375 pod_ready.go:39] duration metric: took 4m4.053054405s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:12:06.969757   73375 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:12:06.969781   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:06.969835   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:07.024708   73375 cri.go:89] found id: "249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:07.024730   73375 cri.go:89] found id: ""
	I0930 21:12:07.024737   73375 logs.go:276] 1 containers: [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122]
	I0930 21:12:07.024805   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.029375   73375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:07.029439   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:07.063656   73375 cri.go:89] found id: "e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:07.063684   73375 cri.go:89] found id: ""
	I0930 21:12:07.063695   73375 logs.go:276] 1 containers: [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c]
	I0930 21:12:07.063754   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.068071   73375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:07.068126   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:07.102636   73375 cri.go:89] found id: "d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:07.102665   73375 cri.go:89] found id: ""
	I0930 21:12:07.102675   73375 logs.go:276] 1 containers: [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7]
	I0930 21:12:07.102733   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.106711   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:07.106791   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:07.142676   73375 cri.go:89] found id: "438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:07.142698   73375 cri.go:89] found id: ""
	I0930 21:12:07.142708   73375 logs.go:276] 1 containers: [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c]
	I0930 21:12:07.142766   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.146979   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:07.147041   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:07.189192   73375 cri.go:89] found id: "a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:07.189223   73375 cri.go:89] found id: ""
	I0930 21:12:07.189232   73375 logs.go:276] 1 containers: [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f]
	I0930 21:12:07.189283   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.193408   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:07.193484   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:07.230538   73375 cri.go:89] found id: "1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:07.230562   73375 cri.go:89] found id: ""
	I0930 21:12:07.230571   73375 logs.go:276] 1 containers: [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf]
	I0930 21:12:07.230630   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.235482   73375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:07.235573   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:07.274180   73375 cri.go:89] found id: ""
	I0930 21:12:07.274215   73375 logs.go:276] 0 containers: []
	W0930 21:12:07.274226   73375 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:07.274233   73375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:07.274312   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:07.312851   73375 cri.go:89] found id: "6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:07.312876   73375 cri.go:89] found id: "298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:07.312882   73375 cri.go:89] found id: ""
	I0930 21:12:07.312890   73375 logs.go:276] 2 containers: [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e]
	I0930 21:12:07.312947   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.317386   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:07.321912   73375 logs.go:123] Gathering logs for kube-proxy [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f] ...
	I0930 21:12:07.321940   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:07.361674   73375 logs.go:123] Gathering logs for storage-provisioner [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55] ...
	I0930 21:12:07.361701   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:07.398555   73375 logs.go:123] Gathering logs for storage-provisioner [298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e] ...
	I0930 21:12:07.398615   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:07.432511   73375 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:07.432540   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:07.919639   73375 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:07.919678   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:07.935038   73375 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:07.935067   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:08.059404   73375 logs.go:123] Gathering logs for kube-apiserver [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122] ...
	I0930 21:12:08.059435   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:08.114569   73375 logs.go:123] Gathering logs for kube-scheduler [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c] ...
	I0930 21:12:08.114605   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:08.153409   73375 logs.go:123] Gathering logs for container status ...
	I0930 21:12:08.153447   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:08.193155   73375 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:08.193187   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:08.260774   73375 logs.go:123] Gathering logs for etcd [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c] ...
	I0930 21:12:08.260814   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:08.351488   73375 logs.go:123] Gathering logs for coredns [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7] ...
	I0930 21:12:08.351519   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:08.387971   73375 logs.go:123] Gathering logs for kube-controller-manager [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf] ...
	I0930 21:12:08.388012   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:06.805971   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:08.807886   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:08.248038   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:08.261409   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:08.261485   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:08.305564   73900 cri.go:89] found id: ""
	I0930 21:12:08.305591   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.305601   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:08.305610   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:08.305669   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:08.347816   73900 cri.go:89] found id: ""
	I0930 21:12:08.347844   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.347852   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:08.347858   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:08.347927   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:08.381662   73900 cri.go:89] found id: ""
	I0930 21:12:08.381695   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.381705   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:08.381712   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:08.381829   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:08.427366   73900 cri.go:89] found id: ""
	I0930 21:12:08.427396   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.427406   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:08.427413   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:08.427476   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:08.463419   73900 cri.go:89] found id: ""
	I0930 21:12:08.463443   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.463451   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:08.463457   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:08.463508   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:08.496999   73900 cri.go:89] found id: ""
	I0930 21:12:08.497023   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.497033   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:08.497040   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:08.497098   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:08.530410   73900 cri.go:89] found id: ""
	I0930 21:12:08.530434   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.530442   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:08.530447   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:08.530495   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:08.563191   73900 cri.go:89] found id: ""
	I0930 21:12:08.563224   73900 logs.go:276] 0 containers: []
	W0930 21:12:08.563235   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:08.563244   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:08.563258   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:08.640305   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:08.640341   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:08.676404   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:08.676431   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:08.729676   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:08.729736   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:08.743282   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:08.743310   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:08.811334   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:11.311643   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:11.329153   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:11.329229   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:11.369804   73900 cri.go:89] found id: ""
	I0930 21:12:11.369829   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.369838   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:11.369843   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:11.369896   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:11.408530   73900 cri.go:89] found id: ""
	I0930 21:12:11.408558   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.408569   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:11.408580   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:11.408663   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:11.446123   73900 cri.go:89] found id: ""
	I0930 21:12:11.446147   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.446155   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:11.446160   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:11.446206   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:11.484019   73900 cri.go:89] found id: ""
	I0930 21:12:11.484044   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.484052   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:11.484057   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:11.484118   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:11.521934   73900 cri.go:89] found id: ""
	I0930 21:12:11.521961   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.521971   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:11.521979   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:11.522042   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:11.561253   73900 cri.go:89] found id: ""
	I0930 21:12:11.561283   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.561293   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:11.561299   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:11.561352   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:11.602610   73900 cri.go:89] found id: ""
	I0930 21:12:11.602637   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.602648   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:11.602655   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:11.602760   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:11.637146   73900 cri.go:89] found id: ""
	I0930 21:12:11.637174   73900 logs.go:276] 0 containers: []
	W0930 21:12:11.637185   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:11.637194   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:11.637208   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:11.707627   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:11.707651   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:11.707668   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:11.786047   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:11.786091   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:11.827128   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:11.827157   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:11.885504   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:11.885542   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:09.569584   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:11.570031   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:14.068184   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:10.950921   73375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:10.967834   73375 api_server.go:72] duration metric: took 4m15.348038807s to wait for apiserver process to appear ...
	I0930 21:12:10.967876   73375 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:12:10.967922   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:10.967990   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:11.006632   73375 cri.go:89] found id: "249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:11.006667   73375 cri.go:89] found id: ""
	I0930 21:12:11.006677   73375 logs.go:276] 1 containers: [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122]
	I0930 21:12:11.006738   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.010931   73375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:11.010994   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:11.045855   73375 cri.go:89] found id: "e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:11.045882   73375 cri.go:89] found id: ""
	I0930 21:12:11.045893   73375 logs.go:276] 1 containers: [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c]
	I0930 21:12:11.045953   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.050058   73375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:11.050134   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:11.090954   73375 cri.go:89] found id: "d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:11.090980   73375 cri.go:89] found id: ""
	I0930 21:12:11.090990   73375 logs.go:276] 1 containers: [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7]
	I0930 21:12:11.091041   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.095073   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:11.095150   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:11.137413   73375 cri.go:89] found id: "438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:11.137448   73375 cri.go:89] found id: ""
	I0930 21:12:11.137458   73375 logs.go:276] 1 containers: [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c]
	I0930 21:12:11.137516   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.141559   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:11.141638   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:11.176921   73375 cri.go:89] found id: "a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:11.176952   73375 cri.go:89] found id: ""
	I0930 21:12:11.176961   73375 logs.go:276] 1 containers: [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f]
	I0930 21:12:11.177010   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.181095   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:11.181158   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:11.215117   73375 cri.go:89] found id: "1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:11.215141   73375 cri.go:89] found id: ""
	I0930 21:12:11.215148   73375 logs.go:276] 1 containers: [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf]
	I0930 21:12:11.215195   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.218947   73375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:11.219003   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:11.253901   73375 cri.go:89] found id: ""
	I0930 21:12:11.253937   73375 logs.go:276] 0 containers: []
	W0930 21:12:11.253948   73375 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:11.253955   73375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:11.254010   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:11.293408   73375 cri.go:89] found id: "6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:11.293434   73375 cri.go:89] found id: "298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:11.293440   73375 cri.go:89] found id: ""
	I0930 21:12:11.293448   73375 logs.go:276] 2 containers: [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e]
	I0930 21:12:11.293562   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.297829   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:11.302572   73375 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:11.302596   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:11.378000   73375 logs.go:123] Gathering logs for coredns [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7] ...
	I0930 21:12:11.378037   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:11.415382   73375 logs.go:123] Gathering logs for kube-proxy [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f] ...
	I0930 21:12:11.415414   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:11.453703   73375 logs.go:123] Gathering logs for kube-controller-manager [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf] ...
	I0930 21:12:11.453729   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:11.517749   73375 logs.go:123] Gathering logs for storage-provisioner [298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e] ...
	I0930 21:12:11.517780   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:11.556543   73375 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:11.556576   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:12.023270   73375 logs.go:123] Gathering logs for container status ...
	I0930 21:12:12.023310   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:12.071138   73375 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:12.071170   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:12.086915   73375 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:12.086944   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:12.200046   73375 logs.go:123] Gathering logs for kube-apiserver [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122] ...
	I0930 21:12:12.200077   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:12.241447   73375 logs.go:123] Gathering logs for etcd [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c] ...
	I0930 21:12:12.241475   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:12.296574   73375 logs.go:123] Gathering logs for kube-scheduler [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c] ...
	I0930 21:12:12.296607   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:12.341982   73375 logs.go:123] Gathering logs for storage-provisioner [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55] ...
	I0930 21:12:12.342009   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:14.877590   73375 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0930 21:12:14.882913   73375 api_server.go:279] https://192.168.61.93:8443/healthz returned 200:
	ok
	I0930 21:12:14.884088   73375 api_server.go:141] control plane version: v1.31.1
	I0930 21:12:14.884106   73375 api_server.go:131] duration metric: took 3.916223308s to wait for apiserver health ...
	I0930 21:12:14.884113   73375 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:12:14.884134   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:14.884185   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:14.926932   73375 cri.go:89] found id: "249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:14.926952   73375 cri.go:89] found id: ""
	I0930 21:12:14.926960   73375 logs.go:276] 1 containers: [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122]
	I0930 21:12:14.927003   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:14.931044   73375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:14.931106   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:14.967622   73375 cri.go:89] found id: "e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:14.967645   73375 cri.go:89] found id: ""
	I0930 21:12:14.967652   73375 logs.go:276] 1 containers: [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c]
	I0930 21:12:14.967698   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:14.972152   73375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:14.972221   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:11.307501   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:13.307687   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:14.400848   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:14.413794   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:14.413882   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:14.449799   73900 cri.go:89] found id: ""
	I0930 21:12:14.449830   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.449841   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:14.449849   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:14.449902   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:14.486301   73900 cri.go:89] found id: ""
	I0930 21:12:14.486330   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.486357   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:14.486365   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:14.486427   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:14.520451   73900 cri.go:89] found id: ""
	I0930 21:12:14.520479   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.520487   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:14.520497   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:14.520558   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:14.554056   73900 cri.go:89] found id: ""
	I0930 21:12:14.554095   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.554107   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:14.554114   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:14.554178   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:14.594054   73900 cri.go:89] found id: ""
	I0930 21:12:14.594080   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.594088   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:14.594094   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:14.594142   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:14.630225   73900 cri.go:89] found id: ""
	I0930 21:12:14.630255   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.630278   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:14.630284   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:14.630335   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:14.663006   73900 cri.go:89] found id: ""
	I0930 21:12:14.663043   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.663054   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:14.663061   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:14.663119   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:14.699815   73900 cri.go:89] found id: ""
	I0930 21:12:14.699845   73900 logs.go:276] 0 containers: []
	W0930 21:12:14.699858   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:14.699870   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:14.699886   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:14.751465   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:14.751509   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:14.766401   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:14.766432   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:14.832979   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:12:14.833002   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:14.833016   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:14.918011   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:14.918051   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:17.458886   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:17.471833   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:17.471918   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:17.505109   73900 cri.go:89] found id: ""
	I0930 21:12:17.505135   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.505145   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:12:17.505151   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:17.505213   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:17.538091   73900 cri.go:89] found id: ""
	I0930 21:12:17.538118   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.538129   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:12:17.538136   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:17.538308   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:17.571668   73900 cri.go:89] found id: ""
	I0930 21:12:17.571694   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.571705   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:12:17.571712   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:17.571770   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:17.607391   73900 cri.go:89] found id: ""
	I0930 21:12:17.607431   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.607442   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:12:17.607452   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:17.607519   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:17.643271   73900 cri.go:89] found id: ""
	I0930 21:12:17.643297   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.643305   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:12:17.643313   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:17.643382   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:17.676653   73900 cri.go:89] found id: ""
	I0930 21:12:17.676687   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.676698   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:12:17.676708   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:17.676772   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:17.709570   73900 cri.go:89] found id: ""
	I0930 21:12:17.709602   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.709610   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:17.709615   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:12:17.709671   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:12:17.747857   73900 cri.go:89] found id: ""
	I0930 21:12:17.747883   73900 logs.go:276] 0 containers: []
	W0930 21:12:17.747891   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:12:17.747902   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:17.747915   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:15.010874   73375 cri.go:89] found id: "d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:15.010898   73375 cri.go:89] found id: ""
	I0930 21:12:15.010905   73375 logs.go:276] 1 containers: [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7]
	I0930 21:12:15.010947   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.015490   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:15.015582   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:15.051182   73375 cri.go:89] found id: "438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:15.051210   73375 cri.go:89] found id: ""
	I0930 21:12:15.051220   73375 logs.go:276] 1 containers: [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c]
	I0930 21:12:15.051291   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.055057   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:15.055107   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:15.093126   73375 cri.go:89] found id: "a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:15.093150   73375 cri.go:89] found id: ""
	I0930 21:12:15.093159   73375 logs.go:276] 1 containers: [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f]
	I0930 21:12:15.093214   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.097138   73375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:15.097200   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:15.131676   73375 cri.go:89] found id: "1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:15.131704   73375 cri.go:89] found id: ""
	I0930 21:12:15.131716   73375 logs.go:276] 1 containers: [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf]
	I0930 21:12:15.131773   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.135550   73375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:15.135620   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:15.170579   73375 cri.go:89] found id: ""
	I0930 21:12:15.170604   73375 logs.go:276] 0 containers: []
	W0930 21:12:15.170612   73375 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:15.170618   73375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:15.170672   73375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:15.205190   73375 cri.go:89] found id: "6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:15.205216   73375 cri.go:89] found id: "298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:15.205222   73375 cri.go:89] found id: ""
	I0930 21:12:15.205231   73375 logs.go:276] 2 containers: [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e]
	I0930 21:12:15.205287   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.209426   73375 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.212981   73375 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:15.213002   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:15.281543   73375 logs.go:123] Gathering logs for kube-proxy [a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f] ...
	I0930 21:12:15.281582   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ce5450390e92a8c91385e37ddf6c18d9deff4a97ab7c306e1ea91ddf41035f"
	I0930 21:12:15.325855   73375 logs.go:123] Gathering logs for container status ...
	I0930 21:12:15.325895   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:15.367382   73375 logs.go:123] Gathering logs for etcd [e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c] ...
	I0930 21:12:15.367429   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7334f6f137876b1d4664a959b183bd50e97fb948f49515c54996c430236523c"
	I0930 21:12:15.441395   73375 logs.go:123] Gathering logs for coredns [d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7] ...
	I0930 21:12:15.441432   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730f13030b2a1256ed7f64173ac90ed99d64713b37b324cc2b664707ccda6f7"
	I0930 21:12:15.482487   73375 logs.go:123] Gathering logs for kube-scheduler [438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c] ...
	I0930 21:12:15.482518   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 438729352d121b15ac37ceae8772845418eb48315781e3629317be1cf4576e8c"
	I0930 21:12:15.520298   73375 logs.go:123] Gathering logs for kube-controller-manager [1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf] ...
	I0930 21:12:15.520335   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1970803994e16d3faae7bc3e76d3618644a8f0524dab9edc8387641e206f97cf"
	I0930 21:12:15.572596   73375 logs.go:123] Gathering logs for storage-provisioner [6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55] ...
	I0930 21:12:15.572626   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dcf5ceb365ec82b4a246aabb079e3b8201b4be0bcc79de684db38301620bd55"
	I0930 21:12:15.618087   73375 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:15.618120   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:15.634125   73375 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:15.634151   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:15.744355   73375 logs.go:123] Gathering logs for kube-apiserver [249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122] ...
	I0930 21:12:15.744390   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249f183de7189578042571474011ee863995e80ae75fa1bf6c60b00a2ae12122"
	I0930 21:12:15.799312   73375 logs.go:123] Gathering logs for storage-provisioner [298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e] ...
	I0930 21:12:15.799345   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 298410b231e998a921a1feef92953718a4c0235a58f4b9fe1dfdc4e625dcf18e"
	I0930 21:12:15.838934   73375 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:15.838969   73375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:18.759947   73375 system_pods.go:59] 8 kube-system pods found
	I0930 21:12:18.759976   73375 system_pods.go:61] "coredns-7c65d6cfc9-jg8ph" [46ba2867-485a-4b67-af4b-4de2c607d172] Running
	I0930 21:12:18.759981   73375 system_pods.go:61] "etcd-no-preload-997816" [1def50bb-1f1b-4d25-b797-38d5b782a674] Running
	I0930 21:12:18.759985   73375 system_pods.go:61] "kube-apiserver-no-preload-997816" [67313588-adcb-4d3f-ba8a-4e7a1ea5127b] Running
	I0930 21:12:18.759989   73375 system_pods.go:61] "kube-controller-manager-no-preload-997816" [b471888b-d4e6-4768-a246-f234ffcbf1c6] Running
	I0930 21:12:18.759992   73375 system_pods.go:61] "kube-proxy-klcv8" [133bcd7f-667d-4969-b063-d33e2c8eed0f] Running
	I0930 21:12:18.759995   73375 system_pods.go:61] "kube-scheduler-no-preload-997816" [130a7a05-0889-4562-afc6-bee3ba4970a1] Running
	I0930 21:12:18.760001   73375 system_pods.go:61] "metrics-server-6867b74b74-c2wpn" [2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:12:18.760006   73375 system_pods.go:61] "storage-provisioner" [01617edf-b831-48d3-9002-279b64f6389c] Running
	I0930 21:12:18.760016   73375 system_pods.go:74] duration metric: took 3.875896906s to wait for pod list to return data ...
	I0930 21:12:18.760024   73375 default_sa.go:34] waiting for default service account to be created ...
	I0930 21:12:18.762755   73375 default_sa.go:45] found service account: "default"
	I0930 21:12:18.762777   73375 default_sa.go:55] duration metric: took 2.746721ms for default service account to be created ...
	I0930 21:12:18.762787   73375 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 21:12:18.769060   73375 system_pods.go:86] 8 kube-system pods found
	I0930 21:12:18.769086   73375 system_pods.go:89] "coredns-7c65d6cfc9-jg8ph" [46ba2867-485a-4b67-af4b-4de2c607d172] Running
	I0930 21:12:18.769091   73375 system_pods.go:89] "etcd-no-preload-997816" [1def50bb-1f1b-4d25-b797-38d5b782a674] Running
	I0930 21:12:18.769095   73375 system_pods.go:89] "kube-apiserver-no-preload-997816" [67313588-adcb-4d3f-ba8a-4e7a1ea5127b] Running
	I0930 21:12:18.769099   73375 system_pods.go:89] "kube-controller-manager-no-preload-997816" [b471888b-d4e6-4768-a246-f234ffcbf1c6] Running
	I0930 21:12:18.769104   73375 system_pods.go:89] "kube-proxy-klcv8" [133bcd7f-667d-4969-b063-d33e2c8eed0f] Running
	I0930 21:12:18.769107   73375 system_pods.go:89] "kube-scheduler-no-preload-997816" [130a7a05-0889-4562-afc6-bee3ba4970a1] Running
	I0930 21:12:18.769113   73375 system_pods.go:89] "metrics-server-6867b74b74-c2wpn" [2d6b7210-f9ac-4a66-9a8c-db6e23dbbd82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:12:18.769129   73375 system_pods.go:89] "storage-provisioner" [01617edf-b831-48d3-9002-279b64f6389c] Running
	I0930 21:12:18.769136   73375 system_pods.go:126] duration metric: took 6.344583ms to wait for k8s-apps to be running ...
	I0930 21:12:18.769144   73375 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 21:12:18.769183   73375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:12:18.785488   73375 system_svc.go:56] duration metric: took 16.335135ms WaitForService to wait for kubelet
	I0930 21:12:18.785544   73375 kubeadm.go:582] duration metric: took 4m23.165751441s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:12:18.785572   73375 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:12:18.789308   73375 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:12:18.789340   73375 node_conditions.go:123] node cpu capacity is 2
	I0930 21:12:18.789356   73375 node_conditions.go:105] duration metric: took 3.778609ms to run NodePressure ...
	I0930 21:12:18.789370   73375 start.go:241] waiting for startup goroutines ...
	I0930 21:12:18.789379   73375 start.go:246] waiting for cluster config update ...
	I0930 21:12:18.789394   73375 start.go:255] writing updated cluster config ...
	I0930 21:12:18.789688   73375 ssh_runner.go:195] Run: rm -f paused
	I0930 21:12:18.837384   73375 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 21:12:18.839699   73375 out.go:177] * Done! kubectl is now configured to use "no-preload-997816" cluster and "default" namespace by default
	I0930 21:12:16.070108   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:18.569568   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:15.308534   73707 pod_ready.go:103] pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:15.308581   73707 pod_ready.go:82] duration metric: took 4m0.007893146s for pod "metrics-server-6867b74b74-txb2j" in "kube-system" namespace to be "Ready" ...
	E0930 21:12:15.308595   73707 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0930 21:12:15.308605   73707 pod_ready.go:39] duration metric: took 4m2.806797001s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:12:15.308621   73707 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:12:15.308657   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:15.308722   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:15.353287   73707 cri.go:89] found id: "f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:15.353348   73707 cri.go:89] found id: ""
	I0930 21:12:15.353359   73707 logs.go:276] 1 containers: [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140]
	I0930 21:12:15.353416   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.357602   73707 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:15.357696   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:15.399289   73707 cri.go:89] found id: "7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:15.399325   73707 cri.go:89] found id: ""
	I0930 21:12:15.399332   73707 logs.go:276] 1 containers: [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711]
	I0930 21:12:15.399377   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.404757   73707 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:15.404832   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:15.454396   73707 cri.go:89] found id: "ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:15.454423   73707 cri.go:89] found id: ""
	I0930 21:12:15.454433   73707 logs.go:276] 1 containers: [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49]
	I0930 21:12:15.454493   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.458660   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:15.458743   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:15.493941   73707 cri.go:89] found id: "0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:15.493971   73707 cri.go:89] found id: ""
	I0930 21:12:15.493982   73707 logs.go:276] 1 containers: [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4]
	I0930 21:12:15.494055   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.498541   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:15.498628   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:15.535354   73707 cri.go:89] found id: "5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:15.535385   73707 cri.go:89] found id: ""
	I0930 21:12:15.535395   73707 logs.go:276] 1 containers: [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8]
	I0930 21:12:15.535454   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.540097   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:15.540168   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:15.583969   73707 cri.go:89] found id: "d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:15.583996   73707 cri.go:89] found id: ""
	I0930 21:12:15.584003   73707 logs.go:276] 1 containers: [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8]
	I0930 21:12:15.584051   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.589193   73707 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:15.589260   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:15.629413   73707 cri.go:89] found id: ""
	I0930 21:12:15.629440   73707 logs.go:276] 0 containers: []
	W0930 21:12:15.629449   73707 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:15.629454   73707 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:15.629506   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:15.670129   73707 cri.go:89] found id: "3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:15.670160   73707 cri.go:89] found id: "1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:15.670166   73707 cri.go:89] found id: ""
	I0930 21:12:15.670175   73707 logs.go:276] 2 containers: [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342]
	I0930 21:12:15.670237   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.674227   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:15.678252   73707 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:15.678276   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:15.758280   73707 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:15.758319   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:15.778191   73707 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:15.778222   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:15.930379   73707 logs.go:123] Gathering logs for coredns [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49] ...
	I0930 21:12:15.930422   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:15.966732   73707 logs.go:123] Gathering logs for storage-provisioner [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd] ...
	I0930 21:12:15.966759   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:16.004304   73707 logs.go:123] Gathering logs for storage-provisioner [1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342] ...
	I0930 21:12:16.004337   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:16.043705   73707 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:16.043733   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:16.600173   73707 logs.go:123] Gathering logs for container status ...
	I0930 21:12:16.600210   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:16.651837   73707 logs.go:123] Gathering logs for kube-apiserver [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140] ...
	I0930 21:12:16.651868   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:16.695122   73707 logs.go:123] Gathering logs for etcd [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711] ...
	I0930 21:12:16.695155   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:16.737622   73707 logs.go:123] Gathering logs for kube-scheduler [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4] ...
	I0930 21:12:16.737671   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:16.772913   73707 logs.go:123] Gathering logs for kube-proxy [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8] ...
	I0930 21:12:16.772944   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:16.808196   73707 logs.go:123] Gathering logs for kube-controller-manager [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8] ...
	I0930 21:12:16.808224   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:19.368150   73707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:19.385771   73707 api_server.go:72] duration metric: took 4m14.101602019s to wait for apiserver process to appear ...
	I0930 21:12:19.385798   73707 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:12:19.385831   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:19.385889   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:19.421325   73707 cri.go:89] found id: "f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:19.421354   73707 cri.go:89] found id: ""
	I0930 21:12:19.421364   73707 logs.go:276] 1 containers: [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140]
	I0930 21:12:19.421426   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.428045   73707 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:19.428107   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:19.466034   73707 cri.go:89] found id: "7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:19.466054   73707 cri.go:89] found id: ""
	I0930 21:12:19.466061   73707 logs.go:276] 1 containers: [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711]
	I0930 21:12:19.466102   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.470155   73707 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:19.470222   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:19.504774   73707 cri.go:89] found id: "ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:19.504799   73707 cri.go:89] found id: ""
	I0930 21:12:19.504806   73707 logs.go:276] 1 containers: [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49]
	I0930 21:12:19.504869   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.509044   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:19.509134   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:19.544204   73707 cri.go:89] found id: "0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:19.544228   73707 cri.go:89] found id: ""
	I0930 21:12:19.544235   73707 logs.go:276] 1 containers: [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4]
	I0930 21:12:19.544293   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.549103   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:19.549194   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:19.591381   73707 cri.go:89] found id: "5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:19.591416   73707 cri.go:89] found id: ""
	I0930 21:12:19.591425   73707 logs.go:276] 1 containers: [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8]
	I0930 21:12:19.591472   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.595522   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:19.595621   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:19.634816   73707 cri.go:89] found id: "d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:19.634841   73707 cri.go:89] found id: ""
	I0930 21:12:19.634850   73707 logs.go:276] 1 containers: [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8]
	I0930 21:12:19.634894   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.639391   73707 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:19.639450   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:19.675056   73707 cri.go:89] found id: ""
	I0930 21:12:19.675084   73707 logs.go:276] 0 containers: []
	W0930 21:12:19.675095   73707 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:19.675102   73707 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:19.675159   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:19.708641   73707 cri.go:89] found id: "3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:19.708666   73707 cri.go:89] found id: "1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:19.708672   73707 cri.go:89] found id: ""
	I0930 21:12:19.708682   73707 logs.go:276] 2 containers: [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342]
	I0930 21:12:19.708738   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.712636   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:19.716653   73707 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:19.716680   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:19.785159   73707 logs.go:123] Gathering logs for kube-proxy [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8] ...
	I0930 21:12:19.785203   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:19.823462   73707 logs.go:123] Gathering logs for storage-provisioner [1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342] ...
	I0930 21:12:19.823490   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:19.856776   73707 logs.go:123] Gathering logs for coredns [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49] ...
	I0930 21:12:19.856808   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:19.893919   73707 logs.go:123] Gathering logs for kube-scheduler [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4] ...
	I0930 21:12:19.893948   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:19.930932   73707 logs.go:123] Gathering logs for kube-controller-manager [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8] ...
	I0930 21:12:19.930978   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:19.988120   73707 logs.go:123] Gathering logs for storage-provisioner [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd] ...
	I0930 21:12:19.988164   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:20.027576   73707 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:20.027618   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:20.041523   73707 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:20.041557   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:20.157598   73707 logs.go:123] Gathering logs for kube-apiserver [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140] ...
	I0930 21:12:20.157630   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:20.213353   73707 logs.go:123] Gathering logs for etcd [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711] ...
	I0930 21:12:20.213384   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:20.254502   73707 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:20.254533   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
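	The runs above collect per-component logs by listing container IDs with crictl and then tailing each container, plus the kubelet and CRI-O unit logs from journald. A minimal sketch of doing the same collection by hand, assuming the profile name taken from this log and that crictl and journalctl are present on the node (they are in these runs):

	    #!/usr/bin/env bash
	    # Reproduce the per-component log gathering shown above for one profile.
	    PROFILE=default-k8s-diff-port-291511
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
	      echo "== ${name} =="
	      # crictl ps --quiet prints one container ID per line for the matching name.
	      for id in $(minikube -p "$PROFILE" ssh -- sudo crictl ps -a --quiet --name="$name"); do
	        minikube -p "$PROFILE" ssh -- sudo /usr/bin/crictl logs --tail 400 "$id"
	      done
	    done
	    # Node-level logs, invoked the same way the runner does above:
	    minikube -p "$PROFILE" ssh -- "sudo journalctl -u kubelet -n 400"
	    minikube -p "$PROFILE" ssh -- "sudo journalctl -u crio -n 400"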
	I0930 21:12:17.824584   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:12:17.824623   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:17.862613   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:17.862643   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:17.915954   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:17.915992   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:17.929824   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:17.929853   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:12:17.999697   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
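	The "describe nodes" step above fails with connection refused on localhost:8443 because the apiserver for this profile is not up at this point (kubeadm has not yet re-initialized the control plane). A small triage sketch, run on the affected node via minikube ssh; the use of ss is an assumption about the node image, not something shown in this log:

	    # Is anything listening on the apiserver port, and does an apiserver container exist at all?
	    sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"
	    sudo crictl ps -a --name=kube-apiserver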
	I0930 21:12:20.500449   73900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:12:20.514042   73900 kubeadm.go:597] duration metric: took 4m1.91059878s to restartPrimaryControlPlane
	W0930 21:12:20.514119   73900 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0930 21:12:20.514158   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0930 21:12:21.675376   73900 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.161176988s)
	I0930 21:12:21.675465   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:12:21.689467   73900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:12:21.698504   73900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:12:21.708418   73900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:12:21.708437   73900 kubeadm.go:157] found existing configuration files:
	
	I0930 21:12:21.708483   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:12:21.716960   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:12:21.717019   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:12:21.727610   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:12:21.736212   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:12:21.736275   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:12:21.745512   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:12:21.754299   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:12:21.754366   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:12:21.763724   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:12:21.772521   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:12:21.772595   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
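	The grep/rm cycle above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so kubeadm can regenerate it. A sketch of the same check, using the endpoint string from this log:

	    ENDPOINT="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      path="/etc/kubernetes/$f"
	      if sudo grep -q "$ENDPOINT" "$path" 2>/dev/null; then
	        echo "keeping $path"
	      else
	        echo "removing stale $path"   # mirrors the 'sudo rm -f' calls above
	        sudo rm -f "$path"
	      fi
	    done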
	I0930 21:12:21.782980   73900 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 21:12:21.850463   73900 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0930 21:12:21.850558   73900 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 21:12:21.991521   73900 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 21:12:21.991706   73900 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 21:12:21.991849   73900 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0930 21:12:22.174876   73900 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 21:12:22.177037   73900 out.go:235]   - Generating certificates and keys ...
	I0930 21:12:22.177155   73900 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 21:12:22.177253   73900 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 21:12:22.177379   73900 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 21:12:22.178789   73900 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 21:12:22.178860   73900 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 21:12:22.178907   73900 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 21:12:22.178961   73900 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 21:12:22.179017   73900 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 21:12:22.179139   73900 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 21:12:22.179247   73900 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 21:12:22.179310   73900 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 21:12:22.179398   73900 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 21:12:22.253256   73900 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 21:12:22.661237   73900 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 21:12:22.947987   73900 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 21:12:23.170995   73900 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 21:12:23.184583   73900 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 21:12:23.185770   73900 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 21:12:23.185813   73900 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 21:12:23.334769   73900 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 21:12:21.069777   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:23.070328   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:20.696951   73707 logs.go:123] Gathering logs for container status ...
	I0930 21:12:20.696989   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:23.236734   73707 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8444/healthz ...
	I0930 21:12:23.241215   73707 api_server.go:279] https://192.168.50.2:8444/healthz returned 200:
	ok
	I0930 21:12:23.242629   73707 api_server.go:141] control plane version: v1.31.1
	I0930 21:12:23.242651   73707 api_server.go:131] duration metric: took 3.856847284s to wait for apiserver health ...
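	The health wait above polls https://192.168.50.2:8444/healthz until it returns 200 (port 8444 because this is the default-k8s-diff-port profile). The same probe by hand; passing -k assumes anonymous access to /healthz is allowed, otherwise the profile's client certificate would be needed:

	    curl -ks https://192.168.50.2:8444/healthz ; echo
	    # Expected output once the control plane is up, matching the log: ok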
	I0930 21:12:23.242660   73707 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:12:23.242680   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:12:23.242724   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:12:23.279601   73707 cri.go:89] found id: "f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:23.279626   73707 cri.go:89] found id: ""
	I0930 21:12:23.279633   73707 logs.go:276] 1 containers: [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140]
	I0930 21:12:23.279692   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.283900   73707 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:12:23.283977   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:12:23.320360   73707 cri.go:89] found id: "7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:23.320397   73707 cri.go:89] found id: ""
	I0930 21:12:23.320410   73707 logs.go:276] 1 containers: [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711]
	I0930 21:12:23.320472   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.324745   73707 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:12:23.324825   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:12:23.368001   73707 cri.go:89] found id: "ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:23.368024   73707 cri.go:89] found id: ""
	I0930 21:12:23.368034   73707 logs.go:276] 1 containers: [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49]
	I0930 21:12:23.368095   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.372001   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:12:23.372077   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:12:23.408203   73707 cri.go:89] found id: "0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:23.408234   73707 cri.go:89] found id: ""
	I0930 21:12:23.408242   73707 logs.go:276] 1 containers: [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4]
	I0930 21:12:23.408299   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.412328   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:12:23.412397   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:12:23.462142   73707 cri.go:89] found id: "5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:23.462173   73707 cri.go:89] found id: ""
	I0930 21:12:23.462183   73707 logs.go:276] 1 containers: [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8]
	I0930 21:12:23.462247   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.466257   73707 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:12:23.466336   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:12:23.509075   73707 cri.go:89] found id: "d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:23.509098   73707 cri.go:89] found id: ""
	I0930 21:12:23.509109   73707 logs.go:276] 1 containers: [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8]
	I0930 21:12:23.509169   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.513362   73707 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:12:23.513441   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:12:23.553711   73707 cri.go:89] found id: ""
	I0930 21:12:23.553738   73707 logs.go:276] 0 containers: []
	W0930 21:12:23.553746   73707 logs.go:278] No container was found matching "kindnet"
	I0930 21:12:23.553752   73707 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0930 21:12:23.553797   73707 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 21:12:23.599596   73707 cri.go:89] found id: "3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:23.599629   73707 cri.go:89] found id: "1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:23.599635   73707 cri.go:89] found id: ""
	I0930 21:12:23.599644   73707 logs.go:276] 2 containers: [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342]
	I0930 21:12:23.599699   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.603589   73707 ssh_runner.go:195] Run: which crictl
	I0930 21:12:23.607827   73707 logs.go:123] Gathering logs for dmesg ...
	I0930 21:12:23.607855   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:12:23.621046   73707 logs.go:123] Gathering logs for etcd [7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711] ...
	I0930 21:12:23.621069   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e53b1ee3c16bdddacbdb26e4441c3799b61c50bb31e749d86a320d0f2cac711"
	I0930 21:12:23.664703   73707 logs.go:123] Gathering logs for storage-provisioner [3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd] ...
	I0930 21:12:23.664735   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f81706851d1c407485d342cdafba4ac86d45339bbf5bef49f7762340132fabd"
	I0930 21:12:23.700614   73707 logs.go:123] Gathering logs for kube-scheduler [0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4] ...
	I0930 21:12:23.700644   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a84556ba10731a149ea6ef632e127b49de7ada57251cf0e2e31c9b9f924c8e4"
	I0930 21:12:23.738113   73707 logs.go:123] Gathering logs for kube-proxy [5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8] ...
	I0930 21:12:23.738143   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e4ebb7ceb7e6d99ec8e1a785c0f46a2b20f712bd0853437f0770312dfa79de8"
	I0930 21:12:23.775706   73707 logs.go:123] Gathering logs for kube-controller-manager [d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8] ...
	I0930 21:12:23.775733   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1119782e608c5718c8630f28541cbbdc0dbedf3fea59112ab5e3a85cc628fa8"
	I0930 21:12:23.840419   73707 logs.go:123] Gathering logs for storage-provisioner [1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342] ...
	I0930 21:12:23.840454   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1822eaafdd4d932b5a3c51aa91ae3b7f3eb419adf2cf3db46fd50a7615e56342"
	I0930 21:12:23.876827   73707 logs.go:123] Gathering logs for kubelet ...
	I0930 21:12:23.876860   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:12:23.943636   73707 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:12:23.943675   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 21:12:24.052729   73707 logs.go:123] Gathering logs for kube-apiserver [f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140] ...
	I0930 21:12:24.052763   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f197afcf3b28be59ec8d9a2971ea16ddd5dc8f1c93d1871c7d16f414f1f58140"
	I0930 21:12:24.106526   73707 logs.go:123] Gathering logs for coredns [ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49] ...
	I0930 21:12:24.106556   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec71e052062dcb70cd6fb7b71a727f8e85ae46fe07ba21c7e6c9d2f835faef49"
	I0930 21:12:24.146914   73707 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:12:24.146941   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:12:24.527753   73707 logs.go:123] Gathering logs for container status ...
	I0930 21:12:24.527804   73707 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 21:12:27.077689   73707 system_pods.go:59] 8 kube-system pods found
	I0930 21:12:27.077721   73707 system_pods.go:61] "coredns-7c65d6cfc9-hdjjq" [5672cd58-4d3f-409e-b279-f4027fe09aea] Running
	I0930 21:12:27.077726   73707 system_pods.go:61] "etcd-default-k8s-diff-port-291511" [228b61a2-a110-4029-96e5-950e44f5290f] Running
	I0930 21:12:27.077731   73707 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-291511" [a6991ee1-6c61-49b5-adb5-fb6175386bfe] Running
	I0930 21:12:27.077739   73707 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-291511" [4ba3f2a2-ac38-4483-bbd0-f21d934d97d1] Running
	I0930 21:12:27.077744   73707 system_pods.go:61] "kube-proxy-kwp22" [87e5295f-3aaa-4222-a61a-942354f79f9b] Running
	I0930 21:12:27.077749   73707 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-291511" [b03fc09c-ddee-4593-9be5-8117892932f5] Running
	I0930 21:12:27.077759   73707 system_pods.go:61] "metrics-server-6867b74b74-txb2j" [6f0ec8d2-5528-4f70-807c-42cbabae23bb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:12:27.077766   73707 system_pods.go:61] "storage-provisioner" [32053345-1ff9-45b1-aa70-e746926b305d] Running
	I0930 21:12:27.077774   73707 system_pods.go:74] duration metric: took 3.835107861s to wait for pod list to return data ...
	I0930 21:12:27.077783   73707 default_sa.go:34] waiting for default service account to be created ...
	I0930 21:12:27.082269   73707 default_sa.go:45] found service account: "default"
	I0930 21:12:27.082292   73707 default_sa.go:55] duration metric: took 4.502111ms for default service account to be created ...
	I0930 21:12:27.082299   73707 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 21:12:27.086738   73707 system_pods.go:86] 8 kube-system pods found
	I0930 21:12:27.086764   73707 system_pods.go:89] "coredns-7c65d6cfc9-hdjjq" [5672cd58-4d3f-409e-b279-f4027fe09aea] Running
	I0930 21:12:27.086770   73707 system_pods.go:89] "etcd-default-k8s-diff-port-291511" [228b61a2-a110-4029-96e5-950e44f5290f] Running
	I0930 21:12:27.086775   73707 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-291511" [a6991ee1-6c61-49b5-adb5-fb6175386bfe] Running
	I0930 21:12:27.086781   73707 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-291511" [4ba3f2a2-ac38-4483-bbd0-f21d934d97d1] Running
	I0930 21:12:27.086784   73707 system_pods.go:89] "kube-proxy-kwp22" [87e5295f-3aaa-4222-a61a-942354f79f9b] Running
	I0930 21:12:27.086788   73707 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-291511" [b03fc09c-ddee-4593-9be5-8117892932f5] Running
	I0930 21:12:27.086796   73707 system_pods.go:89] "metrics-server-6867b74b74-txb2j" [6f0ec8d2-5528-4f70-807c-42cbabae23bb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:12:27.086803   73707 system_pods.go:89] "storage-provisioner" [32053345-1ff9-45b1-aa70-e746926b305d] Running
	I0930 21:12:27.086811   73707 system_pods.go:126] duration metric: took 4.506701ms to wait for k8s-apps to be running ...
	I0930 21:12:27.086820   73707 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 21:12:27.086868   73707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:12:27.102286   73707 system_svc.go:56] duration metric: took 15.455734ms WaitForService to wait for kubelet
	I0930 21:12:27.102325   73707 kubeadm.go:582] duration metric: took 4m21.818162682s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:12:27.102346   73707 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:12:27.105332   73707 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:12:27.105354   73707 node_conditions.go:123] node cpu capacity is 2
	I0930 21:12:27.105364   73707 node_conditions.go:105] duration metric: took 3.013328ms to run NodePressure ...
	I0930 21:12:27.105375   73707 start.go:241] waiting for startup goroutines ...
	I0930 21:12:27.105382   73707 start.go:246] waiting for cluster config update ...
	I0930 21:12:27.105393   73707 start.go:255] writing updated cluster config ...
	I0930 21:12:27.105669   73707 ssh_runner.go:195] Run: rm -f paused
	I0930 21:12:27.156804   73707 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 21:12:27.158887   73707 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-291511" cluster and "default" namespace by default
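	With the profile reported as Done above, the cluster can be spot-checked directly; the context name matching the profile name is minikube's usual convention:

	    kubectl --context default-k8s-diff-port-291511 get nodes -o wide
	    kubectl --context default-k8s-diff-port-291511 -n kube-system get pods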
	I0930 21:12:23.336604   73900 out.go:235]   - Booting up control plane ...
	I0930 21:12:23.336747   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 21:12:23.345737   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 21:12:23.346784   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 21:12:23.347559   73900 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 21:12:23.351009   73900 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0930 21:12:25.568654   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:27.569042   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:29.570978   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:32.069065   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:34.069347   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:36.568228   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:38.569351   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:40.569552   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:43.069456   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:45.569254   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:47.569647   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:49.569997   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:52.069284   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:54.069870   73256 pod_ready.go:103] pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace has status "Ready":"False"
	I0930 21:12:54.563572   73256 pod_ready.go:82] duration metric: took 4m0.000782781s for pod "metrics-server-6867b74b74-hkp9m" in "kube-system" namespace to be "Ready" ...
	E0930 21:12:54.563605   73256 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0930 21:12:54.563620   73256 pod_ready.go:39] duration metric: took 4m9.49309261s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
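	The wait above gives up after 4m0s because metrics-server never reports Ready (its containers stay unready, as also seen for the -txb2j replica earlier in this log). A sketch of how one might inspect the stuck pod; the context name is assumed to match this process's profile, embed-certs-256103, and the k8s-app=metrics-server label and metrics-server Deployment name are the ones the addon normally uses:

	    kubectl --context embed-certs-256103 -n kube-system get pods -l k8s-app=metrics-server -o wide
	    kubectl --context embed-certs-256103 -n kube-system describe pod metrics-server-6867b74b74-hkp9m
	    kubectl --context embed-certs-256103 -n kube-system logs deploy/metrics-server --all-containers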
	I0930 21:12:54.563643   73256 kubeadm.go:597] duration metric: took 4m18.399318281s to restartPrimaryControlPlane
	W0930 21:12:54.563698   73256 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0930 21:12:54.563721   73256 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0930 21:13:03.351822   73900 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0930 21:13:03.352632   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:03.352833   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:13:08.353230   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:08.353429   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:13:20.634441   73256 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.070691776s)
	I0930 21:13:20.634529   73256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:13:20.650312   73256 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 21:13:20.661782   73256 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:13:20.671436   73256 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:13:20.671463   73256 kubeadm.go:157] found existing configuration files:
	
	I0930 21:13:20.671504   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:13:20.681860   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:13:20.681934   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:13:20.692529   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:13:20.701507   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:13:20.701585   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:13:20.711211   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:13:20.721856   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:13:20.721928   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:13:20.733194   73256 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:13:20.743887   73256 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:13:20.743955   73256 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:13:20.753546   73256 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 21:13:20.799739   73256 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 21:13:20.799812   73256 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 21:13:20.906464   73256 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 21:13:20.906569   73256 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 21:13:20.906647   73256 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 21:13:20.919451   73256 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 21:13:20.921440   73256 out.go:235]   - Generating certificates and keys ...
	I0930 21:13:20.921550   73256 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 21:13:20.921645   73256 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 21:13:20.921758   73256 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 21:13:20.921845   73256 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 21:13:20.921945   73256 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 21:13:20.922021   73256 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 21:13:20.922117   73256 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 21:13:20.922190   73256 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 21:13:20.922262   73256 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 21:13:20.922336   73256 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 21:13:20.922370   73256 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 21:13:20.922459   73256 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 21:13:21.079731   73256 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 21:13:21.214199   73256 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 21:13:21.344405   73256 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 21:13:21.605006   73256 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 21:13:21.718432   73256 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 21:13:21.718967   73256 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 21:13:21.723434   73256 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 21:13:18.354150   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:18.354468   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:13:21.725304   73256 out.go:235]   - Booting up control plane ...
	I0930 21:13:21.725435   73256 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 21:13:21.725526   73256 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 21:13:21.725637   73256 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 21:13:21.743582   73256 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 21:13:21.749533   73256 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 21:13:21.749605   73256 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 21:13:21.873716   73256 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 21:13:21.873867   73256 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 21:13:22.375977   73256 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.402537ms
	I0930 21:13:22.376098   73256 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 21:13:27.379510   73256 kubeadm.go:310] [api-check] The API server is healthy after 5.001265494s
	I0930 21:13:27.392047   73256 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 21:13:27.409550   73256 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 21:13:27.447693   73256 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 21:13:27.447896   73256 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-256103 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 21:13:27.462338   73256 kubeadm.go:310] [bootstrap-token] Using token: k5ffj3.6sqmy7prwrlhrg7s
	I0930 21:13:27.463967   73256 out.go:235]   - Configuring RBAC rules ...
	I0930 21:13:27.464076   73256 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 21:13:27.472107   73256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 21:13:27.481172   73256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 21:13:27.485288   73256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 21:13:27.492469   73256 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 21:13:27.496822   73256 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 21:13:27.789372   73256 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 21:13:28.210679   73256 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 21:13:28.784869   73256 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 21:13:28.785859   73256 kubeadm.go:310] 
	I0930 21:13:28.785954   73256 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 21:13:28.785967   73256 kubeadm.go:310] 
	I0930 21:13:28.786045   73256 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 21:13:28.786077   73256 kubeadm.go:310] 
	I0930 21:13:28.786121   73256 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 21:13:28.786219   73256 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 21:13:28.786286   73256 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 21:13:28.786304   73256 kubeadm.go:310] 
	I0930 21:13:28.786395   73256 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 21:13:28.786405   73256 kubeadm.go:310] 
	I0930 21:13:28.786464   73256 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 21:13:28.786474   73256 kubeadm.go:310] 
	I0930 21:13:28.786546   73256 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 21:13:28.786658   73256 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 21:13:28.786754   73256 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 21:13:28.786763   73256 kubeadm.go:310] 
	I0930 21:13:28.786870   73256 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 21:13:28.786991   73256 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 21:13:28.787000   73256 kubeadm.go:310] 
	I0930 21:13:28.787122   73256 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k5ffj3.6sqmy7prwrlhrg7s \
	I0930 21:13:28.787240   73256 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a \
	I0930 21:13:28.787274   73256 kubeadm.go:310] 	--control-plane 
	I0930 21:13:28.787290   73256 kubeadm.go:310] 
	I0930 21:13:28.787415   73256 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 21:13:28.787425   73256 kubeadm.go:310] 
	I0930 21:13:28.787547   73256 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k5ffj3.6sqmy7prwrlhrg7s \
	I0930 21:13:28.787713   73256 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:352dfd8425761db90b29426d7b39e0cd78050576f8c3b54f4769ee4dc405f73a 
	I0930 21:13:28.788805   73256 kubeadm.go:310] W0930 21:13:20.776526    2530 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 21:13:28.789058   73256 kubeadm.go:310] W0930 21:13:20.777323    2530 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 21:13:28.789158   73256 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
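	kubeadm init succeeded above but printed two deprecation warnings and a note that the kubelet service is not enabled. A sketch of acting on them on the node, using the config path and PATH construction from this run; the migrate flags are the ones kubeadm itself suggests in the warnings, and the new-config filename is illustrative:

	    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm config migrate \
	      --old-config /var/tmp/minikube/kubeadm.yaml \
	      --new-config /var/tmp/minikube/kubeadm-migrated.yaml   # output filename is illustrative
	    sudo systemctl enable kubelet.service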
	I0930 21:13:28.789178   73256 cni.go:84] Creating CNI manager for ""
	I0930 21:13:28.789187   73256 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 21:13:28.791049   73256 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 21:13:28.792381   73256 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 21:13:28.802872   73256 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
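	The 496-byte conflist copied above is not shown in this log. An illustrative, assumed example of the general shape of a bridge CNI config at that path (plugin mix, subnet, and names are not taken from this run):

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "isDefaultGateway": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF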
	I0930 21:13:28.819952   73256 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 21:13:28.820054   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:28.820070   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-256103 minikube.k8s.io/updated_at=2024_09_30T21_13_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a8859b2b27cf43a9dac1b7d9f7a3a2e21d50b022 minikube.k8s.io/name=embed-certs-256103 minikube.k8s.io/primary=true
	I0930 21:13:28.859770   73256 ops.go:34] apiserver oom_adj: -16
	I0930 21:13:29.026274   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:29.526992   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:30.026700   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:30.526962   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:31.027165   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:31.526632   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:32.027019   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:32.526522   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:33.026739   73256 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 21:13:33.116028   73256 kubeadm.go:1113] duration metric: took 4.296036786s to wait for elevateKubeSystemPrivileges
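	The repeated "kubectl get sa default" calls above poll until the default ServiceAccount exists; the duration metric records how long that took. The same wait expressed as a small loop, with the context name per minikube's profile-to-context convention:

	    until kubectl --context embed-certs-256103 -n default get sa default >/dev/null 2>&1; do
	      sleep 0.5
	    done
	    echo "default service account present"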
	I0930 21:13:33.116067   73256 kubeadm.go:394] duration metric: took 4m57.005787187s to StartCluster
	I0930 21:13:33.116088   73256 settings.go:142] acquiring lock: {Name:mkd7167558dcf9e4e74b1ba89686af996800a9f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:13:33.116175   73256 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 21:13:33.117855   73256 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/kubeconfig: {Name:mk7f7a50a5e604547351c0e84ead36c7161a29de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 21:13:33.118142   73256 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 21:13:33.118263   73256 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 21:13:33.118420   73256 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-256103"
	I0930 21:13:33.118373   73256 config.go:182] Loaded profile config "embed-certs-256103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 21:13:33.118446   73256 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-256103"
	I0930 21:13:33.118442   73256 addons.go:69] Setting default-storageclass=true in profile "embed-certs-256103"
	W0930 21:13:33.118453   73256 addons.go:243] addon storage-provisioner should already be in state true
	I0930 21:13:33.118464   73256 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-256103"
	I0930 21:13:33.118482   73256 host.go:66] Checking if "embed-certs-256103" exists ...
	I0930 21:13:33.118515   73256 addons.go:69] Setting metrics-server=true in profile "embed-certs-256103"
	I0930 21:13:33.118554   73256 addons.go:234] Setting addon metrics-server=true in "embed-certs-256103"
	W0930 21:13:33.118564   73256 addons.go:243] addon metrics-server should already be in state true
	I0930 21:13:33.118594   73256 host.go:66] Checking if "embed-certs-256103" exists ...
	I0930 21:13:33.118807   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.118840   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.118880   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.118926   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.118941   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.118965   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.120042   73256 out.go:177] * Verifying Kubernetes components...
	I0930 21:13:33.121706   73256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 21:13:33.136554   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36203
	I0930 21:13:33.137096   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.137304   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44465
	I0930 21:13:33.137664   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.137696   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.137789   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.138013   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.138176   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:13:33.138317   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.138336   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.139163   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37389
	I0930 21:13:33.139176   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.139733   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.139903   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.139955   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.140284   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.140311   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.140780   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.141336   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.141375   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.141814   73256 addons.go:234] Setting addon default-storageclass=true in "embed-certs-256103"
	W0930 21:13:33.141832   73256 addons.go:243] addon default-storageclass should already be in state true
	I0930 21:13:33.141857   73256 host.go:66] Checking if "embed-certs-256103" exists ...
	I0930 21:13:33.142143   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.142177   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.161937   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I0930 21:13:33.162096   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33657
	I0930 21:13:33.162249   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42531
	I0930 21:13:33.162491   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.162536   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.162837   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.163017   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.163028   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.163030   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.163045   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.163254   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.163265   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.163362   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.163417   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.163864   73256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 21:13:33.163899   73256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 21:13:33.164101   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.164154   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:13:33.164356   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:13:33.166460   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:13:33.166673   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:13:33.168464   73256 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 21:13:33.168631   73256 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0930 21:13:33.169822   73256 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:13:33.169840   73256 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 21:13:33.169857   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:13:33.169937   73256 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 21:13:33.169947   73256 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 21:13:33.169963   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:13:33.174613   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.174653   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.175236   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:13:33.175265   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.175372   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:13:33.175405   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.175667   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:13:33.176048   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:13:33.176051   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:13:33.176299   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:13:33.176299   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:13:33.176476   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:13:33.176684   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:13:33.176685   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:13:33.180520   73256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43015
	I0930 21:13:33.180968   73256 main.go:141] libmachine: () Calling .GetVersion
	I0930 21:13:33.181564   73256 main.go:141] libmachine: Using API Version  1
	I0930 21:13:33.181588   73256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 21:13:33.181938   73256 main.go:141] libmachine: () Calling .GetMachineName
	I0930 21:13:33.182136   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetState
	I0930 21:13:33.183803   73256 main.go:141] libmachine: (embed-certs-256103) Calling .DriverName
	I0930 21:13:33.184001   73256 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 21:13:33.184017   73256 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 21:13:33.184035   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHHostname
	I0930 21:13:33.186565   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.186964   73256 main.go:141] libmachine: (embed-certs-256103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:01:01", ip: ""} in network mk-embed-certs-256103: {Iface:virbr1 ExpiryTime:2024-09-30 22:08:21 +0000 UTC Type:0 Mac:52:54:00:7a:01:01 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:embed-certs-256103 Clientid:01:52:54:00:7a:01:01}
	I0930 21:13:33.186996   73256 main.go:141] libmachine: (embed-certs-256103) DBG | domain embed-certs-256103 has defined IP address 192.168.39.90 and MAC address 52:54:00:7a:01:01 in network mk-embed-certs-256103
	I0930 21:13:33.187311   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHPort
	I0930 21:13:33.187481   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHKeyPath
	I0930 21:13:33.187797   73256 main.go:141] libmachine: (embed-certs-256103) Calling .GetSSHUsername
	I0930 21:13:33.187937   73256 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/embed-certs-256103/id_rsa Username:docker}
	I0930 21:13:33.337289   73256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 21:13:33.360186   73256 node_ready.go:35] waiting up to 6m0s for node "embed-certs-256103" to be "Ready" ...
	I0930 21:13:33.372799   73256 node_ready.go:49] node "embed-certs-256103" has status "Ready":"True"
	I0930 21:13:33.372828   73256 node_ready.go:38] duration metric: took 12.601736ms for node "embed-certs-256103" to be "Ready" ...
	I0930 21:13:33.372837   73256 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:13:33.379694   73256 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:33.462144   73256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 21:13:33.500072   73256 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 21:13:33.500102   73256 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0930 21:13:33.524789   73256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 21:13:33.548931   73256 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 21:13:33.548955   73256 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 21:13:33.604655   73256 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:13:33.604682   73256 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 21:13:33.648687   73256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 21:13:34.533493   73256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.008666954s)
	I0930 21:13:34.533555   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.533566   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.533856   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.533870   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.533884   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.533892   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.533900   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.534108   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.534126   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.534149   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.535651   73256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.073475648s)
	I0930 21:13:34.535695   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.535706   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.535926   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.536001   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.536014   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.536030   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.535981   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.537450   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.537470   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.537480   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.564363   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.564394   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.564715   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.564739   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.968266   73256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.319532564s)
	I0930 21:13:34.968330   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.968350   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.968642   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.968665   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.968674   73256 main.go:141] libmachine: Making call to close driver server
	I0930 21:13:34.968673   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.968681   73256 main.go:141] libmachine: (embed-certs-256103) Calling .Close
	I0930 21:13:34.968944   73256 main.go:141] libmachine: Successfully made call to close driver server
	I0930 21:13:34.968969   73256 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 21:13:34.968973   73256 main.go:141] libmachine: (embed-certs-256103) DBG | Closing plugin on server side
	I0930 21:13:34.968979   73256 addons.go:475] Verifying addon metrics-server=true in "embed-certs-256103"
	I0930 21:13:34.970656   73256 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0930 21:13:34.971966   73256 addons.go:510] duration metric: took 1.853709741s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0930 21:13:35.387687   73256 pod_ready.go:103] pod "etcd-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:13:37.388374   73256 pod_ready.go:103] pod "etcd-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:13:39.886425   73256 pod_ready.go:103] pod "etcd-embed-certs-256103" in "kube-system" namespace has status "Ready":"False"
	I0930 21:13:41.885713   73256 pod_ready.go:93] pod "etcd-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.885737   73256 pod_ready.go:82] duration metric: took 8.506004979s for pod "etcd-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.885746   73256 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.891032   73256 pod_ready.go:93] pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.891052   73256 pod_ready.go:82] duration metric: took 5.300379ms for pod "kube-apiserver-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.891061   73256 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.895332   73256 pod_ready.go:93] pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.895349   73256 pod_ready.go:82] duration metric: took 4.282199ms for pod "kube-controller-manager-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.895357   73256 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-glbsg" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.899518   73256 pod_ready.go:93] pod "kube-proxy-glbsg" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.899556   73256 pod_ready.go:82] duration metric: took 4.191815ms for pod "kube-proxy-glbsg" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.899567   73256 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.904184   73256 pod_ready.go:93] pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace has status "Ready":"True"
	I0930 21:13:41.904203   73256 pod_ready.go:82] duration metric: took 4.628533ms for pod "kube-scheduler-embed-certs-256103" in "kube-system" namespace to be "Ready" ...
	I0930 21:13:41.904209   73256 pod_ready.go:39] duration metric: took 8.531361398s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 21:13:41.904221   73256 api_server.go:52] waiting for apiserver process to appear ...
	I0930 21:13:41.904262   73256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 21:13:41.919570   73256 api_server.go:72] duration metric: took 8.801387692s to wait for apiserver process to appear ...
	I0930 21:13:41.919591   73256 api_server.go:88] waiting for apiserver healthz status ...
	I0930 21:13:41.919607   73256 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0930 21:13:41.923810   73256 api_server.go:279] https://192.168.39.90:8443/healthz returned 200:
	ok
	I0930 21:13:41.924633   73256 api_server.go:141] control plane version: v1.31.1
	I0930 21:13:41.924651   73256 api_server.go:131] duration metric: took 5.054857ms to wait for apiserver health ...
	I0930 21:13:41.924659   73256 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 21:13:42.086431   73256 system_pods.go:59] 9 kube-system pods found
	I0930 21:13:42.086468   73256 system_pods.go:61] "coredns-7c65d6cfc9-gt5tt" [165faaf0-866c-4097-9bdb-ed58fe8d7395] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:13:42.086480   73256 system_pods.go:61] "coredns-7c65d6cfc9-sgsbn" [c97fdb50-c6a0-4ef8-8c01-ea45ed18b72a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:13:42.086488   73256 system_pods.go:61] "etcd-embed-certs-256103" [6aac0706-7dbd-4655-b261-68877299d81a] Running
	I0930 21:13:42.086494   73256 system_pods.go:61] "kube-apiserver-embed-certs-256103" [6c8e3157-ec97-4a85-8947-ca7541c19b1c] Running
	I0930 21:13:42.086500   73256 system_pods.go:61] "kube-controller-manager-embed-certs-256103" [1e3f76d1-d343-4127-aad9-8a5a8e589a43] Running
	I0930 21:13:42.086505   73256 system_pods.go:61] "kube-proxy-glbsg" [f68e378f-ce0f-4603-bd8e-93334f04f7a7] Running
	I0930 21:13:42.086510   73256 system_pods.go:61] "kube-scheduler-embed-certs-256103" [29f55c6f-9603-4cd2-a798-0ff2362b7607] Running
	I0930 21:13:42.086518   73256 system_pods.go:61] "metrics-server-6867b74b74-5mhkh" [470424ec-bb66-4d62-904d-0d4ad93fa5bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:13:42.086525   73256 system_pods.go:61] "storage-provisioner" [a07a5a12-7420-4b57-b79d-982f4bb48232] Running
	I0930 21:13:42.086538   73256 system_pods.go:74] duration metric: took 161.870121ms to wait for pod list to return data ...
	I0930 21:13:42.086559   73256 default_sa.go:34] waiting for default service account to be created ...
	I0930 21:13:42.284282   73256 default_sa.go:45] found service account: "default"
	I0930 21:13:42.284307   73256 default_sa.go:55] duration metric: took 197.73827ms for default service account to be created ...
	I0930 21:13:42.284316   73256 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 21:13:42.486445   73256 system_pods.go:86] 9 kube-system pods found
	I0930 21:13:42.486478   73256 system_pods.go:89] "coredns-7c65d6cfc9-gt5tt" [165faaf0-866c-4097-9bdb-ed58fe8d7395] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:13:42.486489   73256 system_pods.go:89] "coredns-7c65d6cfc9-sgsbn" [c97fdb50-c6a0-4ef8-8c01-ea45ed18b72a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 21:13:42.486497   73256 system_pods.go:89] "etcd-embed-certs-256103" [6aac0706-7dbd-4655-b261-68877299d81a] Running
	I0930 21:13:42.486503   73256 system_pods.go:89] "kube-apiserver-embed-certs-256103" [6c8e3157-ec97-4a85-8947-ca7541c19b1c] Running
	I0930 21:13:42.486509   73256 system_pods.go:89] "kube-controller-manager-embed-certs-256103" [1e3f76d1-d343-4127-aad9-8a5a8e589a43] Running
	I0930 21:13:42.486513   73256 system_pods.go:89] "kube-proxy-glbsg" [f68e378f-ce0f-4603-bd8e-93334f04f7a7] Running
	I0930 21:13:42.486518   73256 system_pods.go:89] "kube-scheduler-embed-certs-256103" [29f55c6f-9603-4cd2-a798-0ff2362b7607] Running
	I0930 21:13:42.486526   73256 system_pods.go:89] "metrics-server-6867b74b74-5mhkh" [470424ec-bb66-4d62-904d-0d4ad93fa5bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 21:13:42.486533   73256 system_pods.go:89] "storage-provisioner" [a07a5a12-7420-4b57-b79d-982f4bb48232] Running
	I0930 21:13:42.486542   73256 system_pods.go:126] duration metric: took 202.220435ms to wait for k8s-apps to be running ...
	I0930 21:13:42.486552   73256 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 21:13:42.486601   73256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:13:42.501286   73256 system_svc.go:56] duration metric: took 14.699273ms WaitForService to wait for kubelet
	I0930 21:13:42.501315   73256 kubeadm.go:582] duration metric: took 9.38313627s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 21:13:42.501332   73256 node_conditions.go:102] verifying NodePressure condition ...
	I0930 21:13:42.685282   73256 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 21:13:42.685314   73256 node_conditions.go:123] node cpu capacity is 2
	I0930 21:13:42.685326   73256 node_conditions.go:105] duration metric: took 183.989963ms to run NodePressure ...
	I0930 21:13:42.685346   73256 start.go:241] waiting for startup goroutines ...
	I0930 21:13:42.685356   73256 start.go:246] waiting for cluster config update ...
	I0930 21:13:42.685371   73256 start.go:255] writing updated cluster config ...
	I0930 21:13:42.685664   73256 ssh_runner.go:195] Run: rm -f paused
	I0930 21:13:42.734778   73256 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 21:13:42.736658   73256 out.go:177] * Done! kubectl is now configured to use "embed-certs-256103" cluster and "default" namespace by default
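	(For reference: the run above reports storage-provisioner, default-storageclass, and metrics-server enabled on the "embed-certs-256103" profile. A minimal way to spot-check that state by hand, assuming access to the same profile and kubeconfig — illustrative only, not part of the recorded run:
		minikube addons list -p embed-certs-256103
		kubectl --context embed-certs-256103 -n kube-system get pods -o wide
		kubectl --context embed-certs-256103 top nodes    # only returns data once metrics-server is serving metrics
	The context name matches the profile name because minikube writes it that way when the cluster is created.)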
	I0930 21:13:38.355123   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:13:38.355330   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:14:18.357098   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:14:18.357396   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:14:18.357419   73900 kubeadm.go:310] 
	I0930 21:14:18.357473   73900 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0930 21:14:18.357541   73900 kubeadm.go:310] 		timed out waiting for the condition
	I0930 21:14:18.357554   73900 kubeadm.go:310] 
	I0930 21:14:18.357609   73900 kubeadm.go:310] 	This error is likely caused by:
	I0930 21:14:18.357659   73900 kubeadm.go:310] 		- The kubelet is not running
	I0930 21:14:18.357801   73900 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0930 21:14:18.357817   73900 kubeadm.go:310] 
	I0930 21:14:18.357964   73900 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0930 21:14:18.357996   73900 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0930 21:14:18.358028   73900 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0930 21:14:18.358039   73900 kubeadm.go:310] 
	I0930 21:14:18.358174   73900 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0930 21:14:18.358318   73900 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0930 21:14:18.358331   73900 kubeadm.go:310] 
	I0930 21:14:18.358510   73900 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0930 21:14:18.358646   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0930 21:14:18.358764   73900 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0930 21:14:18.358866   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0930 21:14:18.358882   73900 kubeadm.go:310] 
	I0930 21:14:18.359454   73900 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 21:14:18.359595   73900 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0930 21:14:18.359681   73900 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0930 21:14:18.359797   73900 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
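	(The failed kubeadm init above points at the kubelet. The checks below simply consolidate the commands the kubeadm output itself recommends, run on the affected node, e.g. over minikube ssh; the profile name and container ID are placeholders, not values taken from this run:
		minikube ssh -p <profile>
		sudo systemctl status kubelet --no-pager
		sudo journalctl -xeu kubelet | tail -n 100
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs <CONTAINERID>
	If no kube-* containers show up at all, the failure happened before the static pods were created, which matches the "No container was found" lines later in this log.)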
	
	I0930 21:14:18.359841   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0930 21:14:18.820244   73900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 21:14:18.834938   73900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 21:14:18.844779   73900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 21:14:18.844803   73900 kubeadm.go:157] found existing configuration files:
	
	I0930 21:14:18.844856   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 21:14:18.853738   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 21:14:18.853811   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 21:14:18.863366   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 21:14:18.872108   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 21:14:18.872164   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 21:14:18.881818   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 21:14:18.890916   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 21:14:18.890969   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 21:14:18.900075   73900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 21:14:18.908449   73900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 21:14:18.908520   73900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 21:14:18.917163   73900 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 21:14:18.983181   73900 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0930 21:14:18.983233   73900 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 21:14:19.121356   73900 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 21:14:19.121545   73900 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 21:14:19.121674   73900 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0930 21:14:19.306639   73900 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 21:14:19.309593   73900 out.go:235]   - Generating certificates and keys ...
	I0930 21:14:19.309683   73900 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 21:14:19.309748   73900 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 21:14:19.309870   73900 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 21:14:19.309957   73900 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 21:14:19.310040   73900 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 21:14:19.310119   73900 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 21:14:19.310209   73900 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 21:14:19.310292   73900 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 21:14:19.310404   73900 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 21:14:19.310511   73900 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 21:14:19.310567   73900 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 21:14:19.310654   73900 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 21:14:19.453872   73900 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 21:14:19.621232   73900 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 21:14:19.797694   73900 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 21:14:19.886897   73900 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 21:14:19.909016   73900 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 21:14:19.910536   73900 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 21:14:19.910617   73900 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 21:14:20.052878   73900 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 21:14:20.054739   73900 out.go:235]   - Booting up control plane ...
	I0930 21:14:20.054881   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 21:14:20.068419   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 21:14:20.068512   73900 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 21:14:20.068697   73900 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 21:14:20.072015   73900 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0930 21:15:00.073988   73900 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0930 21:15:00.074795   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:00.075068   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:15:05.075810   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:05.076061   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:15:15.076695   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:15.076928   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:15:35.077652   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:15:35.077862   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:16:15.076816   73900 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 21:16:15.077063   73900 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 21:16:15.077082   73900 kubeadm.go:310] 
	I0930 21:16:15.077136   73900 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0930 21:16:15.077188   73900 kubeadm.go:310] 		timed out waiting for the condition
	I0930 21:16:15.077198   73900 kubeadm.go:310] 
	I0930 21:16:15.077246   73900 kubeadm.go:310] 	This error is likely caused by:
	I0930 21:16:15.077298   73900 kubeadm.go:310] 		- The kubelet is not running
	I0930 21:16:15.077425   73900 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0930 21:16:15.077442   73900 kubeadm.go:310] 
	I0930 21:16:15.077605   73900 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0930 21:16:15.077651   73900 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0930 21:16:15.077710   73900 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0930 21:16:15.077718   73900 kubeadm.go:310] 
	I0930 21:16:15.077851   73900 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0930 21:16:15.077997   73900 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0930 21:16:15.078013   73900 kubeadm.go:310] 
	I0930 21:16:15.078143   73900 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0930 21:16:15.078229   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0930 21:16:15.078309   73900 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0930 21:16:15.078419   73900 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0930 21:16:15.078431   73900 kubeadm.go:310] 
	I0930 21:16:15.079235   73900 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 21:16:15.079365   73900 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0930 21:16:15.079442   73900 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0930 21:16:15.079572   73900 kubeadm.go:394] duration metric: took 7m56.529269567s to StartCluster
	I0930 21:16:15.079639   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 21:16:15.079713   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 21:16:15.122057   73900 cri.go:89] found id: ""
	I0930 21:16:15.122086   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.122098   73900 logs.go:278] No container was found matching "kube-apiserver"
	I0930 21:16:15.122105   73900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 21:16:15.122166   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 21:16:15.156244   73900 cri.go:89] found id: ""
	I0930 21:16:15.156278   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.156289   73900 logs.go:278] No container was found matching "etcd"
	I0930 21:16:15.156297   73900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 21:16:15.156357   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 21:16:15.188952   73900 cri.go:89] found id: ""
	I0930 21:16:15.188977   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.188989   73900 logs.go:278] No container was found matching "coredns"
	I0930 21:16:15.188996   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 21:16:15.189058   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 21:16:15.219400   73900 cri.go:89] found id: ""
	I0930 21:16:15.219427   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.219435   73900 logs.go:278] No container was found matching "kube-scheduler"
	I0930 21:16:15.219441   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 21:16:15.219501   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 21:16:15.252049   73900 cri.go:89] found id: ""
	I0930 21:16:15.252078   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.252086   73900 logs.go:278] No container was found matching "kube-proxy"
	I0930 21:16:15.252093   73900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 21:16:15.252150   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 21:16:15.286560   73900 cri.go:89] found id: ""
	I0930 21:16:15.286594   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.286605   73900 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 21:16:15.286614   73900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 21:16:15.286679   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 21:16:15.319140   73900 cri.go:89] found id: ""
	I0930 21:16:15.319178   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.319187   73900 logs.go:278] No container was found matching "kindnet"
	I0930 21:16:15.319192   73900 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 21:16:15.319245   73900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 21:16:15.351299   73900 cri.go:89] found id: ""
	I0930 21:16:15.351322   73900 logs.go:276] 0 containers: []
	W0930 21:16:15.351330   73900 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0930 21:16:15.351339   73900 logs.go:123] Gathering logs for kubelet ...
	I0930 21:16:15.351350   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 21:16:15.402837   73900 logs.go:123] Gathering logs for dmesg ...
	I0930 21:16:15.402882   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 21:16:15.417111   73900 logs.go:123] Gathering logs for describe nodes ...
	I0930 21:16:15.417140   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 21:16:15.492593   73900 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 21:16:15.492614   73900 logs.go:123] Gathering logs for CRI-O ...
	I0930 21:16:15.492627   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 21:16:15.621646   73900 logs.go:123] Gathering logs for container status ...
	I0930 21:16:15.621681   73900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0930 21:16:15.660480   73900 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0930 21:16:15.660528   73900 out.go:270] * 
	W0930 21:16:15.660580   73900 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0930 21:16:15.660595   73900 out.go:270] * 
	W0930 21:16:15.661387   73900 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 21:16:15.665510   73900 out.go:201] 
	W0930 21:16:15.667332   73900 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0930 21:16:15.667373   73900 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0930 21:16:15.667390   73900 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0930 21:16:15.668812   73900 out.go:201] 
	
	
	==> CRI-O <==
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.398519054Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731648398491021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1856ba4e-d357-4aef-ab3c-8cf27676e4da name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.399336130Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abc75f69-55f6-4cc1-bb79-33037ff47b2e name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.399427873Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abc75f69-55f6-4cc1-bb79-33037ff47b2e name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.399488711Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=abc75f69-55f6-4cc1-bb79-33037ff47b2e name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.430802448Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0b7dec9b-f3fc-4484-ada9-c096106b8e7c name=/runtime.v1.RuntimeService/Version
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.430892365Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0b7dec9b-f3fc-4484-ada9-c096106b8e7c name=/runtime.v1.RuntimeService/Version
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.431880654Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2021c091-f598-4b41-bcbf-38f2fec4d585 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.432309138Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731648432282820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2021c091-f598-4b41-bcbf-38f2fec4d585 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.432789719Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ce14512-3386-4c47-b8c9-92f51fc95f41 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.432853052Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ce14512-3386-4c47-b8c9-92f51fc95f41 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.432890529Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5ce14512-3386-4c47-b8c9-92f51fc95f41 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.463222659Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d24426e4-e5df-4d11-bead-781835081ae0 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.463311419Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d24426e4-e5df-4d11-bead-781835081ae0 name=/runtime.v1.RuntimeService/Version
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.464333206Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd703b03-6741-4855-a184-e9e28a574d71 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.464731380Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731648464707346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd703b03-6741-4855-a184-e9e28a574d71 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.465177268Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cbfe1cf5-a4e4-4312-930e-592691b906d1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.465239810Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cbfe1cf5-a4e4-4312-930e-592691b906d1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.465295088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cbfe1cf5-a4e4-4312-930e-592691b906d1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.494808998Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a0b7b98f-6b58-42b7-9b7c-c409018a40eb name=/runtime.v1.RuntimeService/Version
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.494902260Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a0b7b98f-6b58-42b7-9b7c-c409018a40eb name=/runtime.v1.RuntimeService/Version
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.496256740Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d7fb73b-b317-4167-aa5b-578a9b2805bc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.496819895Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727731648496789847,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d7fb73b-b317-4167-aa5b-578a9b2805bc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.497373396Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92bac148-ab4a-43c3-8ec2-d6489ce5c6c4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.497419909Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92bac148-ab4a-43c3-8ec2-d6489ce5c6c4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 21:27:28 old-k8s-version-621406 crio[636]: time="2024-09-30 21:27:28.497454235Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=92bac148-ab4a-43c3-8ec2-d6489ce5c6c4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep30 21:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055405] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042801] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.194174] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Sep30 21:08] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.574996] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.760000] systemd-fstab-generator[563]: Ignoring "noauto" option for root device
	[  +0.059497] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069559] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.192698] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.144274] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.303445] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +6.753345] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.065939] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.694211] systemd-fstab-generator[1011]: Ignoring "noauto" option for root device
	[ +12.297674] kauditd_printk_skb: 46 callbacks suppressed
	[Sep30 21:12] systemd-fstab-generator[5042]: Ignoring "noauto" option for root device
	[Sep30 21:14] systemd-fstab-generator[5322]: Ignoring "noauto" option for root device
	[  +0.065961] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:27:28 up 19 min,  0 users,  load average: 0.00, 0.02, 0.02
	Linux old-k8s-version-621406 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]: goroutine 159 [chan receive]:
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*sharedProcessor).run(0xc0006bf730, 0xc0009b6f60)
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:628 +0x53
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0009c4530, 0xc0009dc260)
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]: goroutine 160 [chan receive]:
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc000815560)
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]: goroutine 161 [select]:
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c85ef0, 0x4f0ac20, 0xc000425590, 0x1, 0xc0001000c0)
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00057cd20, 0xc0001000c0)
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0009c4560, 0xc0009dc320)
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 30 21:27:28 old-k8s-version-621406 kubelet[6800]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-621406 -n old-k8s-version-621406
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-621406 -n old-k8s-version-621406: exit status 2 (217.397436ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-621406" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (127.35s)
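The kubeadm output captured above already spells out the triage path for a kubelet that never becomes healthy. A minimal sketch of those steps, assuming the profile name used in this report (old-k8s-version-621406) and CRI-O's default socket path; the last command is only the retry that minikube itself suggests in the log, not a verified fix:

    # on the node, e.g. after: minikube ssh -p old-k8s-version-621406
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet --no-pager | tail -n 100
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs <CONTAINERID>
    # back on the host, the suggestion printed by minikube:
    minikube start -p old-k8s-version-621406 --extra-config=kubelet.cgroup-driver=systemd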

                                                
                                    

Test pass (242/311)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 25.67
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 12.47
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.6
22 TestOffline 58.78
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 133.1
31 TestAddons/serial/GCPAuth/Namespaces 3.02
35 TestAddons/parallel/InspektorGadget 11.83
38 TestAddons/parallel/CSI 58.7
39 TestAddons/parallel/Headlamp 20.33
40 TestAddons/parallel/CloudSpanner 5.54
41 TestAddons/parallel/LocalPath 56.22
42 TestAddons/parallel/NvidiaDevicePlugin 5.5
43 TestAddons/parallel/Yakd 11.85
44 TestAddons/StoppedEnableDisable 7.56
45 TestCertOptions 46.38
46 TestCertExpiration 291.96
48 TestForceSystemdFlag 47.21
49 TestForceSystemdEnv 71.94
51 TestKVMDriverInstallOrUpdate 4.43
55 TestErrorSpam/setup 43.39
56 TestErrorSpam/start 0.35
57 TestErrorSpam/status 0.74
58 TestErrorSpam/pause 1.61
59 TestErrorSpam/unpause 1.65
60 TestErrorSpam/stop 5.1
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 81.32
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 45.23
67 TestFunctional/serial/KubeContext 0.05
68 TestFunctional/serial/KubectlGetPods 0.08
71 TestFunctional/serial/CacheCmd/cache/add_remote 4.06
72 TestFunctional/serial/CacheCmd/cache/add_local 2.17
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.72
77 TestFunctional/serial/CacheCmd/cache/delete 0.09
78 TestFunctional/serial/MinikubeKubectlCmd 0.11
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
80 TestFunctional/serial/ExtraConfig 34.7
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 1.48
83 TestFunctional/serial/LogsFileCmd 1.37
84 TestFunctional/serial/InvalidService 4.55
86 TestFunctional/parallel/ConfigCmd 0.29
87 TestFunctional/parallel/DashboardCmd 11.88
88 TestFunctional/parallel/DryRun 0.28
89 TestFunctional/parallel/InternationalLanguage 0.15
90 TestFunctional/parallel/StatusCmd 0.83
94 TestFunctional/parallel/ServiceCmdConnect 23.47
95 TestFunctional/parallel/AddonsCmd 0.12
96 TestFunctional/parallel/PersistentVolumeClaim 45.03
98 TestFunctional/parallel/SSHCmd 0.42
99 TestFunctional/parallel/CpCmd 1.31
100 TestFunctional/parallel/MySQL 22.97
101 TestFunctional/parallel/FileSync 0.22
102 TestFunctional/parallel/CertSync 1.3
106 TestFunctional/parallel/NodeLabels 0.06
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
110 TestFunctional/parallel/License 0.57
111 TestFunctional/parallel/Version/short 0.05
112 TestFunctional/parallel/Version/components 0.59
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.38
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.36
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
117 TestFunctional/parallel/ImageCommands/ImageBuild 4.62
118 TestFunctional/parallel/ImageCommands/Setup 1.87
119 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
120 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
121 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
122 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.48
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.56
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.55
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.61
135 TestFunctional/parallel/ImageCommands/ImageRemove 1.16
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.83
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
138 TestFunctional/parallel/ServiceCmd/DeployApp 7.19
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
140 TestFunctional/parallel/ProfileCmd/profile_list 0.38
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
142 TestFunctional/parallel/MountCmd/any-port 8.58
143 TestFunctional/parallel/ServiceCmd/List 0.45
144 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
145 TestFunctional/parallel/ServiceCmd/HTTPS 0.31
146 TestFunctional/parallel/ServiceCmd/Format 0.3
147 TestFunctional/parallel/ServiceCmd/URL 0.32
148 TestFunctional/parallel/MountCmd/specific-port 1.86
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.83
150 TestFunctional/delete_echo-server_images 0.03
151 TestFunctional/delete_my-image_image 0.02
152 TestFunctional/delete_minikube_cached_images 0.02
156 TestMultiControlPlane/serial/StartCluster 193.13
157 TestMultiControlPlane/serial/DeployApp 7.17
158 TestMultiControlPlane/serial/PingHostFromPods 1.23
159 TestMultiControlPlane/serial/AddWorkerNode 56.93
160 TestMultiControlPlane/serial/NodeLabels 0.07
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
162 TestMultiControlPlane/serial/CopyFile 12.85
168 TestMultiControlPlane/serial/DeleteSecondaryNode 16.72
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.62
171 TestMultiControlPlane/serial/RestartCluster 318.66
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
173 TestMultiControlPlane/serial/AddSecondaryNode 78.37
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
178 TestJSONOutput/start/Command 90.74
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.71
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.61
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 6.64
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.2
206 TestMainNoArgs 0.04
207 TestMinikubeProfile 90.11
210 TestMountStart/serial/StartWithMountFirst 25.35
211 TestMountStart/serial/VerifyMountFirst 0.36
212 TestMountStart/serial/StartWithMountSecond 26.69
213 TestMountStart/serial/VerifyMountSecond 0.38
214 TestMountStart/serial/DeleteFirst 0.68
215 TestMountStart/serial/VerifyMountPostDelete 0.37
216 TestMountStart/serial/Stop 1.28
217 TestMountStart/serial/RestartStopped 22.94
218 TestMountStart/serial/VerifyMountPostStop 0.36
221 TestMultiNode/serial/FreshStart2Nodes 138.43
222 TestMultiNode/serial/DeployApp2Nodes 6.21
223 TestMultiNode/serial/PingHostFrom2Pods 0.78
224 TestMultiNode/serial/AddNode 48.18
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.58
227 TestMultiNode/serial/CopyFile 7.21
228 TestMultiNode/serial/StopNode 2.26
229 TestMultiNode/serial/StartAfterStop 38.91
231 TestMultiNode/serial/DeleteNode 2.45
233 TestMultiNode/serial/RestartMultiNode 177.65
234 TestMultiNode/serial/ValidateNameConflict 45.73
241 TestScheduledStopUnix 111.5
245 TestRunningBinaryUpgrade 196.51
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
254 TestNoKubernetes/serial/StartWithK8s 94.01
259 TestNetworkPlugins/group/false 2.98
263 TestStoppedBinaryUpgrade/Setup 2.66
264 TestStoppedBinaryUpgrade/Upgrade 143.03
265 TestNoKubernetes/serial/StartWithStopK8s 70.91
266 TestNoKubernetes/serial/Start 43.57
267 TestStoppedBinaryUpgrade/MinikubeLogs 0.88
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
276 TestNoKubernetes/serial/ProfileList 1.92
277 TestNoKubernetes/serial/Stop 1.82
278 TestNoKubernetes/serial/StartNoArgs 47.98
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
281 TestPause/serial/Start 98.02
282 TestNetworkPlugins/group/auto/Start 120.19
284 TestNetworkPlugins/group/auto/KubeletFlags 0.22
285 TestNetworkPlugins/group/auto/NetCatPod 10.28
286 TestNetworkPlugins/group/auto/DNS 0.19
287 TestNetworkPlugins/group/auto/Localhost 0.14
288 TestNetworkPlugins/group/auto/HairPin 0.14
289 TestNetworkPlugins/group/kindnet/Start 61.14
290 TestNetworkPlugins/group/calico/Start 100.06
291 TestNetworkPlugins/group/custom-flannel/Start 115.97
292 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
293 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
294 TestNetworkPlugins/group/kindnet/NetCatPod 10.31
295 TestNetworkPlugins/group/kindnet/DNS 0.21
296 TestNetworkPlugins/group/kindnet/Localhost 0.18
297 TestNetworkPlugins/group/kindnet/HairPin 0.19
298 TestNetworkPlugins/group/enable-default-cni/Start 83.56
299 TestNetworkPlugins/group/calico/ControllerPod 6.01
300 TestNetworkPlugins/group/flannel/Start 88.14
301 TestNetworkPlugins/group/calico/KubeletFlags 0.21
302 TestNetworkPlugins/group/calico/NetCatPod 10.24
303 TestNetworkPlugins/group/calico/DNS 0.24
304 TestNetworkPlugins/group/calico/Localhost 0.14
305 TestNetworkPlugins/group/calico/HairPin 0.17
306 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
307 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.27
308 TestNetworkPlugins/group/custom-flannel/DNS 0.22
309 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
310 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
311 TestNetworkPlugins/group/bridge/Start 64.92
314 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
315 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.06
316 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
317 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
318 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
319 TestNetworkPlugins/group/flannel/ControllerPod 6.01
320 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
321 TestNetworkPlugins/group/flannel/NetCatPod 15.29
322 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
323 TestNetworkPlugins/group/bridge/NetCatPod 14.3
325 TestStartStop/group/no-preload/serial/FirstStart 100.47
326 TestNetworkPlugins/group/bridge/DNS 0.17
327 TestNetworkPlugins/group/bridge/Localhost 0.13
328 TestNetworkPlugins/group/bridge/HairPin 0.14
329 TestNetworkPlugins/group/flannel/DNS 0.17
330 TestNetworkPlugins/group/flannel/Localhost 0.14
331 TestNetworkPlugins/group/flannel/HairPin 0.12
333 TestStartStop/group/embed-certs/serial/FirstStart 64.5
335 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 118.42
336 TestStartStop/group/embed-certs/serial/DeployApp 12.32
337 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.08
338 TestStartStop/group/no-preload/serial/DeployApp 10.3
340 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.04
342 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 12.28
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.02
349 TestStartStop/group/embed-certs/serial/SecondStart 668.56
350 TestStartStop/group/no-preload/serial/SecondStart 574.17
352 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 537.17
353 TestStartStop/group/old-k8s-version/serial/Stop 6.3
354 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
365 TestStartStop/group/newest-cni/serial/FirstStart 46.73
366 TestStartStop/group/newest-cni/serial/DeployApp 0
367 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.02
368 TestStartStop/group/newest-cni/serial/Stop 10.51
369 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
370 TestStartStop/group/newest-cni/serial/SecondStart 35.52
371 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
372 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
373 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
374 TestStartStop/group/newest-cni/serial/Pause 2.28
x
+
TestDownloadOnly/v1.20.0/json-events (25.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-816611 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-816611 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (25.669505602s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (25.67s)
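With -o=json, minikube emits one JSON event per line instead of human-readable output. A minimal sketch of filtering those events with jq; the CloudEvents type string and the .data.message field are assumptions about minikube's JSON schema, not taken from this log:

    out/minikube-linux-amd64 start -o=json --download-only -p download-only-816611 --force \
      --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2 \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'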

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0930 19:38:24.999806   14875 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0930 19:38:24.999891   14875 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-816611
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-816611: exit status 85 (57.429718ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-816611 | jenkins | v1.34.0 | 30 Sep 24 19:37 UTC |          |
	|         | -p download-only-816611        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 19:37:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 19:37:59.366695   14887 out.go:345] Setting OutFile to fd 1 ...
	I0930 19:37:59.366825   14887 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 19:37:59.366854   14887 out.go:358] Setting ErrFile to fd 2...
	I0930 19:37:59.366880   14887 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 19:37:59.367519   14887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	W0930 19:37:59.367697   14887 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19736-7672/.minikube/config/config.json: open /home/jenkins/minikube-integration/19736-7672/.minikube/config/config.json: no such file or directory
	I0930 19:37:59.368273   14887 out.go:352] Setting JSON to true
	I0930 19:37:59.369165   14887 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1222,"bootTime":1727723857,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 19:37:59.369266   14887 start.go:139] virtualization: kvm guest
	I0930 19:37:59.371612   14887 out.go:97] [download-only-816611] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0930 19:37:59.371744   14887 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball: no such file or directory
	I0930 19:37:59.371779   14887 notify.go:220] Checking for updates...
	I0930 19:37:59.373147   14887 out.go:169] MINIKUBE_LOCATION=19736
	I0930 19:37:59.374533   14887 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 19:37:59.375942   14887 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 19:37:59.377230   14887 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:37:59.378610   14887 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0930 19:37:59.381069   14887 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0930 19:37:59.381269   14887 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 19:37:59.487999   14887 out.go:97] Using the kvm2 driver based on user configuration
	I0930 19:37:59.488031   14887 start.go:297] selected driver: kvm2
	I0930 19:37:59.488038   14887 start.go:901] validating driver "kvm2" against <nil>
	I0930 19:37:59.488373   14887 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 19:37:59.488491   14887 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 19:37:59.503833   14887 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 19:37:59.503878   14887 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 19:37:59.504372   14887 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0930 19:37:59.504511   14887 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0930 19:37:59.504535   14887 cni.go:84] Creating CNI manager for ""
	I0930 19:37:59.504578   14887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 19:37:59.504585   14887 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 19:37:59.504646   14887 start.go:340] cluster config:
	{Name:download-only-816611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-816611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 19:37:59.504818   14887 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 19:37:59.507216   14887 out.go:97] Downloading VM boot image ...
	I0930 19:37:59.507261   14887 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19736-7672/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 19:38:08.682026   14887 out.go:97] Starting "download-only-816611" primary control-plane node in "download-only-816611" cluster
	I0930 19:38:08.682052   14887 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 19:38:08.779750   14887 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0930 19:38:08.779784   14887 cache.go:56] Caching tarball of preloaded images
	I0930 19:38:08.779970   14887 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 19:38:08.781791   14887 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0930 19:38:08.781835   14887 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0930 19:38:08.881265   14887 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0930 19:38:23.239787   14887 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0930 19:38:23.239883   14887 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0930 19:38:24.151664   14887 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0930 19:38:24.151997   14887 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/download-only-816611/config.json ...
	I0930 19:38:24.152026   14887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/download-only-816611/config.json: {Name:mkec7ae558901f4789182297ed0c631f25faab4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 19:38:24.152172   14887 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 19:38:24.152345   14887 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19736-7672/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-816611 host does not exist
	  To start a cluster, run: "minikube start -p download-only-816611"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
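The download URLs in the log above carry their verification material with them: the ISO is checked against a file:-referenced .sha256, and the preload tarball against md5:f93b07cde9c3289306cbaeb7a1803c19. A minimal sketch of re-checking the cached preload by hand, assuming the default ~/.minikube layout (this Jenkins run points MINIKUBE_HOME at /home/jenkins/minikube-integration/19736-7672/.minikube instead):

    cd ~/.minikube/cache/preloaded-tarball
    echo "f93b07cde9c3289306cbaeb7a1803c19  preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -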

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-816611
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (12.47s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-153563 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-153563 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.469477726s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (12.47s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0930 19:38:37.794972   14875 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0930 19:38:37.795018   14875 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-153563
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-153563: exit status 85 (57.087742ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-816611 | jenkins | v1.34.0 | 30 Sep 24 19:37 UTC |                     |
	|         | -p download-only-816611        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC | 30 Sep 24 19:38 UTC |
	| delete  | -p download-only-816611        | download-only-816611 | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC | 30 Sep 24 19:38 UTC |
	| start   | -o=json --download-only        | download-only-153563 | jenkins | v1.34.0 | 30 Sep 24 19:38 UTC |                     |
	|         | -p download-only-153563        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 19:38:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 19:38:25.363108   15140 out.go:345] Setting OutFile to fd 1 ...
	I0930 19:38:25.363367   15140 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 19:38:25.363376   15140 out.go:358] Setting ErrFile to fd 2...
	I0930 19:38:25.363381   15140 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 19:38:25.363605   15140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 19:38:25.364214   15140 out.go:352] Setting JSON to true
	I0930 19:38:25.365085   15140 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1248,"bootTime":1727723857,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 19:38:25.365191   15140 start.go:139] virtualization: kvm guest
	I0930 19:38:25.367308   15140 out.go:97] [download-only-153563] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 19:38:25.367483   15140 notify.go:220] Checking for updates...
	I0930 19:38:25.368717   15140 out.go:169] MINIKUBE_LOCATION=19736
	I0930 19:38:25.370020   15140 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 19:38:25.371410   15140 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 19:38:25.373252   15140 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:38:25.374629   15140 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0930 19:38:25.377141   15140 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0930 19:38:25.377412   15140 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 19:38:25.410274   15140 out.go:97] Using the kvm2 driver based on user configuration
	I0930 19:38:25.410305   15140 start.go:297] selected driver: kvm2
	I0930 19:38:25.410312   15140 start.go:901] validating driver "kvm2" against <nil>
	I0930 19:38:25.410663   15140 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 19:38:25.410746   15140 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-7672/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 19:38:25.426562   15140 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 19:38:25.426607   15140 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 19:38:25.427095   15140 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0930 19:38:25.427238   15140 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0930 19:38:25.427264   15140 cni.go:84] Creating CNI manager for ""
	I0930 19:38:25.427311   15140 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 19:38:25.427322   15140 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 19:38:25.427368   15140 start.go:340] cluster config:
	{Name:download-only-153563 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-153563 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 19:38:25.427462   15140 iso.go:125] acquiring lock: {Name:mkd089f21f4d5306d5af5defa4101731e6e67391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 19:38:25.429353   15140 out.go:97] Starting "download-only-153563" primary control-plane node in "download-only-153563" cluster
	I0930 19:38:25.429380   15140 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 19:38:25.931763   15140 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 19:38:25.931806   15140 cache.go:56] Caching tarball of preloaded images
	I0930 19:38:25.932005   15140 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 19:38:25.933810   15140 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0930 19:38:25.933841   15140 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0930 19:38:26.038335   15140 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19736-7672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-153563 host does not exist
	  To start a cluster, run: "minikube start -p download-only-153563"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)
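
The Last Start log above fetches the v1.31.1 CRI-O preload tarball and passes an md5 checksum in the download URL. Below is a minimal Go sketch of that verification step; it is illustrative only (not minikube's own download code), and the local file name is a placeholder, but the expected md5 value is the one from the ?checksum=md5:... query in the log.

	// Sketch: verify a downloaded preload tarball against the md5 checksum
	// advertised in the download URL above. The file path is a placeholder.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"os"
	)

	func main() {
		const wantMD5 = "aa79045e4550b9510ee496fee0d50abb" // from the download URL in the log
		f, err := os.Open("preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			log.Fatal(err)
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			log.Fatalf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		fmt.Println("preload tarball checksum OK")
	}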

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-153563
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I0930 19:38:38.360333   14875 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-728092 --alsologtostderr --binary-mirror http://127.0.0.1:33837 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-728092" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-728092
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
x
+
TestOffline (58.78s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-579164 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-579164 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (57.317826503s)
helpers_test.go:175: Cleaning up "offline-crio-579164" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-579164
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-579164: (1.458514396s)
--- PASS: TestOffline (58.78s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-857381
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-857381: exit status 85 (48.565272ms)

                                                
                                                
-- stdout --
	* Profile "addons-857381" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-857381"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-857381
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-857381: exit status 85 (47.668669ms)

                                                
                                                
-- stdout --
	* Profile "addons-857381" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-857381"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (133.1s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-857381 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-857381 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m13.100805827s)
--- PASS: TestAddons/Setup (133.10s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (3.02s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-857381 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-857381 get secret gcp-auth -n new-namespace
addons_test.go:608: (dbg) Non-zero exit: kubectl --context addons-857381 get secret gcp-auth -n new-namespace: exit status 1 (88.633851ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:600: (dbg) Run:  kubectl --context addons-857381 logs -l app=gcp-auth -n gcp-auth
I0930 19:40:52.363446   14875 retry.go:31] will retry after 2.703291004s: %!w(<nil>): gcp-auth container logs: 
-- stdout --
	2024/09/30 19:40:51 GCP Auth Webhook started!
	2024/09/30 19:40:52 Ready to marshal response ...
	2024/09/30 19:40:52 Ready to write response ...

                                                
                                                
-- /stdout --
addons_test.go:608: (dbg) Run:  kubectl --context addons-857381 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (3.02s)
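
The GCPAuth/Namespaces test above first fails to find the gcp-auth secret, and the harness logs "will retry after 2.703291004s" before the second kubectl call succeeds. The sketch below shows that general retry-with-backoff pattern in Go; it is an assumption for illustration, not the harness's retry.go, and check() is a hypothetical callback.

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs check up to attempts times, sleeping with exponential backoff
	// plus a little jitter between failures, and returns the last error (if any).
	func retry(attempts int, base time.Duration, check func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = check(); err == nil {
				return nil
			}
			sleep := time.Duration(1<<i)*base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
		}
		return err
	}

	func main() {
		tries := 0
		err := retry(4, 500*time.Millisecond, func() error {
			tries++
			if tries < 3 {
				return fmt.Errorf("secret %q not found yet", "gcp-auth")
			}
			return nil
		})
		fmt.Println("final result:", err)
	}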

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.83s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-b7zfb" [a81ace8b-9df8-4d4c-971d-e7fdaf31b9fe] Running
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004636245s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-857381
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-857381: (5.824457501s)
--- PASS: TestAddons/parallel/InspektorGadget (11.83s)

                                                
                                    
x
+
TestAddons/parallel/CSI (58.7s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0930 19:48:57.820011   14875 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 8.314062ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-857381 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-857381 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e10ca836-909f-423f-b82e-9d88c8a1261d] Pending
helpers_test.go:344: "task-pv-pod" [e10ca836-909f-423f-b82e-9d88c8a1261d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e10ca836-909f-423f-b82e-9d88c8a1261d] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.027195488s
addons_test.go:528: (dbg) Run:  kubectl --context addons-857381 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-857381 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-857381 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-857381 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-857381 delete pod task-pv-pod: (1.344073636s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-857381 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-857381 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-857381 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a4bde13a-8db4-4435-80e1-a460364fc731] Pending
helpers_test.go:344: "task-pv-pod-restore" [a4bde13a-8db4-4435-80e1-a460364fc731] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a4bde13a-8db4-4435-80e1-a460364fc731] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003926223s
addons_test.go:570: (dbg) Run:  kubectl --context addons-857381 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-857381 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-857381 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-857381 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-857381 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.2869572s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-857381 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (58.70s)
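
The CSI test above repeatedly shells out to kubectl, polling the PVC's .status.phase until it reports Bound before moving on to the pod, snapshot, and restore steps. A minimal Go sketch of that polling loop follows; the context and PVC names come from the log, while the 6-minute deadline and 2-second interval are assumptions.

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			// Same jsonpath query the helper lines above run on every poll.
			out, err := exec.Command("kubectl", "--context", "addons-857381",
				"get", "pvc", "hpvc", "-n", "default",
				"-o", "jsonpath={.status.phase}").Output()
			if phase := strings.TrimSpace(string(out)); err == nil && phase == "Bound" {
				fmt.Println("pvc hpvc is Bound")
				return
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatal("timed out waiting for pvc hpvc to become Bound")
	}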

                                                
                                    
x
+
TestAddons/parallel/Headlamp (20.33s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-857381 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-d6fqf" [b9ca43d3-a140-4a40-ac6e-db687e6b0e8c] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-d6fqf" [b9ca43d3-a140-4a40-ac6e-db687e6b0e8c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-d6fqf" [b9ca43d3-a140-4a40-ac6e-db687e6b0e8c] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004640053s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-857381 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-857381 addons disable headlamp --alsologtostderr -v=1: (6.379858589s)
--- PASS: TestAddons/parallel/Headlamp (20.33s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.54s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-jkshw" [5a325854-f886-42f9-bc32-9ec2207ea5cf] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004592246s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-857381
--- PASS: TestAddons/parallel/CloudSpanner (5.54s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (56.22s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-857381 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-857381 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-857381 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [cea76bb6-9c73-43ae-8a4b-9e2ae12f5ae0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [cea76bb6-9c73-43ae-8a4b-9e2ae12f5ae0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [cea76bb6-9c73-43ae-8a4b-9e2ae12f5ae0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.007117004s
addons_test.go:938: (dbg) Run:  kubectl --context addons-857381 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-857381 ssh "cat /opt/local-path-provisioner/pvc-2b406b11-e501-447a-83ed-ef44d83e41ee_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-857381 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-857381 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-857381 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-amd64 -p addons-857381 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.365963053s)
--- PASS: TestAddons/parallel/LocalPath (56.22s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.5s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-9vf5l" [f2848172-eec4-47cc-9e9d-36026e22b55c] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004235364s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-857381
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.50s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.85s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-zmnz7" [386f10c0-d375-4937-9e10-607a55aecd31] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004797608s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-857381 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-857381 addons disable yakd --alsologtostderr -v=1: (5.843381682s)
--- PASS: TestAddons/parallel/Yakd (11.85s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (7.56s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-857381
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-857381: (7.28956124s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-857381
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-857381
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-857381
--- PASS: TestAddons/StoppedEnableDisable (7.56s)

                                                
                                    
x
+
TestCertOptions (46.38s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-280515 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-280515 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (45.099769382s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-280515 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-280515 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-280515 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-280515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-280515
--- PASS: TestCertOptions (46.38s)
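
TestCertOptions above verifies the extra --apiserver-ips and --apiserver-names by running openssl x509 against /var/lib/minikube/certs/apiserver.crt inside the node. The same inspection can be done in Go; the sketch below is an assumption (it would have to run wherever that path is readable, e.g. inside minikube ssh or after copying the file out) and is not part of the test.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Path from the ssh check above; adjust if the cert has been copied locally.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found in apiserver.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)    // should include localhost and www.google.com
		fmt.Println("IP SANs: ", cert.IPAddresses) // should include 127.0.0.1 and 192.168.15.15
		fmt.Println("NotAfter:", cert.NotAfter)
	}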

                                                
                                    
x
+
TestCertExpiration (291.96s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-988243 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-988243 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m5.434130351s)
E0930 20:53:12.003639   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-988243 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E0930 20:55:55.311070   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-988243 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (45.464383154s)
helpers_test.go:175: Cleaning up "cert-expiration-988243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-988243
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-988243: (1.057524042s)
--- PASS: TestCertExpiration (291.96s)

                                                
                                    
x
+
TestForceSystemdFlag (47.21s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-188130 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-188130 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.219827765s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-188130 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-188130" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-188130
--- PASS: TestForceSystemdFlag (47.21s)

                                                
                                    
x
+
TestForceSystemdEnv (71.94s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-618322 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-618322 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m10.972296516s)
helpers_test.go:175: Cleaning up "force-systemd-env-618322" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-618322
--- PASS: TestForceSystemdEnv (71.94s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.43s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0930 20:50:53.314788   14875 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0930 20:50:53.314925   14875 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0930 20:50:53.347016   14875 install.go:62] docker-machine-driver-kvm2: exit status 1
W0930 20:50:53.347320   14875 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0930 20:50:53.347421   14875 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1716559435/001/docker-machine-driver-kvm2
I0930 20:50:53.567893   14875 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1716559435/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4670640 0x4670640 0x4670640 0x4670640 0x4670640 0x4670640 0x4670640] Decompressors:map[bz2:0xc00046f7d0 gz:0xc00046f7d8 tar:0xc00046f6d0 tar.bz2:0xc00046f700 tar.gz:0xc00046f710 tar.xz:0xc00046f730 tar.zst:0xc00046f790 tbz2:0xc00046f700 tgz:0xc00046f710 txz:0xc00046f730 tzst:0xc00046f790 xz:0xc00046f7e0 zip:0xc00046f840 zst:0xc00046f7e8] Getters:map[file:0xc0019143c0 http:0xc0008101e0 https:0xc000810230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0930 20:50:53.567940   14875 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1716559435/001/docker-machine-driver-kvm2
E0930 20:50:55.311123   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestKVMDriverInstallOrUpdate (4.43s)
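
The KVMDriverInstallOrUpdate log above shows the arch-specific driver download failing its checksum fetch with a 404, after which the code falls back to the "common" (un-suffixed) release asset. Below is a rough Go sketch of that try-then-fall-back pattern using the URLs from the log; the destination path is a placeholder, and real code would also verify the .sha256 checksum, which is omitted here.

	package main

	import (
		"fmt"
		"io"
		"log"
		"net/http"
		"os"
	)

	// fetch downloads url to dst, treating any non-200 status as an error.
	func fetch(url, dst string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("bad response code: %d", resp.StatusCode)
		}
		f, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(f, resp.Body)
		return err
	}

	func main() {
		base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/"
		candidates := []string{
			base + "docker-machine-driver-kvm2-amd64", // arch-specific asset, tried first
			base + "docker-machine-driver-kvm2",       // common fallback from the log
		}
		for _, url := range candidates {
			if err := fetch(url, "/tmp/docker-machine-driver-kvm2"); err != nil {
				log.Printf("download of %s failed: %v; trying to get the common version", url, err)
				continue
			}
			fmt.Println("downloaded driver from", url)
			return
		}
		log.Fatal("all driver download attempts failed")
	}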

                                                
                                    
x
+
TestErrorSpam/setup (43.39s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-480085 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-480085 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-480085 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-480085 --driver=kvm2  --container-runtime=crio: (43.385868014s)
--- PASS: TestErrorSpam/setup (43.39s)

                                                
                                    
x
+
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480085 --log_dir /tmp/nospam-480085 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480085 --log_dir /tmp/nospam-480085 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480085 --log_dir /tmp/nospam-480085 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
x
+
TestErrorSpam/status (0.74s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480085 --log_dir /tmp/nospam-480085 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480085 --log_dir /tmp/nospam-480085 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480085 --log_dir /tmp/nospam-480085 status
--- PASS: TestErrorSpam/status (0.74s)

                                                
                                    
x
+
TestErrorSpam/pause (1.61s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480085 --log_dir /tmp/nospam-480085 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480085 --log_dir /tmp/nospam-480085 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480085 --log_dir /tmp/nospam-480085 pause
--- PASS: TestErrorSpam/pause (1.61s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.65s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480085 --log_dir /tmp/nospam-480085 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480085 --log_dir /tmp/nospam-480085 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480085 --log_dir /tmp/nospam-480085 unpause
--- PASS: TestErrorSpam/unpause (1.65s)

                                                
                                    
x
+
TestErrorSpam/stop (5.1s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480085 --log_dir /tmp/nospam-480085 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-480085 --log_dir /tmp/nospam-480085 stop: (1.609146187s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480085 --log_dir /tmp/nospam-480085 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-480085 --log_dir /tmp/nospam-480085 stop: (1.825538048s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480085 --log_dir /tmp/nospam-480085 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-480085 --log_dir /tmp/nospam-480085 stop: (1.669586099s)
--- PASS: TestErrorSpam/stop (5.10s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19736-7672/.minikube/files/etc/test/nested/copy/14875/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (81.32s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-750630 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0930 19:55:55.311112   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
E0930 19:55:55.317538   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
E0930 19:55:55.328943   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
E0930 19:55:55.350391   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
E0930 19:55:55.391767   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
E0930 19:55:55.473328   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
E0930 19:55:55.634926   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
E0930 19:55:55.956746   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
E0930 19:55:56.598828   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
E0930 19:55:57.880422   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
E0930 19:56:00.442122   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
E0930 19:56:05.564302   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
E0930 19:56:15.805741   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
E0930 19:56:36.287262   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-750630 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m21.320327218s)
--- PASS: TestFunctional/serial/StartWithProxy (81.32s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (45.23s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0930 19:56:52.070105   14875 config.go:182] Loaded profile config "functional-750630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-750630 --alsologtostderr -v=8
E0930 19:57:17.249966   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-750630 --alsologtostderr -v=8: (45.232375833s)
functional_test.go:663: soft start took 45.232982068s for "functional-750630" cluster.
I0930 19:57:37.302835   14875 config.go:182] Loaded profile config "functional-750630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (45.23s)
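
The "Loaded profile config" lines above print three fields of the profile's cluster config. The Go sketch below reads a profile's config.json and prints the same fields; the $MINIKUBE_HOME/profiles/<name>/config.json layout and the JSON field names are assumptions inferred from the cluster config dump earlier in this report, not taken from minikube's source.

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os"
		"path/filepath"
	)

	// Only the fields we want to print; everything else in config.json is ignored.
	type clusterConfig struct {
		Driver           string
		KubernetesConfig struct {
			KubernetesVersion string
			ContainerRuntime  string
		}
	}

	func main() {
		home := os.Getenv("MINIKUBE_HOME") // e.g. the .minikube directory used by this job
		path := filepath.Join(home, "profiles", "functional-750630", "config.json")
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		var cfg clusterConfig
		if err := json.Unmarshal(data, &cfg); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("Driver=%s, ContainerRuntime=%s, KubernetesVersion=%s\n",
			cfg.Driver, cfg.KubernetesConfig.ContainerRuntime, cfg.KubernetesConfig.KubernetesVersion)
	}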

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-750630 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-750630 cache add registry.k8s.io/pause:3.1: (1.365040834s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-750630 cache add registry.k8s.io/pause:3.3: (1.39361384s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-750630 cache add registry.k8s.io/pause:latest: (1.296132147s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-750630 /tmp/TestFunctionalserialCacheCmdcacheadd_local919031766/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 cache add minikube-local-cache-test:functional-750630
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-750630 cache add minikube-local-cache-test:functional-750630: (1.805803245s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 cache delete minikube-local-cache-test:functional-750630
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-750630
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.17s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-750630 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (221.589805ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-750630 cache reload: (1.019640019s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)
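
The cache_reload steps above remove a cached image from the node's runtime and restore it with cache reload; the intermediate crictl inspecti failure (exit status 1) is expected while the image is absent. The same sequence, sketched from the commands in the log:

	out/minikube-linux-amd64 -p functional-750630 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-750630 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
	out/minikube-linux-amd64 -p functional-750630 cache reload                                            # pushes cached images back to the node
	out/minikube-linux-amd64 -p functional-750630 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again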

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 kubectl -- --context functional-750630 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-750630 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (34.7s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-750630 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-750630 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.696552934s)
functional_test.go:761: restart took 34.6966988s for "functional-750630" cluster.
I0930 19:58:20.695505   14875 config.go:182] Loaded profile config "functional-750630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (34.70s)
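
ExtraConfig restarts the cluster with an extra apiserver admission-plugin flag and waits for all components to come back. Reproduced as a single command from the log:

	out/minikube-linux-amd64 start -p functional-750630 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all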

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-750630 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
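
ComponentHealth lists the control-plane pods as JSON and asserts phase Running / condition Ready for etcd, kube-apiserver, kube-controller-manager and kube-scheduler. A quick hand check with jsonpath instead of the test's own JSON parsing (the expression is illustrative, not the test code):

	kubectl --context functional-750630 get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'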

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-750630 logs: (1.483673263s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 logs --file /tmp/TestFunctionalserialLogsFileCmd4231302310/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-750630 logs --file /tmp/TestFunctionalserialLogsFileCmd4231302310/001/logs.txt: (1.365488199s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.37s)
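
The two logs tests exercise writing cluster logs to stdout and to a file; the path below is a placeholder for the temporary file used in the run:

	out/minikube-linux-amd64 -p functional-750630 logs
	out/minikube-linux-amd64 -p functional-750630 logs --file /tmp/logs.txt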

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.55s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-750630 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-750630
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-750630: exit status 115 (293.462916ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.202:31134 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-750630 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-750630 delete -f testdata/invalidsvc.yaml: (1.049620543s)
--- PASS: TestFunctional/serial/InvalidService (4.55s)
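
InvalidService shows the expected failure mode of minikube service when no pod backs the Service: the NodePort URL is still printed, but the command exits with status 115 and an SVC_UNREACHABLE message. Sketched from the log:

	kubectl --context functional-750630 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-amd64 service invalid-svc -p functional-750630    # exit 115: no running pod for service invalid-svc
	kubectl --context functional-750630 delete -f testdata/invalidsvc.yaml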

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-750630 config get cpus: exit status 14 (42.944463ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-750630 config get cpus: exit status 14 (44.8279ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)
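
ConfigCmd walks the set/get/unset cycle; config get on a missing key exits with status 14, which is what the two Non-zero exits above verify:

	out/minikube-linux-amd64 -p functional-750630 config set cpus 2
	out/minikube-linux-amd64 -p functional-750630 config get cpus      # prints 2
	out/minikube-linux-amd64 -p functional-750630 config unset cpus
	out/minikube-linux-amd64 -p functional-750630 config get cpus      # exit 14: key not found in config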

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (11.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-750630 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-750630 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 25460: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.88s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-750630 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-750630 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (141.81162ms)

                                                
                                                
-- stdout --
	* [functional-750630] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 19:58:54.855518   25164 out.go:345] Setting OutFile to fd 1 ...
	I0930 19:58:54.855664   25164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 19:58:54.855675   25164 out.go:358] Setting ErrFile to fd 2...
	I0930 19:58:54.855680   25164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 19:58:54.855879   25164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 19:58:54.856431   25164 out.go:352] Setting JSON to false
	I0930 19:58:54.857354   25164 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2478,"bootTime":1727723857,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 19:58:54.857449   25164 start.go:139] virtualization: kvm guest
	I0930 19:58:54.860041   25164 out.go:177] * [functional-750630] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 19:58:54.862061   25164 notify.go:220] Checking for updates...
	I0930 19:58:54.862112   25164 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 19:58:54.863657   25164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 19:58:54.865080   25164 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 19:58:54.866633   25164 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:58:54.867994   25164 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 19:58:54.869327   25164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 19:58:54.871553   25164 config.go:182] Loaded profile config "functional-750630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 19:58:54.872176   25164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:58:54.872233   25164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:58:54.887895   25164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39069
	I0930 19:58:54.888404   25164 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:58:54.888998   25164 main.go:141] libmachine: Using API Version  1
	I0930 19:58:54.889024   25164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:58:54.889404   25164 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:58:54.889616   25164 main.go:141] libmachine: (functional-750630) Calling .DriverName
	I0930 19:58:54.889863   25164 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 19:58:54.890156   25164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:58:54.890191   25164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:58:54.906988   25164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44515
	I0930 19:58:54.907476   25164 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:58:54.907975   25164 main.go:141] libmachine: Using API Version  1
	I0930 19:58:54.908002   25164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:58:54.908475   25164 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:58:54.908761   25164 main.go:141] libmachine: (functional-750630) Calling .DriverName
	I0930 19:58:54.946743   25164 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 19:58:54.948424   25164 start.go:297] selected driver: kvm2
	I0930 19:58:54.948442   25164 start.go:901] validating driver "kvm2" against &{Name:functional-750630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-750630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 19:58:54.948590   25164 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 19:58:54.951197   25164 out.go:201] 
	W0930 19:58:54.952643   25164 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0930 19:58:54.954069   25164 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-750630 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
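
DryRun validates flags without creating anything: an undersized --memory request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23), while the same command without the bad flag passes validation. From the log:

	out/minikube-linux-amd64 start -p functional-750630 --dry-run --memory 250MB \
	  --driver=kvm2 --container-runtime=crio      # exit 23: 250MiB < 1800MB minimum
	out/minikube-linux-amd64 start -p functional-750630 --dry-run \
	  --driver=kvm2 --container-runtime=crio      # passes validation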

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-750630 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-750630 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (149.070912ms)

                                                
                                                
-- stdout --
	* [functional-750630] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 19:58:54.717905   25131 out.go:345] Setting OutFile to fd 1 ...
	I0930 19:58:54.718034   25131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 19:58:54.718041   25131 out.go:358] Setting ErrFile to fd 2...
	I0930 19:58:54.718047   25131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 19:58:54.718331   25131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 19:58:54.718877   25131 out.go:352] Setting JSON to false
	I0930 19:58:54.719807   25131 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2478,"bootTime":1727723857,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 19:58:54.719904   25131 start.go:139] virtualization: kvm guest
	I0930 19:58:54.722107   25131 out.go:177] * [functional-750630] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0930 19:58:54.723557   25131 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 19:58:54.723551   25131 notify.go:220] Checking for updates...
	I0930 19:58:54.725236   25131 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 19:58:54.726785   25131 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 19:58:54.728214   25131 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 19:58:54.729574   25131 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 19:58:54.731131   25131 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 19:58:54.733433   25131 config.go:182] Loaded profile config "functional-750630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 19:58:54.734039   25131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:58:54.734120   25131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:58:54.749368   25131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46759
	I0930 19:58:54.749783   25131 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:58:54.750363   25131 main.go:141] libmachine: Using API Version  1
	I0930 19:58:54.750379   25131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:58:54.750798   25131 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:58:54.751005   25131 main.go:141] libmachine: (functional-750630) Calling .DriverName
	I0930 19:58:54.751236   25131 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 19:58:54.751614   25131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 19:58:54.751652   25131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 19:58:54.766930   25131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42241
	I0930 19:58:54.767368   25131 main.go:141] libmachine: () Calling .GetVersion
	I0930 19:58:54.767939   25131 main.go:141] libmachine: Using API Version  1
	I0930 19:58:54.767966   25131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 19:58:54.768274   25131 main.go:141] libmachine: () Calling .GetMachineName
	I0930 19:58:54.768471   25131 main.go:141] libmachine: (functional-750630) Calling .DriverName
	I0930 19:58:54.804336   25131 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0930 19:58:54.806253   25131 start.go:297] selected driver: kvm2
	I0930 19:58:54.806270   25131 start.go:901] validating driver "kvm2" against &{Name:functional-750630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-750630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 19:58:54.806385   25131 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 19:58:54.808801   25131 out.go:201] 
	W0930 19:58:54.810336   25131 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0930 19:58:54.811974   25131 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
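
InternationalLanguage repeats the undersized dry run and expects the French message catalog. The log does not show how the locale is selected; assuming the standard locale environment variables are honored, a sketch would be:

	LC_ALL=fr out/minikube-linux-amd64 start -p functional-750630 --dry-run --memory 250MB \
	  --driver=kvm2 --container-runtime=crio      # "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ..."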

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.83s)
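
StatusCmd exercises the default, templated and JSON status output; the Go template below mirrors the one in the log (with the field label spelled out):

	out/minikube-linux-amd64 -p functional-750630 status
	out/minikube-linux-amd64 -p functional-750630 status \
	  -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	out/minikube-linux-amd64 -p functional-750630 status -o json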

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (23.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-750630 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-750630 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-bb2kf" [d9a6e967-a3a8-4181-8a4a-114c1fb98128] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-bb2kf" [d9a6e967-a3a8-4181-8a4a-114c1fb98128] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 23.00578867s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.202:30787
functional_test.go:1675: http://192.168.39.202:30787: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-bb2kf

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.202:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.202:30787
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (23.47s)
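
ServiceCmdConnect deploys echoserver, exposes it as a NodePort and resolves the URL through minikube service; the echoserver response above confirms end-to-end reachability. Sketched from the log, with the curl line as an illustrative extra step:

	kubectl --context functional-750630 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-750630 expose deployment hello-node-connect --type=NodePort --port=8080
	out/minikube-linux-amd64 -p functional-750630 service hello-node-connect --url   # e.g. http://192.168.39.202:30787
	curl -s "$(out/minikube-linux-amd64 -p functional-750630 service hello-node-connect --url)"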

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (45.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [76de6dbe-0584-4ec4-bb40-763c237ae467] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004194126s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-750630 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-750630 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-750630 get pvc myclaim -o=json
I0930 19:58:35.644632   14875 retry.go:31] will retry after 1.130892631s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:16688c47-5953-4e21-a9d2-914edc2fd495 ResourceVersion:729 Generation:0 CreationTimestamp:2024-09-30 19:58:35 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0014e0be0 VolumeMode:0xc0014e0c00 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-750630 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-750630 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2db56735-aade-4d71-bdc8-5791a67b8e26] Pending
helpers_test.go:344: "sp-pod" [2db56735-aade-4d71-bdc8-5791a67b8e26] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0930 19:58:39.172209   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [2db56735-aade-4d71-bdc8-5791a67b8e26] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.004129519s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-750630 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-750630 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-750630 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cf0f2f7b-ff10-49bc-a487-7682e7c9d42d] Pending
helpers_test.go:344: "sp-pod" [cf0f2f7b-ff10-49bc-a487-7682e7c9d42d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cf0f2f7b-ff10-49bc-a487-7682e7c9d42d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.00466091s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-750630 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.03s)
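
PersistentVolumeClaim verifies that data written to a hostpath-provisioned volume survives pod recreation. The essential flow, condensed from the commands in the log:

	kubectl --context functional-750630 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-750630 get pvc myclaim -o jsonpath='{.status.phase}'   # Pending until the provisioner binds it
	kubectl --context functional-750630 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-750630 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-750630 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-750630 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-750630 exec sp-pod -- ls /tmp/mount                    # foo survives the pod recreation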

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh -n functional-750630 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 cp functional-750630:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1125357414/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh -n functional-750630 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh -n functional-750630 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.31s)
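
CpCmd copies a file in both directions and reads it back over ssh; the local destination below is a placeholder for the test's temp directory:

	out/minikube-linux-amd64 -p functional-750630 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host -> node
	out/minikube-linux-amd64 -p functional-750630 ssh -n functional-750630 "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-amd64 -p functional-750630 cp functional-750630:/home/docker/cp-test.txt ./cp-test.txt   # node -> host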

                                                
                                    
x
+
TestFunctional/parallel/MySQL (22.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-750630 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-c9zzx" [a7eea0c4-761b-4b97-af25-aac8afdf37a2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-c9zzx" [a7eea0c4-761b-4b97-af25-aac8afdf37a2] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.004701994s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-750630 exec mysql-6cdb49bbb-c9zzx -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-750630 exec mysql-6cdb49bbb-c9zzx -- mysql -ppassword -e "show databases;": exit status 1 (153.55435ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0930 19:58:49.094208   14875 retry.go:31] will retry after 960.626795ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-750630 exec mysql-6cdb49bbb-c9zzx -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-750630 exec mysql-6cdb49bbb-c9zzx -- mysql -ppassword -e "show databases;": exit status 1 (427.072656ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0930 19:58:50.482867   14875 retry.go:31] will retry after 1.020922462s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-750630 exec mysql-6cdb49bbb-c9zzx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.97s)
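
MySQL deploys testdata/mysql.yaml and polls with mysql -e "show databases;"; the ERROR 2002 retries above are normal while mysqld is still initializing inside the container. A condensed sketch that targets the Deployment instead of the generated pod name:

	kubectl --context functional-750630 replace --force -f testdata/mysql.yaml
	kubectl --context functional-750630 wait --for=condition=available deployment/mysql --timeout=10m
	kubectl --context functional-750630 exec deploy/mysql -- mysql -ppassword -e "show databases;"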

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/14875/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh "sudo cat /etc/test/nested/copy/14875/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/14875.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh "sudo cat /etc/ssl/certs/14875.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/14875.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh "sudo cat /usr/share/ca-certificates/14875.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/148752.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh "sudo cat /etc/ssl/certs/148752.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/148752.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh "sudo cat /usr/share/ca-certificates/148752.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.30s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-750630 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-750630 ssh "sudo systemctl is-active docker": exit status 1 (223.3952ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-750630 ssh "sudo systemctl is-active containerd": exit status 1 (209.041673ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)
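
NonActiveRuntimeDisabled confirms that the runtimes not selected for this profile are inactive: systemctl is-active prints "inactive" and exits 3 for docker and containerd, while the configured runtime should report active (the crio check is an illustrative extra, not part of the test):

	out/minikube-linux-amd64 -p functional-750630 ssh "sudo systemctl is-active docker"       # inactive, exit 3
	out/minikube-linux-amd64 -p functional-750630 ssh "sudo systemctl is-active containerd"   # inactive, exit 3
	out/minikube-linux-amd64 -p functional-750630 ssh "sudo systemctl is-active crio"         # expected: active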

                                                
                                    
x
+
TestFunctional/parallel/License (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-750630 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-750630
localhost/kicbase/echo-server:functional-750630
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-750630 image ls --format short --alsologtostderr:
I0930 19:58:56.788914   25480 out.go:345] Setting OutFile to fd 1 ...
I0930 19:58:56.789183   25480 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 19:58:56.789193   25480 out.go:358] Setting ErrFile to fd 2...
I0930 19:58:56.789198   25480 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 19:58:56.789386   25480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
I0930 19:58:56.789967   25480 config.go:182] Loaded profile config "functional-750630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 19:58:56.790060   25480 config.go:182] Loaded profile config "functional-750630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 19:58:56.790421   25480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0930 19:58:56.790470   25480 main.go:141] libmachine: Launching plugin server for driver kvm2
I0930 19:58:56.806350   25480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44841
I0930 19:58:56.806713   25480 main.go:141] libmachine: () Calling .GetVersion
I0930 19:58:56.807285   25480 main.go:141] libmachine: Using API Version  1
I0930 19:58:56.807318   25480 main.go:141] libmachine: () Calling .SetConfigRaw
I0930 19:58:56.807721   25480 main.go:141] libmachine: () Calling .GetMachineName
I0930 19:58:56.807880   25480 main.go:141] libmachine: (functional-750630) Calling .GetState
I0930 19:58:56.809816   25480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0930 19:58:56.809860   25480 main.go:141] libmachine: Launching plugin server for driver kvm2
I0930 19:58:56.826777   25480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34469
I0930 19:58:56.827408   25480 main.go:141] libmachine: () Calling .GetVersion
I0930 19:58:56.828019   25480 main.go:141] libmachine: Using API Version  1
I0930 19:58:56.828062   25480 main.go:141] libmachine: () Calling .SetConfigRaw
I0930 19:58:56.828486   25480 main.go:141] libmachine: () Calling .GetMachineName
I0930 19:58:56.828667   25480 main.go:141] libmachine: (functional-750630) Calling .DriverName
I0930 19:58:56.828918   25480 ssh_runner.go:195] Run: systemctl --version
I0930 19:58:56.828948   25480 main.go:141] libmachine: (functional-750630) Calling .GetSSHHostname
I0930 19:58:56.832545   25480 main.go:141] libmachine: (functional-750630) DBG | domain functional-750630 has defined MAC address 52:54:00:99:2a:57 in network mk-functional-750630
I0930 19:58:56.832994   25480 main.go:141] libmachine: (functional-750630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:2a:57", ip: ""} in network mk-functional-750630: {Iface:virbr1 ExpiryTime:2024-09-30 20:55:44 +0000 UTC Type:0 Mac:52:54:00:99:2a:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:functional-750630 Clientid:01:52:54:00:99:2a:57}
I0930 19:58:56.833028   25480 main.go:141] libmachine: (functional-750630) DBG | domain functional-750630 has defined IP address 192.168.39.202 and MAC address 52:54:00:99:2a:57 in network mk-functional-750630
I0930 19:58:56.833206   25480 main.go:141] libmachine: (functional-750630) Calling .GetSSHPort
I0930 19:58:56.833394   25480 main.go:141] libmachine: (functional-750630) Calling .GetSSHKeyPath
I0930 19:58:56.833554   25480 main.go:141] libmachine: (functional-750630) Calling .GetSSHUsername
I0930 19:58:56.833727   25480 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/functional-750630/id_rsa Username:docker}
I0930 19:58:56.940050   25480 ssh_runner.go:195] Run: sudo crictl images --output json
I0930 19:58:57.017463   25480 main.go:141] libmachine: Making call to close driver server
I0930 19:58:57.017480   25480 main.go:141] libmachine: (functional-750630) Calling .Close
I0930 19:58:57.017867   25480 main.go:141] libmachine: Successfully made call to close driver server
I0930 19:58:57.017885   25480 main.go:141] libmachine: Making call to close connection to plugin binary
I0930 19:58:57.017895   25480 main.go:141] libmachine: Making call to close driver server
I0930 19:58:57.017903   25480 main.go:141] libmachine: (functional-750630) Calling .Close
I0930 19:58:57.018132   25480 main.go:141] libmachine: Successfully made call to close driver server
I0930 19:58:57.018146   25480 main.go:141] libmachine: Making call to close connection to plugin binary
I0930 19:58:57.018171   25480 main.go:141] libmachine: (functional-750630) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
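Every "(dbg) Run:" entry in this report corresponds to shelling out to the freshly built minikube binary against the functional-750630 profile and capturing stdout and stderr separately (both are echoed back as the "Stdout:" / "Stderr:" blocks). The following is a minimal Go sketch of that pattern, not the actual helper from functional_test.go, and it assumes it is run from the repository root where out/minikube-linux-amd64 is produced:

package main

import (
    "bytes"
    "fmt"
    "os/exec"
)

func main() {
    // Run the built minikube binary against the functional-750630 profile and
    // capture stdout and stderr separately, mirroring the "(dbg) Run:" /
    // "Stdout:" / "Stderr:" lines in this report.
    cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-750630",
        "image", "ls", "--format", "short", "--alsologtostderr")
    var stdout, stderr bytes.Buffer
    cmd.Stdout = &stdout
    cmd.Stderr = &stderr
    if err := cmd.Run(); err != nil {
        fmt.Println("non-zero exit:", err)
    }
    fmt.Print("Stdout:\n", stdout.String())
    fmt.Print("Stderr:\n", stderr.String())
}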

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-750630 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/my-image                      | functional-750630  | ffb366838c191 | 1.47MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/kicbase/echo-server           | functional-750630  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-750630  | db877e3eea218 | 3.33kB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | 9527c0f683c3b | 192MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-750630 image ls --format table --alsologtostderr:
I0930 19:59:02.293331   25865 out.go:345] Setting OutFile to fd 1 ...
I0930 19:59:02.293605   25865 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 19:59:02.293615   25865 out.go:358] Setting ErrFile to fd 2...
I0930 19:59:02.293620   25865 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 19:59:02.293827   25865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
I0930 19:59:02.294404   25865 config.go:182] Loaded profile config "functional-750630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 19:59:02.294507   25865 config.go:182] Loaded profile config "functional-750630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 19:59:02.294868   25865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0930 19:59:02.294912   25865 main.go:141] libmachine: Launching plugin server for driver kvm2
I0930 19:59:02.310684   25865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36743
I0930 19:59:02.311227   25865 main.go:141] libmachine: () Calling .GetVersion
I0930 19:59:02.311869   25865 main.go:141] libmachine: Using API Version  1
I0930 19:59:02.311893   25865 main.go:141] libmachine: () Calling .SetConfigRaw
I0930 19:59:02.312355   25865 main.go:141] libmachine: () Calling .GetMachineName
I0930 19:59:02.312624   25865 main.go:141] libmachine: (functional-750630) Calling .GetState
I0930 19:59:02.314996   25865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0930 19:59:02.315055   25865 main.go:141] libmachine: Launching plugin server for driver kvm2
I0930 19:59:02.331088   25865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43893
I0930 19:59:02.331652   25865 main.go:141] libmachine: () Calling .GetVersion
I0930 19:59:02.332247   25865 main.go:141] libmachine: Using API Version  1
I0930 19:59:02.332277   25865 main.go:141] libmachine: () Calling .SetConfigRaw
I0930 19:59:02.332651   25865 main.go:141] libmachine: () Calling .GetMachineName
I0930 19:59:02.332832   25865 main.go:141] libmachine: (functional-750630) Calling .DriverName
I0930 19:59:02.333072   25865 ssh_runner.go:195] Run: systemctl --version
I0930 19:59:02.333111   25865 main.go:141] libmachine: (functional-750630) Calling .GetSSHHostname
I0930 19:59:02.336932   25865 main.go:141] libmachine: (functional-750630) DBG | domain functional-750630 has defined MAC address 52:54:00:99:2a:57 in network mk-functional-750630
I0930 19:59:02.337384   25865 main.go:141] libmachine: (functional-750630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:2a:57", ip: ""} in network mk-functional-750630: {Iface:virbr1 ExpiryTime:2024-09-30 20:55:44 +0000 UTC Type:0 Mac:52:54:00:99:2a:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:functional-750630 Clientid:01:52:54:00:99:2a:57}
I0930 19:59:02.337418   25865 main.go:141] libmachine: (functional-750630) DBG | domain functional-750630 has defined IP address 192.168.39.202 and MAC address 52:54:00:99:2a:57 in network mk-functional-750630
I0930 19:59:02.337622   25865 main.go:141] libmachine: (functional-750630) Calling .GetSSHPort
I0930 19:59:02.337915   25865 main.go:141] libmachine: (functional-750630) Calling .GetSSHKeyPath
I0930 19:59:02.338147   25865 main.go:141] libmachine: (functional-750630) Calling .GetSSHUsername
I0930 19:59:02.338330   25865 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/functional-750630/id_rsa Username:docker}
I0930 19:59:02.464981   25865 ssh_runner.go:195] Run: sudo crictl images --output json
I0930 19:59:02.548901   25865 main.go:141] libmachine: Making call to close driver server
I0930 19:59:02.548925   25865 main.go:141] libmachine: (functional-750630) Calling .Close
I0930 19:59:02.549328   25865 main.go:141] libmachine: Successfully made call to close driver server
I0930 19:59:02.549331   25865 main.go:141] libmachine: (functional-750630) DBG | Closing plugin on server side
I0930 19:59:02.549346   25865 main.go:141] libmachine: Making call to close connection to plugin binary
I0930 19:59:02.549355   25865 main.go:141] libmachine: Making call to close driver server
I0930 19:59:02.549362   25865 main.go:141] libmachine: (functional-750630) Calling .Close
I0930 19:59:02.549607   25865 main.go:141] libmachine: Successfully made call to close driver server
I0930 19:59:02.549627   25865 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-750630 image ls --format json --alsologtostderr:
[{"id":"ffb366838c191948b5e0d1fb67b28d7a18013f25cc273f68d571bedcab0f393b","repoDigests":["localhost/my-image@sha256:c5003a793c045d86cd29c83afc885c1f685c0928ec4ad71edc6eaf784aa1647a"],"repoTags":["localhost/my-image:functional-750630"],"size":"1468600"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b9
2effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"f4e55d8dd30a54021da30f3b2690dc1b0c62b3fc78ffcd1e256488cde3dda10e","repoDigests":["docker.io/library/d891719e2e56ae61732fc2dbe0fb40a6aed1b618bbc0b3783d0d61f6b99a37ed-tmp@sha256:e3fcba70d9300293377014b3aa735dff0a96226c50c7b53e08742f5b1608e7cf"],"repoTags":[],"size":"1466018"},{"id":"db877e3eea218c3ba27f72f61d6a40dcdc5aea10e8b871f3c4b4e801ea9afcac","repoDigests":["localhost/minikube-local-cache-test@sha256:eb0fd075a8da7e213bd6ec5604d9f176724f5148b2ce52f2edfcdde365439658"],"repoTags":["localhost/minikube-local-cache-test:functional-750630"],"size":"3330"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f10
4f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-750630"],"size":"4943877"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61
fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc19351790
5e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631
262"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"9527c0f683c3b2f0465019f9f5456f01a0fc0d4d274466831b9910a21d0302cd","repoDigests":["docker.io/library/nginx@sha256:10b61fc3d8262c8bf44c89aef3d81202ce12b8cba12fff2e32ca5978a2d88c2b","docker.io/library/nginx@sha256:b5d3f3e104699f0768e5ca8626914c16e52647943c65274d8a9e63072bd015bb"],"repoTags":["docker.io/library/nginx:latest"],"size":"191853881"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
,"repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-750630 image ls --format json --alsologtostderr:
I0930 19:59:01.931758   25812 out.go:345] Setting OutFile to fd 1 ...
I0930 19:59:01.932076   25812 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 19:59:01.932088   25812 out.go:358] Setting ErrFile to fd 2...
I0930 19:59:01.932095   25812 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 19:59:01.932391   25812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
I0930 19:59:01.933263   25812 config.go:182] Loaded profile config "functional-750630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 19:59:01.933419   25812 config.go:182] Loaded profile config "functional-750630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 19:59:01.934020   25812 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0930 19:59:01.934082   25812 main.go:141] libmachine: Launching plugin server for driver kvm2
I0930 19:59:01.950949   25812 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
I0930 19:59:01.951570   25812 main.go:141] libmachine: () Calling .GetVersion
I0930 19:59:01.952218   25812 main.go:141] libmachine: Using API Version  1
I0930 19:59:01.952246   25812 main.go:141] libmachine: () Calling .SetConfigRaw
I0930 19:59:01.952590   25812 main.go:141] libmachine: () Calling .GetMachineName
I0930 19:59:01.952828   25812 main.go:141] libmachine: (functional-750630) Calling .GetState
I0930 19:59:01.955602   25812 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0930 19:59:01.955670   25812 main.go:141] libmachine: Launching plugin server for driver kvm2
I0930 19:59:01.971652   25812 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42923
I0930 19:59:01.972226   25812 main.go:141] libmachine: () Calling .GetVersion
I0930 19:59:01.972908   25812 main.go:141] libmachine: Using API Version  1
I0930 19:59:01.972942   25812 main.go:141] libmachine: () Calling .SetConfigRaw
I0930 19:59:01.973359   25812 main.go:141] libmachine: () Calling .GetMachineName
I0930 19:59:01.973613   25812 main.go:141] libmachine: (functional-750630) Calling .DriverName
I0930 19:59:01.973823   25812 ssh_runner.go:195] Run: systemctl --version
I0930 19:59:01.973857   25812 main.go:141] libmachine: (functional-750630) Calling .GetSSHHostname
I0930 19:59:01.977262   25812 main.go:141] libmachine: (functional-750630) DBG | domain functional-750630 has defined MAC address 52:54:00:99:2a:57 in network mk-functional-750630
I0930 19:59:01.977759   25812 main.go:141] libmachine: (functional-750630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:2a:57", ip: ""} in network mk-functional-750630: {Iface:virbr1 ExpiryTime:2024-09-30 20:55:44 +0000 UTC Type:0 Mac:52:54:00:99:2a:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:functional-750630 Clientid:01:52:54:00:99:2a:57}
I0930 19:59:01.977795   25812 main.go:141] libmachine: (functional-750630) DBG | domain functional-750630 has defined IP address 192.168.39.202 and MAC address 52:54:00:99:2a:57 in network mk-functional-750630
I0930 19:59:01.977845   25812 main.go:141] libmachine: (functional-750630) Calling .GetSSHPort
I0930 19:59:01.978064   25812 main.go:141] libmachine: (functional-750630) Calling .GetSSHKeyPath
I0930 19:59:01.978406   25812 main.go:141] libmachine: (functional-750630) Calling .GetSSHUsername
I0930 19:59:01.978639   25812 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/functional-750630/id_rsa Username:docker}
I0930 19:59:02.132872   25812 ssh_runner.go:195] Run: sudo crictl images --output json
I0930 19:59:02.234295   25812 main.go:141] libmachine: Making call to close driver server
I0930 19:59:02.234313   25812 main.go:141] libmachine: (functional-750630) Calling .Close
I0930 19:59:02.234568   25812 main.go:141] libmachine: Successfully made call to close driver server
I0930 19:59:02.234585   25812 main.go:141] libmachine: Making call to close connection to plugin binary
I0930 19:59:02.234594   25812 main.go:141] libmachine: Making call to close driver server
I0930 19:59:02.234603   25812 main.go:141] libmachine: (functional-750630) Calling .Close
I0930 19:59:02.234812   25812 main.go:141] libmachine: (functional-750630) DBG | Closing plugin on server side
I0930 19:59:02.234860   25812 main.go:141] libmachine: Successfully made call to close driver server
I0930 19:59:02.234873   25812 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)
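The JSON variant is the easiest output to consume programmatically. Below is a small sketch, separate from the test suite, that decodes the fields visible in the output above (id, repoDigests, repoTags, size; note that size is emitted as a string). It assumes a plain `minikube` binary on PATH rather than the job's out/minikube-linux-amd64:

package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
)

// imageInfo mirrors the fields visible in the `image ls --format json` output above.
type imageInfo struct {
    ID          string   `json:"id"`
    RepoDigests []string `json:"repoDigests"`
    RepoTags    []string `json:"repoTags"`
    Size        string   `json:"size"` // reported as a string of bytes
}

func main() {
    out, err := exec.Command("minikube", "-p", "functional-750630",
        "image", "ls", "--format", "json").Output()
    if err != nil {
        panic(err)
    }
    var images []imageInfo
    if err := json.Unmarshal(out, &images); err != nil {
        panic(err)
    }
    for _, img := range images {
        // Print a truncated ID plus any tags, similar to the table view.
        fmt.Printf("%-13.13s %v\n", img.ID, img.RepoTags)
    }
}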

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-750630 image ls --format yaml --alsologtostderr:
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: db877e3eea218c3ba27f72f61d6a40dcdc5aea10e8b871f3c4b4e801ea9afcac
repoDigests:
- localhost/minikube-local-cache-test@sha256:eb0fd075a8da7e213bd6ec5604d9f176724f5148b2ce52f2edfcdde365439658
repoTags:
- localhost/minikube-local-cache-test:functional-750630
size: "3330"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-750630
size: "4943877"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 9527c0f683c3b2f0465019f9f5456f01a0fc0d4d274466831b9910a21d0302cd
repoDigests:
- docker.io/library/nginx@sha256:10b61fc3d8262c8bf44c89aef3d81202ce12b8cba12fff2e32ca5978a2d88c2b
- docker.io/library/nginx@sha256:b5d3f3e104699f0768e5ca8626914c16e52647943c65274d8a9e63072bd015bb
repoTags:
- docker.io/library/nginx:latest
size: "191853881"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-750630 image ls --format yaml --alsologtostderr:
I0930 19:58:57.074609   25504 out.go:345] Setting OutFile to fd 1 ...
I0930 19:58:57.074745   25504 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 19:58:57.074757   25504 out.go:358] Setting ErrFile to fd 2...
I0930 19:58:57.074764   25504 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 19:58:57.075041   25504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
I0930 19:58:57.075931   25504 config.go:182] Loaded profile config "functional-750630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 19:58:57.076097   25504 config.go:182] Loaded profile config "functional-750630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 19:58:57.076686   25504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0930 19:58:57.076754   25504 main.go:141] libmachine: Launching plugin server for driver kvm2
I0930 19:58:57.093071   25504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33257
I0930 19:58:57.093820   25504 main.go:141] libmachine: () Calling .GetVersion
I0930 19:58:57.094522   25504 main.go:141] libmachine: Using API Version  1
I0930 19:58:57.094551   25504 main.go:141] libmachine: () Calling .SetConfigRaw
I0930 19:58:57.094924   25504 main.go:141] libmachine: () Calling .GetMachineName
I0930 19:58:57.095165   25504 main.go:141] libmachine: (functional-750630) Calling .GetState
I0930 19:58:57.097800   25504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0930 19:58:57.097844   25504 main.go:141] libmachine: Launching plugin server for driver kvm2
I0930 19:58:57.113709   25504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46071
I0930 19:58:57.114261   25504 main.go:141] libmachine: () Calling .GetVersion
I0930 19:58:57.114803   25504 main.go:141] libmachine: Using API Version  1
I0930 19:58:57.114827   25504 main.go:141] libmachine: () Calling .SetConfigRaw
I0930 19:58:57.115252   25504 main.go:141] libmachine: () Calling .GetMachineName
I0930 19:58:57.115458   25504 main.go:141] libmachine: (functional-750630) Calling .DriverName
I0930 19:58:57.115717   25504 ssh_runner.go:195] Run: systemctl --version
I0930 19:58:57.115762   25504 main.go:141] libmachine: (functional-750630) Calling .GetSSHHostname
I0930 19:58:57.119113   25504 main.go:141] libmachine: (functional-750630) DBG | domain functional-750630 has defined MAC address 52:54:00:99:2a:57 in network mk-functional-750630
I0930 19:58:57.119581   25504 main.go:141] libmachine: (functional-750630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:2a:57", ip: ""} in network mk-functional-750630: {Iface:virbr1 ExpiryTime:2024-09-30 20:55:44 +0000 UTC Type:0 Mac:52:54:00:99:2a:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:functional-750630 Clientid:01:52:54:00:99:2a:57}
I0930 19:58:57.119629   25504 main.go:141] libmachine: (functional-750630) DBG | domain functional-750630 has defined IP address 192.168.39.202 and MAC address 52:54:00:99:2a:57 in network mk-functional-750630
I0930 19:58:57.119754   25504 main.go:141] libmachine: (functional-750630) Calling .GetSSHPort
I0930 19:58:57.119933   25504 main.go:141] libmachine: (functional-750630) Calling .GetSSHKeyPath
I0930 19:58:57.120102   25504 main.go:141] libmachine: (functional-750630) Calling .GetSSHUsername
I0930 19:58:57.120235   25504 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/functional-750630/id_rsa Username:docker}
I0930 19:58:57.213608   25504 ssh_runner.go:195] Run: sudo crictl images --output json
I0930 19:58:57.253664   25504 main.go:141] libmachine: Making call to close driver server
I0930 19:58:57.253681   25504 main.go:141] libmachine: (functional-750630) Calling .Close
I0930 19:58:57.253965   25504 main.go:141] libmachine: Successfully made call to close driver server
I0930 19:58:57.253978   25504 main.go:141] libmachine: (functional-750630) DBG | Closing plugin on server side
I0930 19:58:57.253984   25504 main.go:141] libmachine: Making call to close connection to plugin binary
I0930 19:58:57.254012   25504 main.go:141] libmachine: Making call to close driver server
I0930 19:58:57.254020   25504 main.go:141] libmachine: (functional-750630) Calling .Close
I0930 19:58:57.254240   25504 main.go:141] libmachine: Successfully made call to close driver server
I0930 19:58:57.254257   25504 main.go:141] libmachine: Making call to close connection to plugin binary
I0930 19:58:57.254333   25504 main.go:141] libmachine: (functional-750630) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-750630 ssh pgrep buildkitd: exit status 1 (192.083331ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 image build -t localhost/my-image:functional-750630 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-750630 image build -t localhost/my-image:functional-750630 testdata/build --alsologtostderr: (4.089348147s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-750630 image build -t localhost/my-image:functional-750630 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f4e55d8dd30
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-750630
--> ffb366838c1
Successfully tagged localhost/my-image:functional-750630
ffb366838c191948b5e0d1fb67b28d7a18013f25cc273f68d571bedcab0f393b
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-750630 image build -t localhost/my-image:functional-750630 testdata/build --alsologtostderr:
I0930 19:58:57.491838   25558 out.go:345] Setting OutFile to fd 1 ...
I0930 19:58:57.491979   25558 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 19:58:57.491988   25558 out.go:358] Setting ErrFile to fd 2...
I0930 19:58:57.491993   25558 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 19:58:57.492159   25558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
I0930 19:58:57.492711   25558 config.go:182] Loaded profile config "functional-750630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 19:58:57.493241   25558 config.go:182] Loaded profile config "functional-750630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 19:58:57.493676   25558 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0930 19:58:57.493720   25558 main.go:141] libmachine: Launching plugin server for driver kvm2
I0930 19:58:57.509361   25558 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41913
I0930 19:58:57.509817   25558 main.go:141] libmachine: () Calling .GetVersion
I0930 19:58:57.510347   25558 main.go:141] libmachine: Using API Version  1
I0930 19:58:57.510366   25558 main.go:141] libmachine: () Calling .SetConfigRaw
I0930 19:58:57.510788   25558 main.go:141] libmachine: () Calling .GetMachineName
I0930 19:58:57.511024   25558 main.go:141] libmachine: (functional-750630) Calling .GetState
I0930 19:58:57.513075   25558 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0930 19:58:57.513162   25558 main.go:141] libmachine: Launching plugin server for driver kvm2
I0930 19:58:57.528244   25558 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43587
I0930 19:58:57.528655   25558 main.go:141] libmachine: () Calling .GetVersion
I0930 19:58:57.529129   25558 main.go:141] libmachine: Using API Version  1
I0930 19:58:57.529150   25558 main.go:141] libmachine: () Calling .SetConfigRaw
I0930 19:58:57.529622   25558 main.go:141] libmachine: () Calling .GetMachineName
I0930 19:58:57.529851   25558 main.go:141] libmachine: (functional-750630) Calling .DriverName
I0930 19:58:57.530053   25558 ssh_runner.go:195] Run: systemctl --version
I0930 19:58:57.530074   25558 main.go:141] libmachine: (functional-750630) Calling .GetSSHHostname
I0930 19:58:57.533224   25558 main.go:141] libmachine: (functional-750630) DBG | domain functional-750630 has defined MAC address 52:54:00:99:2a:57 in network mk-functional-750630
I0930 19:58:57.533655   25558 main.go:141] libmachine: (functional-750630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:2a:57", ip: ""} in network mk-functional-750630: {Iface:virbr1 ExpiryTime:2024-09-30 20:55:44 +0000 UTC Type:0 Mac:52:54:00:99:2a:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:functional-750630 Clientid:01:52:54:00:99:2a:57}
I0930 19:58:57.533684   25558 main.go:141] libmachine: (functional-750630) DBG | domain functional-750630 has defined IP address 192.168.39.202 and MAC address 52:54:00:99:2a:57 in network mk-functional-750630
I0930 19:58:57.533865   25558 main.go:141] libmachine: (functional-750630) Calling .GetSSHPort
I0930 19:58:57.534053   25558 main.go:141] libmachine: (functional-750630) Calling .GetSSHKeyPath
I0930 19:58:57.534190   25558 main.go:141] libmachine: (functional-750630) Calling .GetSSHUsername
I0930 19:58:57.534353   25558 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/functional-750630/id_rsa Username:docker}
I0930 19:58:57.618209   25558 build_images.go:161] Building image from path: /tmp/build.3998563819.tar
I0930 19:58:57.618317   25558 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0930 19:58:57.628888   25558 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3998563819.tar
I0930 19:58:57.634158   25558 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3998563819.tar: stat -c "%s %y" /var/lib/minikube/build/build.3998563819.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3998563819.tar': No such file or directory
I0930 19:58:57.634195   25558 ssh_runner.go:362] scp /tmp/build.3998563819.tar --> /var/lib/minikube/build/build.3998563819.tar (3072 bytes)
I0930 19:58:57.660196   25558 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3998563819
I0930 19:58:57.670216   25558 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3998563819 -xf /var/lib/minikube/build/build.3998563819.tar
I0930 19:58:57.683445   25558 crio.go:315] Building image: /var/lib/minikube/build/build.3998563819
I0930 19:58:57.683545   25558 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-750630 /var/lib/minikube/build/build.3998563819 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0930 19:59:01.470475   25558 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-750630 /var/lib/minikube/build/build.3998563819 --cgroup-manager=cgroupfs: (3.786906427s)
I0930 19:59:01.470563   25558 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3998563819
I0930 19:59:01.503991   25558 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3998563819.tar
I0930 19:59:01.531025   25558 build_images.go:217] Built localhost/my-image:functional-750630 from /tmp/build.3998563819.tar
I0930 19:59:01.531059   25558 build_images.go:133] succeeded building to: functional-750630
I0930 19:59:01.531065   25558 build_images.go:134] failed building to: 
I0930 19:59:01.531118   25558 main.go:141] libmachine: Making call to close driver server
I0930 19:59:01.531131   25558 main.go:141] libmachine: (functional-750630) Calling .Close
I0930 19:59:01.531441   25558 main.go:141] libmachine: Successfully made call to close driver server
I0930 19:59:01.531458   25558 main.go:141] libmachine: Making call to close connection to plugin binary
I0930 19:59:01.531469   25558 main.go:141] libmachine: Making call to close driver server
I0930 19:59:01.531476   25558 main.go:141] libmachine: (functional-750630) Calling .Close
I0930 19:59:01.533138   25558 main.go:141] libmachine: (functional-750630) DBG | Closing plugin on server side
I0930 19:59:01.533145   25558 main.go:141] libmachine: Successfully made call to close driver server
I0930 19:59:01.533159   25558 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.62s)
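The build test above first probes for buildkitd over ssh (the exit status 1 only means it is not running, so on the crio runtime the build falls back to podman, as the `sudo podman build ... --cgroup-manager=cgroupfs` line shows), then builds the testdata/build context and re-lists images. A rough equivalent outside the harness, assuming a `minikube` binary on PATH and a local build context directory named testdata/build:

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // Probe for buildkitd on the node; a non-zero exit simply means it is not
    // running, which is the expected state on this crio-based VM.
    if err := exec.Command("minikube", "-p", "functional-750630",
        "ssh", "pgrep", "buildkitd").Run(); err != nil {
        fmt.Println("buildkitd not running on the node:", err)
    }
    // Build the local context inside the VM and tag it, as the test does.
    build := exec.Command("minikube", "-p", "functional-750630",
        "image", "build", "-t", "localhost/my-image:functional-750630",
        "testdata/build", "--alsologtostderr")
    if out, err := build.CombinedOutput(); err != nil {
        panic(fmt.Sprintf("image build failed: %v\n%s", err, out))
    }
    // Confirm the new tag shows up in the runtime's image list.
    out, err := exec.Command("minikube", "-p", "functional-750630", "image", "ls").Output()
    if err != nil {
        panic(err)
    }
    fmt.Printf("%s", out)
}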

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.843220701s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-750630
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.87s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 image load --daemon kicbase/echo-server:functional-750630 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-750630 image load --daemon kicbase/echo-server:functional-750630 --alsologtostderr: (1.253424685s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.48s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 image load --daemon kicbase/echo-server:functional-750630 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.56s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-750630
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 image load --daemon kicbase/echo-server:functional-750630 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-750630 image load --daemon kicbase/echo-server:functional-750630 --alsologtostderr: (7.45128249s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.55s)
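The load tests above re-tag an image in the host Docker daemon and then push it into the cluster's container runtime with `image load --daemon`. A hedged sketch of the same flow; `docker` and `minikube` on PATH are assumptions, the profile name comes from the log, and the `must` helper is only for brevity (it is not part of the suite):

package main

import (
    "fmt"
    "os/exec"
)

// must runs a command and panics with its combined output on failure.
func must(name string, args ...string) {
    if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
        panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
    }
}

func main() {
    // Pull and re-tag in the host daemon, then load the tag into the cluster
    // runtime and list images to confirm it arrived.
    must("docker", "pull", "kicbase/echo-server:latest")
    must("docker", "tag", "kicbase/echo-server:latest", "kicbase/echo-server:functional-750630")
    must("minikube", "-p", "functional-750630", "image", "load", "--daemon",
        "kicbase/echo-server:functional-750630")
    must("minikube", "-p", "functional-750630", "image", "ls")
}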

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 image save kicbase/echo-server:functional-750630 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 image rm kicbase/echo-server:functional-750630 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.16s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-750630 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.585244178s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.83s)
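ImageSaveToFile, ImageRemove, and ImageLoadFromFile together exercise a save/remove/load round trip through a tarball. A compact sketch of that round trip follows; /tmp/echo-server-save.tar stands in for the Jenkins workspace path used by the job, `minikube` on PATH is assumed, and the `run` helper is illustrative rather than part of the suite:

package main

import (
    "fmt"
    "os/exec"
)

// run invokes minikube against the functional-750630 profile and panics on failure.
func run(args ...string) {
    cmd := exec.Command("minikube", append([]string{"-p", "functional-750630"}, args...)...)
    if out, err := cmd.CombinedOutput(); err != nil {
        panic(fmt.Sprintf("minikube %v: %v\n%s", args, err, out))
    }
}

func main() {
    // Save the tagged image to a tarball, remove it from the cluster runtime,
    // then load it back from the file, mirroring the three tests above.
    run("image", "save", "kicbase/echo-server:functional-750630", "/tmp/echo-server-save.tar")
    run("image", "rm", "kicbase/echo-server:functional-750630")
    run("image", "load", "/tmp/echo-server-save.tar")
    run("image", "ls")
}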

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-750630
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 image save --daemon kicbase/echo-server:functional-750630 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-750630
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (7.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-750630 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-750630 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-r2hql" [ef0bc9b5-9d2e-4886-a426-650a38e84534] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-r2hql" [ef0bc9b5-9d2e-4886-a426-650a38e84534] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004651056s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.19s)
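The deploy step above is plain kubectl: create the hello-node deployment, expose it as a NodePort service, and wait for the pod labelled app=hello-node to become Ready. A minimal sketch of the same flow, using `kubectl wait` in place of the harness's pod-matching loop (the 10m timeout matches the wait budget shown above; the `kubectl` wrapper is illustrative):

package main

import (
    "fmt"
    "os/exec"
)

// kubectl is an illustrative wrapper that runs kubectl against the functional-750630 context.
func kubectl(args ...string) {
    cmd := exec.Command("kubectl", append([]string{"--context", "functional-750630"}, args...)...)
    out, err := cmd.CombinedOutput()
    fmt.Printf("%s", out)
    if err != nil {
        panic(err)
    }
}

func main() {
    // Create and expose the hello-node deployment, then block until its pod is Ready.
    kubectl("create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver:1.8")
    kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
    kubectl("wait", "--for=condition=ready", "pod", "-l", "app=hello-node", "--timeout=10m")
}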

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "326.327023ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "49.774663ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "344.647077ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "47.193078ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)
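The `Took "..." to run "..."` lines above come from timing each profile command. A minimal sketch of that measurement, assuming a `minikube` binary on PATH instead of the job's out/minikube-linux-amd64:

package main

import (
    "fmt"
    "os/exec"
    "time"
)

func main() {
    // Time one profile listing the way the harness reports command durations.
    args := []string{"profile", "list", "-o", "json", "--light"}
    start := time.Now()
    if out, err := exec.Command("minikube", args...).CombinedOutput(); err != nil {
        panic(fmt.Sprintf("%v\n%s", err, out))
    }
    fmt.Printf("Took %q to run %q\n", time.Since(start).String(), "minikube profile list -o json --light")
}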

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-750630 /tmp/TestFunctionalparallelMountCmdany-port1687983193/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727726332859547552" to /tmp/TestFunctionalparallelMountCmdany-port1687983193/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727726332859547552" to /tmp/TestFunctionalparallelMountCmdany-port1687983193/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727726332859547552" to /tmp/TestFunctionalparallelMountCmdany-port1687983193/001/test-1727726332859547552
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-750630 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (202.626955ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0930 19:58:53.062474   14875 retry.go:31] will retry after 380.743755ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 30 19:58 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 30 19:58 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 30 19:58 test-1727726332859547552
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh cat /mount-9p/test-1727726332859547552
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-750630 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4824b23a-5bf5-4f15-998b-ee456bdc4786] Pending
helpers_test.go:344: "busybox-mount" [4824b23a-5bf5-4f15-998b-ee456bdc4786] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4824b23a-5bf5-4f15-998b-ee456bdc4786] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4824b23a-5bf5-4f15-998b-ee456bdc4786] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004170394s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-750630 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-750630 /tmp/TestFunctionalparallelMountCmdany-port1687983193/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.58s)
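The findmnt failure and retry above (retry.go:31 backing off 380ms before the second attempt) is a plain poll-until-mounted loop: run findmnt over `minikube ssh`, treat a non-zero exit as "not mounted yet", and try again. A minimal Go sketch of that pattern follows, using the binary path and profile name from this run but a fixed, placeholder backoff; it is an illustration, not the harness's actual retry helper.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls `minikube ssh "findmnt -T <dir> | grep 9p"` until the 9p
// mount shows up or the attempts run out. Binary path and profile come from
// this run; attempts and backoff are placeholder values.
func waitForMount(profile, dir string, attempts int, backoff time.Duration) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", dir))
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("mounted:\n%s", out)
			return nil
		}
		time.Sleep(backoff) // the logged 380.743755ms suggests a growing, jittered delay in the real helper
	}
	return fmt.Errorf("%s never showed a 9p mount after %d attempts", dir, attempts)
}

func main() {
	if err := waitForMount("functional-750630", "/mount-9p", 5, 500*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}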

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 service list -o json
functional_test.go:1494: Took "457.606174ms" to run "out/minikube-linux-amd64 -p functional-750630 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.202:32230
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.202:32230
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-750630 /tmp/TestFunctionalparallelMountCmdspecific-port2694124630/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-750630 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (229.823418ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0930 19:59:01.671311   14875 retry.go:31] will retry after 375.718438ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-750630 /tmp/TestFunctionalparallelMountCmdspecific-port2694124630/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-750630 ssh "sudo umount -f /mount-9p": exit status 1 (220.676011ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-750630 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-750630 /tmp/TestFunctionalparallelMountCmdspecific-port2694124630/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.86s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-750630 /tmp/TestFunctionalparallelMountCmdVerifyCleanup474807194/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-750630 /tmp/TestFunctionalparallelMountCmdVerifyCleanup474807194/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-750630 /tmp/TestFunctionalparallelMountCmdVerifyCleanup474807194/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-750630 ssh "findmnt -T" /mount1: exit status 1 (294.157496ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0930 19:59:03.596144   14875 retry.go:31] will retry after 636.025489ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-750630 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-750630 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-750630 /tmp/TestFunctionalparallelMountCmdVerifyCleanup474807194/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-750630 /tmp/TestFunctionalparallelMountCmdVerifyCleanup474807194/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-750630 /tmp/TestFunctionalparallelMountCmdVerifyCleanup474807194/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
2024/09/30 19:59:06 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-750630
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-750630
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-750630
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (193.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-805293 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0930 20:00:55.310577   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:01:23.014308   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-805293 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m12.425716897s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (193.13s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-805293 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-805293 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-805293 -- rollout status deployment/busybox: (4.924188165s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-805293 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-805293 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-805293 -- exec busybox-7dff88458-lshpm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-805293 -- exec busybox-7dff88458-nfncv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-805293 -- exec busybox-7dff88458-r27jf -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-805293 -- exec busybox-7dff88458-lshpm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-805293 -- exec busybox-7dff88458-nfncv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-805293 -- exec busybox-7dff88458-r27jf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-805293 -- exec busybox-7dff88458-lshpm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-805293 -- exec busybox-7dff88458-nfncv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-805293 -- exec busybox-7dff88458-r27jf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.17s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-805293 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-805293 -- exec busybox-7dff88458-lshpm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-805293 -- exec busybox-7dff88458-lshpm -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-805293 -- exec busybox-7dff88458-nfncv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-805293 -- exec busybox-7dff88458-nfncv -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-805293 -- exec busybox-7dff88458-r27jf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-805293 -- exec busybox-7dff88458-r27jf -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.23s)
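The host-IP lookup above pipes busybox nslookup through awk 'NR==5' and cut -d' ' -f3, i.e. "take line 5 of the output and return its third space-separated field", which on the busybox layout these images appear to ship is the resolved address of host.minikube.internal that the subsequent ping targets. A small Go sketch of that extraction follows; the sample output is illustrative and the line-5 assumption is the fragile part, since other nslookup builds format differently.

package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mimics `awk 'NR==5' | cut -d' ' -f3`: take the fifth line
// of the nslookup output and return its third space-separated field.
func hostIPFromNslookup(out string) (string, error) {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return "", fmt.Errorf("expected at least 5 lines, got %d", len(lines))
	}
	fields := strings.Split(lines[4], " ") // cut-style split: no merging of repeated delimiters
	if len(fields) < 3 {
		return "", fmt.Errorf("line 5 has fewer than 3 fields: %q", lines[4])
	}
	return fields[2], nil
}

func main() {
	// Illustrative busybox-style output; real output varies by nslookup build.
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.39.1 host.minikube.internal\n"
	ip, err := hostIPFromNslookup(sample)
	fmt.Println(ip, err) // 192.168.39.1 <nil>
}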

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (56.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-805293 -v=7 --alsologtostderr
E0930 20:03:28.936194   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:03:28.942626   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:03:28.954104   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:03:28.975582   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:03:29.016992   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:03:29.098882   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:03:29.260298   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:03:29.582118   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:03:30.223708   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:03:31.505467   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-805293 -v=7 --alsologtostderr: (56.083254788s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 status -v=7 --alsologtostderr
E0930 20:03:34.066775   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.93s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-805293 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 cp testdata/cp-test.txt ha-805293:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 cp ha-805293:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3144947660/001/cp-test_ha-805293.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 cp ha-805293:/home/docker/cp-test.txt ha-805293-m02:/home/docker/cp-test_ha-805293_ha-805293-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293-m02 "sudo cat /home/docker/cp-test_ha-805293_ha-805293-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 cp ha-805293:/home/docker/cp-test.txt ha-805293-m03:/home/docker/cp-test_ha-805293_ha-805293-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293-m03 "sudo cat /home/docker/cp-test_ha-805293_ha-805293-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 cp ha-805293:/home/docker/cp-test.txt ha-805293-m04:/home/docker/cp-test_ha-805293_ha-805293-m04.txt
E0930 20:03:39.188389   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293-m04 "sudo cat /home/docker/cp-test_ha-805293_ha-805293-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 cp testdata/cp-test.txt ha-805293-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 cp ha-805293-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3144947660/001/cp-test_ha-805293-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 cp ha-805293-m02:/home/docker/cp-test.txt ha-805293:/home/docker/cp-test_ha-805293-m02_ha-805293.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293 "sudo cat /home/docker/cp-test_ha-805293-m02_ha-805293.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 cp ha-805293-m02:/home/docker/cp-test.txt ha-805293-m03:/home/docker/cp-test_ha-805293-m02_ha-805293-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293-m03 "sudo cat /home/docker/cp-test_ha-805293-m02_ha-805293-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 cp ha-805293-m02:/home/docker/cp-test.txt ha-805293-m04:/home/docker/cp-test_ha-805293-m02_ha-805293-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293-m04 "sudo cat /home/docker/cp-test_ha-805293-m02_ha-805293-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 cp testdata/cp-test.txt ha-805293-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 cp ha-805293-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3144947660/001/cp-test_ha-805293-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 cp ha-805293-m03:/home/docker/cp-test.txt ha-805293:/home/docker/cp-test_ha-805293-m03_ha-805293.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293 "sudo cat /home/docker/cp-test_ha-805293-m03_ha-805293.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 cp ha-805293-m03:/home/docker/cp-test.txt ha-805293-m02:/home/docker/cp-test_ha-805293-m03_ha-805293-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293-m02 "sudo cat /home/docker/cp-test_ha-805293-m03_ha-805293-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 cp ha-805293-m03:/home/docker/cp-test.txt ha-805293-m04:/home/docker/cp-test_ha-805293-m03_ha-805293-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293-m04 "sudo cat /home/docker/cp-test_ha-805293-m03_ha-805293-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 cp testdata/cp-test.txt ha-805293-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3144947660/001/cp-test_ha-805293-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt ha-805293:/home/docker/cp-test_ha-805293-m04_ha-805293.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293 "sudo cat /home/docker/cp-test_ha-805293-m04_ha-805293.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt ha-805293-m02:/home/docker/cp-test_ha-805293-m04_ha-805293-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293-m02 "sudo cat /home/docker/cp-test_ha-805293-m04_ha-805293-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 cp ha-805293-m04:/home/docker/cp-test.txt ha-805293-m03:/home/docker/cp-test_ha-805293-m04_ha-805293-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 ssh -n ha-805293-m03 "sudo cat /home/docker/cp-test_ha-805293-m04_ha-805293-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.85s)
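The long block above is an all-pairs copy check: each node is seeded with testdata/cp-test.txt, that file is then copied from it to every other node, and the result is read back with `ssh ... sudo cat` so the contents can be compared. A compressed Go sketch of that matrix follows, using the binary path and node names from this run; it is a reconstruction for illustration, not the code in helpers_test.go.

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary for profile p and returns combined output.
func run(p string, args ...string) (string, error) {
	full := append([]string{"-p", p}, args...)
	out, err := exec.Command("out/minikube-linux-amd64", full...).CombinedOutput()
	return string(out), err
}

// verifyCopyMatrix mirrors the pattern in the log: seed every node with
// testdata/cp-test.txt, copy it from each node to every other node, and cat it
// back over ssh so the caller can compare contents.
func verifyCopyMatrix(profile string, nodes []string) error {
	for _, src := range nodes {
		if _, err := run(profile, "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt"); err != nil {
			return fmt.Errorf("seeding %s: %w", src, err)
		}
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			remote := fmt.Sprintf("%s:/home/docker/cp-test_%s_%s.txt", dst, src, dst)
			if _, err := run(profile, "cp", src+":/home/docker/cp-test.txt", remote); err != nil {
				return fmt.Errorf("copy %s -> %s: %w", src, dst, err)
			}
			if _, err := run(profile, "ssh", "-n", dst, fmt.Sprintf("sudo cat /home/docker/cp-test_%s_%s.txt", src, dst)); err != nil {
				return fmt.Errorf("readback on %s: %w", dst, err)
			}
		}
	}
	return nil
}

func main() {
	nodes := []string{"ha-805293", "ha-805293-m02", "ha-805293-m03", "ha-805293-m04"}
	if err := verifyCopyMatrix("ha-805293", nodes); err != nil {
		fmt.Println(err)
	}
}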

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (16.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-805293 node delete m03 -v=7 --alsologtostderr: (15.99219713s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.72s)
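The go-template passed to kubectl above prints the status of every node condition whose type is "Ready", one per line, which is what the test then checks for the surviving nodes. Because kubectl evaluates the template over plain JSON, the same template can be exercised locally with text/template and a hand-built map; the sample data below is illustrative, not taken from this run.

package main

import (
	"os"
	"text/template"
)

func main() {
	// The template the test feeds to kubectl: print the status of every node
	// condition whose type is "Ready".
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Hand-built stand-in for the JSON kubectl returns; only the fields the
	// template touches are present.
	nodes := map[string]any{
		"items": []map[string]any{
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "Ready", "status": "True"},
			}}},
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "MemoryPressure", "status": "False"},
				{"type": "Ready", "status": "True"},
			}}},
		},
	}

	t := template.Must(template.New("ready").Parse(tmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
	// Output: one " True" line per node whose Ready condition is True.
}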

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (318.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-805293 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0930 20:15:55.311833   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:18:28.938716   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:19:51.999387   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:20:55.313033   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-805293 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m17.918586843s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (318.66s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (78.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-805293 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-805293 --control-plane -v=7 --alsologtostderr: (1m17.531054613s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-805293 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                    
x
+
TestJSONOutput/start/Command (90.74s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-757515 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0930 20:23:28.936164   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-757515 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m30.734558462s)
--- PASS: TestJSONOutput/start/Command (90.74s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-757515 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-757515 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.64s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-757515 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-757515 --output=json --user=testUser: (6.638345108s)
--- PASS: TestJSONOutput/stop/Command (6.64s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-644909 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-644909 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.750766ms)

-- stdout --
	{"specversion":"1.0","id":"e392005f-5b8f-43c2-81af-51809123c993","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-644909] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b7fc585a-0eac-4e13-9ba0-126768559020","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19736"}}
	{"specversion":"1.0","id":"920c1a08-a4d8-4c1c-a664-26602b6ab78e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"af93b2a3-867a-45f3-816f-86bcfe76ead2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig"}}
	{"specversion":"1.0","id":"dccd1f11-5829-4a86-8944-204cc65b37e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube"}}
	{"specversion":"1.0","id":"80b04b0a-8ca4-4a63-acf1-480d612dc493","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9581cc26-9206-4aff-8d9b-82cb5ff770d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6ccb59fb-7d4c-4735-a153-81195a0baa62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-644909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-644909
--- PASS: TestErrorJSONOutput (0.20s)
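Each line in the -- stdout -- block above is a CloudEvents-style JSON object (specversion, id, source, type, data); the last one is the io.k8s.sigs.minikube.error event carrying the exit code the test asserts on. A minimal sketch of decoding that event with encoding/json follows; the struct is shaped after the fields visible in this log, not copied from minikube's own types.

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the fields visible in the captured --output=json lines.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The error event from the captured stdout above, trimmed to its string fields.
	line := `{"specversion":"1.0","id":"6ccb59fb-7d4c-4735-a153-81195a0baa62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exit code %s)\n", e.Data["name"], e.Data["message"], e.Data["exitcode"])
}

Consumers of --output=json typically switch on the type field this way before looking inside data.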

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (90.11s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-947986 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-947986 --driver=kvm2  --container-runtime=crio: (43.720715838s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-961138 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-961138 --driver=kvm2  --container-runtime=crio: (43.553143153s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-947986
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-961138
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-961138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-961138
helpers_test.go:175: Cleaning up "first-947986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-947986
--- PASS: TestMinikubeProfile (90.11s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (25.35s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-856605 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0930 20:25:55.316358   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-856605 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.350048562s)
--- PASS: TestMountStart/serial/StartWithMountFirst (25.35s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-856605 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-856605 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (26.69s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-871322 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-871322 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.69209153s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.69s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-871322 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-871322 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-856605 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-871322 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-871322 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-871322
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-871322: (1.277461738s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (22.94s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-871322
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-871322: (21.937195043s)
--- PASS: TestMountStart/serial/RestartStopped (22.94s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-871322 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-871322 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (138.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-103579 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0930 20:28:28.936098   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:28:58.377612   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-103579 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m18.012527325s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (138.43s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (6.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103579 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103579 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-103579 -- rollout status deployment/busybox: (4.797354011s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103579 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103579 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103579 -- exec busybox-7dff88458-shfkv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103579 -- exec busybox-7dff88458-vxgwt -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103579 -- exec busybox-7dff88458-shfkv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103579 -- exec busybox-7dff88458-vxgwt -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103579 -- exec busybox-7dff88458-shfkv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103579 -- exec busybox-7dff88458-vxgwt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.21s)
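
The DNS checks above boil down to running nslookup inside each busybox pod for three targets. A minimal sketch of the same loop, assuming this run's profile and pod names (the pod names change on every rollout):

```go
// Sketch only: repeat the nslookup checks from the log above against each
// busybox pod. Profile and pod names are from this particular run.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7dff88458-shfkv", "busybox-7dff88458-vxgwt"} // vary per rollout
	targets := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}

	for _, pod := range pods {
		for _, target := range targets {
			// Same command shape as the test: minikube kubectl -p <profile> -- exec <pod> -- nslookup <target>
			cmd := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", "multinode-103579",
				"--", "exec", pod, "--", "nslookup", target)
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("FAIL %s -> %s: %v\n%s", pod, target, err, out)
				continue
			}
			fmt.Printf("ok   %s -> %s\n", pod, target)
		}
	}
}
```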

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103579 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103579 -- exec busybox-7dff88458-shfkv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103579 -- exec busybox-7dff88458-shfkv -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103579 -- exec busybox-7dff88458-vxgwt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103579 -- exec busybox-7dff88458-vxgwt -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)
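
For reference, a hedged sketch of the host-reachability check above: resolve host.minikube.internal inside a pod with the same shell pipeline, then ping the resulting address once. The profile, pod name, and the 192.168.39.1 gateway are specific to this run.

```go
// Sketch only: mirror the "resolve host.minikube.internal, then ping it"
// check from the log above. Names are from this run.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile, pod := "multinode-103579", "busybox-7dff88458-shfkv"

	// Same pipeline the test uses to pull the host IP out of nslookup output.
	resolve := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", profile, "--",
		"exec", pod, "--", "sh", "-c",
		"nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	out, err := resolve.Output()
	if err != nil {
		panic(err)
	}
	hostIP := strings.TrimSpace(string(out)) // 192.168.39.1 in this run

	ping := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", profile, "--",
		"exec", pod, "--", "sh", "-c", fmt.Sprintf("ping -c 1 %s", hostIP))
	if err := ping.Run(); err != nil {
		fmt.Println("host not reachable from pod:", err)
		return
	}
	fmt.Println("host", hostIP, "reachable from", pod)
}
```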

                                                
                                    
TestMultiNode/serial/AddNode (48.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-103579 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-103579 -v 3 --alsologtostderr: (47.599976815s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.18s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-103579 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 cp testdata/cp-test.txt multinode-103579:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 ssh -n multinode-103579 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 cp multinode-103579:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3699104417/001/cp-test_multinode-103579.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 ssh -n multinode-103579 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 cp multinode-103579:/home/docker/cp-test.txt multinode-103579-m02:/home/docker/cp-test_multinode-103579_multinode-103579-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 ssh -n multinode-103579 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 ssh -n multinode-103579-m02 "sudo cat /home/docker/cp-test_multinode-103579_multinode-103579-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 cp multinode-103579:/home/docker/cp-test.txt multinode-103579-m03:/home/docker/cp-test_multinode-103579_multinode-103579-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 ssh -n multinode-103579 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 ssh -n multinode-103579-m03 "sudo cat /home/docker/cp-test_multinode-103579_multinode-103579-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 cp testdata/cp-test.txt multinode-103579-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 ssh -n multinode-103579-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 cp multinode-103579-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3699104417/001/cp-test_multinode-103579-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 ssh -n multinode-103579-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 cp multinode-103579-m02:/home/docker/cp-test.txt multinode-103579:/home/docker/cp-test_multinode-103579-m02_multinode-103579.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 ssh -n multinode-103579-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 ssh -n multinode-103579 "sudo cat /home/docker/cp-test_multinode-103579-m02_multinode-103579.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 cp multinode-103579-m02:/home/docker/cp-test.txt multinode-103579-m03:/home/docker/cp-test_multinode-103579-m02_multinode-103579-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 ssh -n multinode-103579-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 ssh -n multinode-103579-m03 "sudo cat /home/docker/cp-test_multinode-103579-m02_multinode-103579-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 cp testdata/cp-test.txt multinode-103579-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 ssh -n multinode-103579-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 cp multinode-103579-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3699104417/001/cp-test_multinode-103579-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 ssh -n multinode-103579-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 cp multinode-103579-m03:/home/docker/cp-test.txt multinode-103579:/home/docker/cp-test_multinode-103579-m03_multinode-103579.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 ssh -n multinode-103579-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 ssh -n multinode-103579 "sudo cat /home/docker/cp-test_multinode-103579-m03_multinode-103579.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 cp multinode-103579-m03:/home/docker/cp-test.txt multinode-103579-m02:/home/docker/cp-test_multinode-103579-m03_multinode-103579-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 ssh -n multinode-103579-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 ssh -n multinode-103579-m02 "sudo cat /home/docker/cp-test_multinode-103579-m03_multinode-103579-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.21s)
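
The copy test above is a cp-then-cat round trip repeated across every node pair. A small sketch of one leg of it, assuming this run's profile and node names:

```go
// Sketch only: copy a local file onto a node with `minikube cp`, read it back
// over `minikube ssh -n`, and compare, as the log above does for each node.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	profile, node := "multinode-103579", "multinode-103579-m02"
	local, remote := "testdata/cp-test.txt", "/home/docker/cp-test.txt"

	// minikube -p <profile> cp <local> <node>:<remote>
	if err := exec.Command("out/minikube-linux-amd64", "-p", profile, "cp",
		local, node+":"+remote).Run(); err != nil {
		panic(err)
	}

	// minikube -p <profile> ssh -n <node> "sudo cat <remote>"
	got, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
		"-n", node, "sudo cat "+remote).Output()
	if err != nil {
		panic(err)
	}
	want, _ := os.ReadFile(local)
	fmt.Println("round trip ok:", string(got) == string(want))
}
```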

                                                
                                    
TestMultiNode/serial/StopNode (2.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-103579 node stop m03: (1.40898248s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-103579 status: exit status 7 (431.355139ms)

                                                
                                                
-- stdout --
	multinode-103579
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-103579-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-103579-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-103579 status --alsologtostderr: exit status 7 (423.246049ms)

                                                
                                                
-- stdout --
	multinode-103579
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-103579-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-103579-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 20:30:23.947891   43504 out.go:345] Setting OutFile to fd 1 ...
	I0930 20:30:23.947990   43504 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:30:23.947998   43504 out.go:358] Setting ErrFile to fd 2...
	I0930 20:30:23.948002   43504 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:30:23.948218   43504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 20:30:23.948380   43504 out.go:352] Setting JSON to false
	I0930 20:30:23.948405   43504 mustload.go:65] Loading cluster: multinode-103579
	I0930 20:30:23.948548   43504 notify.go:220] Checking for updates...
	I0930 20:30:23.948798   43504 config.go:182] Loaded profile config "multinode-103579": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:30:23.948813   43504 status.go:174] checking status of multinode-103579 ...
	I0930 20:30:23.949229   43504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:30:23.949271   43504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:30:23.966251   43504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38927
	I0930 20:30:23.966717   43504 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:30:23.967342   43504 main.go:141] libmachine: Using API Version  1
	I0930 20:30:23.967368   43504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:30:23.967772   43504 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:30:23.967965   43504 main.go:141] libmachine: (multinode-103579) Calling .GetState
	I0930 20:30:23.969724   43504 status.go:371] multinode-103579 host status = "Running" (err=<nil>)
	I0930 20:30:23.969737   43504 host.go:66] Checking if "multinode-103579" exists ...
	I0930 20:30:23.970160   43504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:30:23.970211   43504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:30:23.985749   43504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42481
	I0930 20:30:23.986228   43504 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:30:23.986816   43504 main.go:141] libmachine: Using API Version  1
	I0930 20:30:23.986841   43504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:30:23.987161   43504 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:30:23.987359   43504 main.go:141] libmachine: (multinode-103579) Calling .GetIP
	I0930 20:30:23.990516   43504 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:30:23.990946   43504 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:30:23.990966   43504 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:30:23.991116   43504 host.go:66] Checking if "multinode-103579" exists ...
	I0930 20:30:23.991542   43504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:30:23.991590   43504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:30:24.007267   43504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38621
	I0930 20:30:24.007755   43504 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:30:24.008249   43504 main.go:141] libmachine: Using API Version  1
	I0930 20:30:24.008275   43504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:30:24.008621   43504 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:30:24.008819   43504 main.go:141] libmachine: (multinode-103579) Calling .DriverName
	I0930 20:30:24.009020   43504 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 20:30:24.009039   43504 main.go:141] libmachine: (multinode-103579) Calling .GetSSHHostname
	I0930 20:30:24.012159   43504 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:30:24.012555   43504 main.go:141] libmachine: (multinode-103579) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:a2:66", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:27:14 +0000 UTC Type:0 Mac:52:54:00:b6:a2:66 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-103579 Clientid:01:52:54:00:b6:a2:66}
	I0930 20:30:24.012576   43504 main.go:141] libmachine: (multinode-103579) DBG | domain multinode-103579 has defined IP address 192.168.39.58 and MAC address 52:54:00:b6:a2:66 in network mk-multinode-103579
	I0930 20:30:24.012751   43504 main.go:141] libmachine: (multinode-103579) Calling .GetSSHPort
	I0930 20:30:24.012922   43504 main.go:141] libmachine: (multinode-103579) Calling .GetSSHKeyPath
	I0930 20:30:24.013051   43504 main.go:141] libmachine: (multinode-103579) Calling .GetSSHUsername
	I0930 20:30:24.013189   43504 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/multinode-103579/id_rsa Username:docker}
	I0930 20:30:24.094325   43504 ssh_runner.go:195] Run: systemctl --version
	I0930 20:30:24.100668   43504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 20:30:24.113703   43504 kubeconfig.go:125] found "multinode-103579" server: "https://192.168.39.58:8443"
	I0930 20:30:24.113731   43504 api_server.go:166] Checking apiserver status ...
	I0930 20:30:24.113761   43504 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 20:30:24.129280   43504 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1110/cgroup
	W0930 20:30:24.138037   43504 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1110/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0930 20:30:24.138096   43504 ssh_runner.go:195] Run: ls
	I0930 20:30:24.142603   43504 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0930 20:30:24.147755   43504 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I0930 20:30:24.147775   43504 status.go:463] multinode-103579 apiserver status = Running (err=<nil>)
	I0930 20:30:24.147784   43504 status.go:176] multinode-103579 status: &{Name:multinode-103579 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 20:30:24.147797   43504 status.go:174] checking status of multinode-103579-m02 ...
	I0930 20:30:24.148077   43504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:30:24.148113   43504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:30:24.163557   43504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41943
	I0930 20:30:24.164014   43504 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:30:24.164523   43504 main.go:141] libmachine: Using API Version  1
	I0930 20:30:24.164548   43504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:30:24.164846   43504 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:30:24.165058   43504 main.go:141] libmachine: (multinode-103579-m02) Calling .GetState
	I0930 20:30:24.166651   43504 status.go:371] multinode-103579-m02 host status = "Running" (err=<nil>)
	I0930 20:30:24.166664   43504 host.go:66] Checking if "multinode-103579-m02" exists ...
	I0930 20:30:24.166990   43504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:30:24.167022   43504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:30:24.182426   43504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43145
	I0930 20:30:24.182923   43504 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:30:24.183495   43504 main.go:141] libmachine: Using API Version  1
	I0930 20:30:24.183519   43504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:30:24.183905   43504 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:30:24.184089   43504 main.go:141] libmachine: (multinode-103579-m02) Calling .GetIP
	I0930 20:30:24.186920   43504 main.go:141] libmachine: (multinode-103579-m02) DBG | domain multinode-103579-m02 has defined MAC address 52:54:00:d3:38:7d in network mk-multinode-103579
	I0930 20:30:24.187338   43504 main.go:141] libmachine: (multinode-103579-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:38:7d", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:28:41 +0000 UTC Type:0 Mac:52:54:00:d3:38:7d Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-103579-m02 Clientid:01:52:54:00:d3:38:7d}
	I0930 20:30:24.187360   43504 main.go:141] libmachine: (multinode-103579-m02) DBG | domain multinode-103579-m02 has defined IP address 192.168.39.212 and MAC address 52:54:00:d3:38:7d in network mk-multinode-103579
	I0930 20:30:24.187501   43504 host.go:66] Checking if "multinode-103579-m02" exists ...
	I0930 20:30:24.187822   43504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:30:24.187860   43504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:30:24.203334   43504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37559
	I0930 20:30:24.203774   43504 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:30:24.204251   43504 main.go:141] libmachine: Using API Version  1
	I0930 20:30:24.204276   43504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:30:24.204554   43504 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:30:24.204749   43504 main.go:141] libmachine: (multinode-103579-m02) Calling .DriverName
	I0930 20:30:24.204972   43504 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 20:30:24.205002   43504 main.go:141] libmachine: (multinode-103579-m02) Calling .GetSSHHostname
	I0930 20:30:24.208534   43504 main.go:141] libmachine: (multinode-103579-m02) DBG | domain multinode-103579-m02 has defined MAC address 52:54:00:d3:38:7d in network mk-multinode-103579
	I0930 20:30:24.208977   43504 main.go:141] libmachine: (multinode-103579-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:38:7d", ip: ""} in network mk-multinode-103579: {Iface:virbr1 ExpiryTime:2024-09-30 21:28:41 +0000 UTC Type:0 Mac:52:54:00:d3:38:7d Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-103579-m02 Clientid:01:52:54:00:d3:38:7d}
	I0930 20:30:24.209002   43504 main.go:141] libmachine: (multinode-103579-m02) DBG | domain multinode-103579-m02 has defined IP address 192.168.39.212 and MAC address 52:54:00:d3:38:7d in network mk-multinode-103579
	I0930 20:30:24.209202   43504 main.go:141] libmachine: (multinode-103579-m02) Calling .GetSSHPort
	I0930 20:30:24.209328   43504 main.go:141] libmachine: (multinode-103579-m02) Calling .GetSSHKeyPath
	I0930 20:30:24.209432   43504 main.go:141] libmachine: (multinode-103579-m02) Calling .GetSSHUsername
	I0930 20:30:24.209569   43504 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-7672/.minikube/machines/multinode-103579-m02/id_rsa Username:docker}
	I0930 20:30:24.294148   43504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 20:30:24.308476   43504 status.go:176] multinode-103579-m02 status: &{Name:multinode-103579-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0930 20:30:24.308516   43504 status.go:174] checking status of multinode-103579-m03 ...
	I0930 20:30:24.308855   43504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 20:30:24.308902   43504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 20:30:24.324209   43504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43407
	I0930 20:30:24.324699   43504 main.go:141] libmachine: () Calling .GetVersion
	I0930 20:30:24.325165   43504 main.go:141] libmachine: Using API Version  1
	I0930 20:30:24.325184   43504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 20:30:24.325548   43504 main.go:141] libmachine: () Calling .GetMachineName
	I0930 20:30:24.325741   43504 main.go:141] libmachine: (multinode-103579-m03) Calling .GetState
	I0930 20:30:24.327455   43504 status.go:371] multinode-103579-m03 host status = "Stopped" (err=<nil>)
	I0930 20:30:24.327472   43504 status.go:384] host is not running, skipping remaining checks
	I0930 20:30:24.327479   43504 status.go:176] multinode-103579-m03 status: &{Name:multinode-103579-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
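
A sketch of the same stop-one-node flow, under the assumption (observed above) that minikube status exits non-zero, 7 in this run, while a node is deliberately stopped, so the non-zero exit is informational rather than a failure:

```go
// Sketch only: stop worker m03, then run `minikube status` and report the
// non-zero exit code that accompanies a stopped node. Profile name assumed.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	profile := "multinode-103579"

	if err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"node", "stop", "m03").Run(); err != nil {
		panic(err)
	}

	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "status").CombinedOutput()
	fmt.Print(string(out)) // per-node host/kubelet/apiserver summary, as shown above

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit is expected while a node is stopped; 7 was observed in this run.
		fmt.Println("status exit code:", exitErr.ExitCode())
	}
}
```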

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 node start m03 -v=7 --alsologtostderr
E0930 20:30:55.310799   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-103579 node start m03 -v=7 --alsologtostderr: (38.266768593s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.91s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-103579 node delete m03: (1.895588684s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.45s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (177.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-103579 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0930 20:40:55.310852   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-103579 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m57.127353676s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103579 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (177.65s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (45.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-103579
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-103579-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-103579-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (57.341817ms)

                                                
                                                
-- stdout --
	* [multinode-103579-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-103579-m02' is duplicated with machine name 'multinode-103579-m02' in profile 'multinode-103579'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-103579-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-103579-m03 --driver=kvm2  --container-runtime=crio: (44.413964995s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-103579
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-103579: exit status 80 (214.612673ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-103579 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-103579-m03 already exists in multinode-103579-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-103579-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.73s)
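
Illustrative only: the sketch below reproduces the first conflict case above, expecting the MK_USAGE rejection (exit status 14 in this run) when a new profile name collides with an existing machine name from another profile.

```go
// Sketch only: attempt to start a profile whose name collides with an
// existing machine name and check for the rejection seen in the log above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// "multinode-103579-m02" already exists as a machine inside profile "multinode-103579".
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "multinode-103579-m02",
		"--driver=kvm2", "--container-runtime=crio")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		fmt.Println("duplicate profile name rejected as expected (MK_USAGE)")
		return
	}
	fmt.Println("unexpected result:", err)
}
```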

                                                
                                    
TestScheduledStopUnix (111.5s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-464000 --memory=2048 --driver=kvm2  --container-runtime=crio
E0930 20:45:38.379721   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:45:55.315709   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-464000 --memory=2048 --driver=kvm2  --container-runtime=crio: (39.890116268s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-464000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-464000 -n scheduled-stop-464000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-464000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0930 20:46:15.562289   14875 retry.go:31] will retry after 139.845µs: open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/scheduled-stop-464000/pid: no such file or directory
I0930 20:46:15.563463   14875 retry.go:31] will retry after 140.779µs: open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/scheduled-stop-464000/pid: no such file or directory
I0930 20:46:15.564606   14875 retry.go:31] will retry after 226.807µs: open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/scheduled-stop-464000/pid: no such file or directory
I0930 20:46:15.565745   14875 retry.go:31] will retry after 250.053µs: open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/scheduled-stop-464000/pid: no such file or directory
I0930 20:46:15.566873   14875 retry.go:31] will retry after 504.423µs: open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/scheduled-stop-464000/pid: no such file or directory
I0930 20:46:15.567996   14875 retry.go:31] will retry after 819.683µs: open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/scheduled-stop-464000/pid: no such file or directory
I0930 20:46:15.569125   14875 retry.go:31] will retry after 1.140326ms: open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/scheduled-stop-464000/pid: no such file or directory
I0930 20:46:15.571342   14875 retry.go:31] will retry after 1.395645ms: open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/scheduled-stop-464000/pid: no such file or directory
I0930 20:46:15.573539   14875 retry.go:31] will retry after 2.727444ms: open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/scheduled-stop-464000/pid: no such file or directory
I0930 20:46:15.576760   14875 retry.go:31] will retry after 2.968344ms: open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/scheduled-stop-464000/pid: no such file or directory
I0930 20:46:15.579974   14875 retry.go:31] will retry after 5.275423ms: open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/scheduled-stop-464000/pid: no such file or directory
I0930 20:46:15.586161   14875 retry.go:31] will retry after 10.42362ms: open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/scheduled-stop-464000/pid: no such file or directory
I0930 20:46:15.597505   14875 retry.go:31] will retry after 8.959222ms: open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/scheduled-stop-464000/pid: no such file or directory
I0930 20:46:15.606746   14875 retry.go:31] will retry after 14.524432ms: open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/scheduled-stop-464000/pid: no such file or directory
I0930 20:46:15.621994   14875 retry.go:31] will retry after 15.290353ms: open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/scheduled-stop-464000/pid: no such file or directory
I0930 20:46:15.638280   14875 retry.go:31] will retry after 34.038624ms: open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/scheduled-stop-464000/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-464000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-464000 -n scheduled-stop-464000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-464000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-464000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-464000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-464000: exit status 7 (64.969812ms)

                                                
                                                
-- stdout --
	scheduled-stop-464000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-464000 -n scheduled-stop-464000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-464000 -n scheduled-stop-464000: exit status 7 (64.664616ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-464000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-464000
--- PASS: TestScheduledStopUnix (111.50s)
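
The scheduled-stop flags exercised above can be driven directly; below is a minimal sketch that schedules a stop, inspects TimeToStop, and cancels the schedule, using this run's profile name as a placeholder:

```go
// Sketch only: schedule a stop, check the reported TimeToStop, then cancel,
// mirroring the flags used in the run above.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("(non-zero exit: %v)\n", err)
	}
	return string(out)
}

func main() {
	profile := "scheduled-stop-464000"

	run("stop", "-p", profile, "--schedule", "5m")                      // schedule a stop in 5 minutes
	fmt.Print(run("status", "--format={{.TimeToStop}}", "-p", profile)) // time remaining until the stop fires
	run("stop", "-p", profile, "--cancel-scheduled")                    // cancel it again
	fmt.Print(run("status", "--format={{.Host}}", "-p", profile))       // host should still be Running
}
```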

                                                
                                    
TestRunningBinaryUpgrade (196.51s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.829817047 start -p running-upgrade-456540 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.829817047 start -p running-upgrade-456540 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m47.479323975s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-456540 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-456540 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m25.611718097s)
helpers_test.go:175: Cleaning up "running-upgrade-456540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-456540
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-456540: (1.153526411s)
--- PASS: TestRunningBinaryUpgrade (196.51s)
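
A sketch of the in-place upgrade flow above: start the profile with the older release, then re-run start on the same profile with the binary under test. The /tmp path is a per-run temp file and purely illustrative here.

```go
// Sketch only: drive the running-binary upgrade from the log above with the
// same two start invocations. The old-binary path is a per-run temp file.
package main

import (
	"os"
	"os/exec"
)

func run(binary string, args ...string) {
	cmd := exec.Command(binary, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	profile := "running-upgrade-456540"

	// Old release first (temp-file path copied from this run; it differs every time).
	run("/tmp/minikube-v1.26.0.829817047",
		"start", "-p", profile, "--memory=2200", "--vm-driver=kvm2", "--container-runtime=crio")

	// Then the binary under test re-runs `start` against the same profile.
	run("out/minikube-linux-amd64",
		"start", "-p", profile, "--memory=2200", "--alsologtostderr", "-v=1",
		"--driver=kvm2", "--container-runtime=crio")
}
```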

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-592556 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-592556 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (84.362174ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-592556] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (94.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-592556 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-592556 --driver=kvm2  --container-runtime=crio: (1m33.756975715s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-592556 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (94.01s)

                                                
                                    
TestNetworkPlugins/group/false (2.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-207733 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-207733 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (100.842012ms)

                                                
                                                
-- stdout --
	* [false-207733] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 20:47:29.807958   51154 out.go:345] Setting OutFile to fd 1 ...
	I0930 20:47:29.808072   51154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:47:29.808082   51154 out.go:358] Setting ErrFile to fd 2...
	I0930 20:47:29.808086   51154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 20:47:29.808245   51154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-7672/.minikube/bin
	I0930 20:47:29.808855   51154 out.go:352] Setting JSON to false
	I0930 20:47:29.809819   51154 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5393,"bootTime":1727723857,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 20:47:29.809917   51154 start.go:139] virtualization: kvm guest
	I0930 20:47:29.811920   51154 out.go:177] * [false-207733] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 20:47:29.813251   51154 out.go:177]   - MINIKUBE_LOCATION=19736
	I0930 20:47:29.813252   51154 notify.go:220] Checking for updates...
	I0930 20:47:29.815498   51154 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 20:47:29.816663   51154 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-7672/kubeconfig
	I0930 20:47:29.817849   51154 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-7672/.minikube
	I0930 20:47:29.819080   51154 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 20:47:29.820479   51154 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 20:47:29.822517   51154 config.go:182] Loaded profile config "NoKubernetes-592556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:47:29.822680   51154 config.go:182] Loaded profile config "force-systemd-env-618322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:47:29.822818   51154 config.go:182] Loaded profile config "offline-crio-579164": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 20:47:29.822947   51154 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 20:47:29.859501   51154 out.go:177] * Using the kvm2 driver based on user configuration
	I0930 20:47:29.861008   51154 start.go:297] selected driver: kvm2
	I0930 20:47:29.861029   51154 start.go:901] validating driver "kvm2" against <nil>
	I0930 20:47:29.861041   51154 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 20:47:29.863032   51154 out.go:201] 
	W0930 20:47:29.864498   51154 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0930 20:47:29.865906   51154 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-207733 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-207733

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-207733

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-207733

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-207733

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-207733

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-207733

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-207733

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-207733

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-207733

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-207733

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-207733

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-207733" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-207733" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-207733" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-207733" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-207733" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-207733" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-207733" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-207733" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-207733" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-207733" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-207733" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-207733

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207733"

                                                
                                                
----------------------- debugLogs end: false-207733 [took: 2.739308667s] --------------------------------
helpers_test.go:175: Cleaning up "false-207733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-207733
--- PASS: TestNetworkPlugins/group/false (2.98s)
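Editor's note: every debugLogs probe above reports the false-207733 profile/context as missing, apparently because no cluster was ever started under that profile before cleanup. A minimal sketch of reproducing the two failure modes seen above, assuming (as elsewhere in this report) that the host probes go through minikube ssh and the k8s probes through kubectl --context; the profile name is only illustrative:

    # a profile that was never started does not appear in the list
    out/minikube-linux-amd64 profile list

    # host-level probes against a missing profile should print the same
    # "Profile ... not found" hint seen in the sections above
    out/minikube-linux-amd64 ssh -p false-207733 "ip a s"

    # kubectl probes fail because the kubeconfig has no such context
    kubectl --context false-207733 get pods -A   # -> error: context "false-207733" does not exist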

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.66s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (143.03s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2576984097 start -p stopped-upgrade-241258 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0930 20:48:28.935764   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2576984097 start -p stopped-upgrade-241258 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m33.208373042s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2576984097 -p stopped-upgrade-241258 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2576984097 -p stopped-upgrade-241258 stop: (1.458581859s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-241258 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-241258 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.35908855s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (143.03s)
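Editor's note: the upgrade path exercised here is start-with-old-release, stop, then start-with-current-build on the same profile. A condensed sketch of that sequence, using the commands recorded above; the /tmp/minikube-v1.26.0.* path appears to be a temporary copy of the v1.26.0 release binary fetched by the test:

    # 1. create the cluster with the old (v1.26.0) binary
    /tmp/minikube-v1.26.0.2576984097 start -p stopped-upgrade-241258 --memory=2200 --vm-driver=kvm2 --container-runtime=crio

    # 2. stop it with the same old binary
    /tmp/minikube-v1.26.0.2576984097 -p stopped-upgrade-241258 stop

    # 3. restart the stopped cluster with the binary under test; this step must succeed for the test to pass
    out/minikube-linux-amd64 start -p stopped-upgrade-241258 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio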

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (70.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-592556 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-592556 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m9.600905624s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-592556 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-592556 status -o json: exit status 2 (253.234955ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-592556","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-592556
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-592556: (1.051869304s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (70.91s)
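Editor's note: with --no-kubernetes the VM comes up but kubelet and the API server stay stopped, so "status -o json" exits non-zero; as the PASS above shows, the test tolerates that exit code and inspects the reported component states instead. A sketch of the same check, reusing the commands above:

    out/minikube-linux-amd64 start -p NoKubernetes-592556 --no-kubernetes --driver=kvm2 --container-runtime=crio

    # exits non-zero because Kubelet/APIServer are "Stopped"; read the JSON rather than the exit code
    out/minikube-linux-amd64 -p NoKubernetes-592556 status -o json || true
    # -> {"Name":"NoKubernetes-592556","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped",...}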

                                                
                                    
x
+
TestNoKubernetes/serial/Start (43.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-592556 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-592556 --no-kubernetes --driver=kvm2  --container-runtime=crio: (43.566054676s)
--- PASS: TestNoKubernetes/serial/Start (43.57s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-241258
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-592556 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-592556 "sudo systemctl is-active --quiet service kubelet": exit status 1 (253.024571ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)
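Editor's note: the non-zero ssh exit above is the expected outcome here: systemctl is-active returns a non-zero status (3 in this run) when the kubelet unit is not active, which is exactly what a --no-kubernetes profile should report. Sketch of the check, using the command recorded above:

    # exit status 0 would mean kubelet is active (a failure for this test); non-zero means not running
    out/minikube-linux-amd64 ssh -p NoKubernetes-592556 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # -> 1 from the ssh wrapper; the remote systemctl itself exited with status 3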

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
I0930 20:50:55.812967   14875 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0930 20:50:55.813050   14875 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0930 20:50:55.844803   14875 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0930 20:50:55.844843   14875 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0930 20:50:55.844924   14875 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0930 20:50:55.844957   14875 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1716559435/002/docker-machine-driver-kvm2
I0930 20:50:55.877262   14875 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1716559435/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4670640 0x4670640 0x4670640 0x4670640 0x4670640 0x4670640 0x4670640] Decompressors:map[bz2:0xc00046f7d0 gz:0xc00046f7d8 tar:0xc00046f6d0 tar.bz2:0xc00046f700 tar.gz:0xc00046f710 tar.xz:0xc00046f730 tar.zst:0xc00046f790 tbz2:0xc00046f700 tgz:0xc00046f710 txz:0xc00046f730 tzst:0xc00046f790 xz:0xc00046f7e0 zip:0xc00046f840 zst:0xc00046f7e8] Getters:map[file:0xc001ce2a30 http:0xc0006ae730 https:0xc0006ae780] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0930 20:50:55.877306   14875 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1716559435/002/docker-machine-driver-kvm2
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.026352688s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.92s)
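Editor's note: the install.go/download.go lines interleaved above appear to belong to a concurrently running driver-install/update test (note the /tmp/TestKVMDriverInstallOrUpdate... paths), not to the profile listing itself, which is just:

    out/minikube-linux-amd64 profile list
    out/minikube-linux-amd64 profile list --output=json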

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-592556
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-592556: (1.822944103s)
--- PASS: TestNoKubernetes/serial/Stop (1.82s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (47.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-592556 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-592556 --driver=kvm2  --container-runtime=crio: (47.978290303s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (47.98s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-592556 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-592556 "sudo systemctl is-active --quiet service kubelet": exit status 1 (220.011663ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
x
+
TestPause/serial/Start (98.02s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-617008 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-617008 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m38.023625226s)
--- PASS: TestPause/serial/Start (98.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (120.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-207733 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-207733 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (2m0.185685777s)
--- PASS: TestNetworkPlugins/group/auto/Start (120.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-207733 "pgrep -a kubelet"
I0930 20:54:31.734794   14875 config.go:182] Loaded profile config "auto-207733": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-207733 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qf4v8" [911a21a0-0370-40da-9162-c4468d5cebaf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qf4v8" [911a21a0-0370-40da-9162-c4468d5cebaf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004739233s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-207733 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-207733 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-207733 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
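Editor's note: the DNS, Localhost and HairPin subtests above (and their counterparts in the other CNI groups below) all exec into the netcat deployment created by NetCatPod. A sketch of the three probes, taken from the commands recorded above; the port and service name are defined by testdata/netcat-deployment.yaml:

    # DNS: the in-cluster resolver must answer for kubernetes.default
    kubectl --context auto-207733 exec deployment/netcat -- nslookup kubernetes.default

    # Localhost: from inside the pod, port 8080 is reachable via 127.0.0.1
    kubectl --context auto-207733 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"

    # HairPin: the pod can reach itself through the netcat service name (hairpin traffic)
    kubectl --context auto-207733 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"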

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (61.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-207733 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-207733 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m1.141170637s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (61.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (100.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-207733 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-207733 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m40.063655085s)
--- PASS: TestNetworkPlugins/group/calico/Start (100.06s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (115.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-207733 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-207733 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m55.969705538s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (115.97s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-k6wfc" [415bc654-34f8-43ba-914b-64edd050f8ae] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004647278s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-207733 "pgrep -a kubelet"
I0930 20:55:58.498971   14875 config.go:182] Loaded profile config "kindnet-207733": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-207733 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fsw8v" [ff596061-b71d-4aee-93b2-9083de4231f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fsw8v" [ff596061-b71d-4aee-93b2-9083de4231f3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.006118074s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-207733 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-207733 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-207733 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (83.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-207733 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-207733 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m23.559364435s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (83.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-98fd7" [8d6031d5-34a0-4f73-91cc-2ca408894a7c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006126968s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (88.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-207733 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-207733 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m28.144876427s)
--- PASS: TestNetworkPlugins/group/flannel/Start (88.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-207733 "pgrep -a kubelet"
I0930 20:56:40.801778   14875 config.go:182] Loaded profile config "calico-207733": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-207733 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5592k" [17137eec-60d0-4275-83aa-dfd548c7e75e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5592k" [17137eec-60d0-4275-83aa-dfd548c7e75e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004563998s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-207733 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-207733 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-207733 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-207733 "pgrep -a kubelet"
I0930 20:56:55.506228   14875 config.go:182] Loaded profile config "custom-flannel-207733": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-207733 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hq4kj" [229fc709-b510-41e8-8efa-3ad09de6abfa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hq4kj" [229fc709-b510-41e8-8efa-3ad09de6abfa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.005793822s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-207733 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-207733 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-207733 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (64.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-207733 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-207733 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m4.920739661s)
--- PASS: TestNetworkPlugins/group/bridge/Start (64.92s)
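Editor's note: the network-plugin groups in this section differ only in how the CNI is selected on the start line; everything else (memory, wait flags, driver, runtime) is identical. Collected from the Start commands recorded above, with the bridge variant shown in full as a representative example:

    # auto               -> (no CNI flag)
    # kindnet            -> --cni=kindnet
    # calico             -> --cni=calico
    # custom-flannel     -> --cni=testdata/kube-flannel.yaml
    # enable-default-cni -> --enable-default-cni=true
    # flannel            -> --cni=flannel
    # bridge             -> --cni=bridge
    out/minikube-linux-amd64 start -p bridge-207733 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 --container-runtime=crio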

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-207733 "pgrep -a kubelet"
I0930 20:57:50.815292   14875 config.go:182] Loaded profile config "enable-default-cni-207733": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-207733 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context enable-default-cni-207733 replace --force -f testdata/netcat-deployment.yaml: (1.020739578s)
I0930 20:57:51.855128   14875 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z2546" [2c8fba3d-f1df-4159-976c-d881d9e8c7ac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-z2546" [2c8fba3d-f1df-4159-976c-d881d9e8c7ac] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005488333s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.06s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-207733 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-207733 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-207733 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-qcgpp" [374814fb-ece6-4f9f-88cb-88e1a1a45f16] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00843473s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-207733 "pgrep -a kubelet"
I0930 20:58:14.671774   14875 config.go:182] Loaded profile config "flannel-207733": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (15.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-207733 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-htx4l" [7b6d9b12-fbd6-4ad9-a509-ecd614f8dba6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-htx4l" [7b6d9b12-fbd6-4ad9-a509-ecd614f8dba6] Running
E0930 20:58:28.935819   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 15.003759576s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (15.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-207733 "pgrep -a kubelet"
I0930 20:58:15.204080   14875 config.go:182] Loaded profile config "bridge-207733": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (14.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-207733 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8slr2" [a925e148-2ad9-47dc-9676-fb1617d87004] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8slr2" [a925e148-2ad9-47dc-9676-fb1617d87004] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.004868331s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (14.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (100.47s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-997816 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-997816 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m40.466564008s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (100.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-207733 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-207733 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-207733 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-207733 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-207733 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-207733 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)
E0930 21:28:28.935859   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/functional-750630/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (64.5s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-256103 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-256103 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m4.496401036s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (64.50s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (118.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-291511 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0930 20:59:31.997949   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/auto-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:59:32.004443   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/auto-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:59:32.015900   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/auto-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:59:32.037364   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/auto-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:59:32.078835   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/auto-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:59:32.160333   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/auto-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:59:32.321979   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/auto-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:59:32.643722   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/auto-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:59:33.285349   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/auto-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:59:34.567022   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/auto-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:59:37.128772   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/auto-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 20:59:42.250702   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/auto-207733/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-291511 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m58.418457174s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (118.42s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (12.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-256103 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f475e8fd-ff87-4e48-b6ff-041b74676bfc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0930 20:59:52.492281   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/auto-207733/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [f475e8fd-ff87-4e48-b6ff-041b74676bfc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.005217759s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-256103 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-256103 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-256103 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-997816 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2f0eedc3-2026-4ba3-ac8e-784be7e51dbf] Pending
helpers_test.go:344: "busybox" [2f0eedc3-2026-4ba3-ac8e-784be7e51dbf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2f0eedc3-2026-4ba3-ac8e-784be7e51dbf] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004093325s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-997816 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-997816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0930 21:00:12.973799   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/auto-207733/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-997816 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-291511 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [34406fdf-7b58-4457-ae9f-712885f7dd29] Pending
helpers_test.go:344: "busybox" [34406fdf-7b58-4457-ae9f-712885f7dd29] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [34406fdf-7b58-4457-ae9f-712885f7dd29] Running
E0930 21:00:52.286367   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kindnet-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:00:52.292796   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kindnet-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:00:52.304173   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kindnet-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:00:52.325679   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kindnet-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:00:52.367118   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kindnet-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:00:52.448594   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kindnet-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:00:52.610849   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kindnet-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:00:52.932717   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kindnet-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:00:53.574802   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kindnet-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:00:53.935133   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/auto-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:00:54.856770   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kindnet-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:00:55.310665   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/addons-857381/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:00:57.418216   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/kindnet-207733/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 12.004801781s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-291511 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-291511 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-291511 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (668.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-256103 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-256103 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (11m8.305911887s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-256103 -n embed-certs-256103
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (668.56s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (574.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-997816 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0930 21:02:51.838958   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/enable-default-cni-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:02:51.845383   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/enable-default-cni-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:02:51.856783   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/enable-default-cni-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:02:51.878159   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/enable-default-cni-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:02:51.919633   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/enable-default-cni-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:02:52.001361   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/enable-default-cni-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:02:52.162936   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/enable-default-cni-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:02:52.485196   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/enable-default-cni-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:02:53.127382   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/enable-default-cni-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:02:54.409221   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/enable-default-cni-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:02:56.528538   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/calico-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:02:56.971420   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/enable-default-cni-207733/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-997816 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (9m33.914263583s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-997816 -n no-preload-997816
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (574.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (537.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-291511 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0930 21:03:32.817154   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/enable-default-cni-207733/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-291511 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (8m56.919824232s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-291511 -n default-k8s-diff-port-291511
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (537.17s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (6.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-621406 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-621406 --alsologtostderr -v=3: (6.302752167s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (6.30s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-621406 -n old-k8s-version-621406
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-621406 -n old-k8s-version-621406: exit status 7 (61.003936ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-621406 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (46.73s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-921796 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0930 21:27:51.839297   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/enable-default-cni-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:28:08.419490   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/flannel-207733/client.crt: no such file or directory" logger="UnhandledError"
E0930 21:28:15.484216   14875 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-7672/.minikube/profiles/bridge-207733/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-921796 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (46.730248078s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.73s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-921796 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-921796 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.017502354s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-921796 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-921796 --alsologtostderr -v=3: (10.509246503s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.51s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-921796 -n newest-cni-921796
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-921796 -n newest-cni-921796: exit status 7 (70.117953ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-921796 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (35.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-921796 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-921796 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (35.280673548s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-921796 -n newest-cni-921796
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.52s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-921796 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-921796 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-921796 -n newest-cni-921796
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-921796 -n newest-cni-921796: exit status 2 (229.750296ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-921796 -n newest-cni-921796
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-921796 -n newest-cni-921796: exit status 2 (226.094163ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-921796 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-921796 -n newest-cni-921796
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-921796 -n newest-cni-921796
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.28s)

                                                
                                    

Test skip (37/311)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.1/cached-images 0
15 TestDownloadOnly/v1.31.1/binaries 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
37 TestAddons/parallel/Olm 0
47 TestDockerFlags 0
50 TestDockerEnvContainerd 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
104 TestFunctional/parallel/DockerEnv 0
105 TestFunctional/parallel/PodmanEnv 0
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
153 TestGvisorAddon 0
175 TestImageBuild 0
202 TestKicCustomNetwork 0
203 TestKicExistingNetwork 0
204 TestKicCustomSubnet 0
205 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
253 TestNetworkPlugins/group/kubenet 2.93
262 TestNetworkPlugins/group/cilium 3.77
273 TestStartStop/group/disable-driver-mounts 0.15
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:817: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-207733 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-207733

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-207733

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-207733

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-207733

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-207733

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-207733

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-207733

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-207733

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-207733

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-207733

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-207733

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-207733" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-207733" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-207733" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-207733" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-207733" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-207733" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-207733" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-207733" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-207733" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-207733" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-207733" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

>>> host: kubelet daemon config:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

>>> k8s: kubelet logs:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-207733

>>> host: docker daemon status:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

>>> host: docker daemon config:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

>>> host: docker system info:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

>>> host: cri-docker daemon status:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

>>> host: cri-docker daemon config:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

>>> host: cri-dockerd version:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

>>> host: containerd daemon status:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

>>> host: containerd daemon config:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

>>> host: containerd config dump:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

>>> host: crio daemon status:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

>>> host: crio daemon config:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

>>> host: /etc/crio:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

>>> host: crio config:
* Profile "kubenet-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207733"

----------------------- debugLogs end: kubenet-207733 [took: 2.782703984s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-207733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-207733
--- SKIP: TestNetworkPlugins/group/kubenet (2.93s)

x
+
TestNetworkPlugins/group/cilium (3.77s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-207733 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-207733

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-207733

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-207733

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-207733

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-207733

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-207733

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-207733

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-207733

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-207733

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-207733

>>> host: /etc/nsswitch.conf:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: /etc/hosts:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: /etc/resolv.conf:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-207733

>>> host: crictl pods:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: crictl containers:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> k8s: describe netcat deployment:
error: context "cilium-207733" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-207733" does not exist

>>> k8s: netcat logs:
error: context "cilium-207733" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-207733" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-207733" does not exist

>>> k8s: coredns logs:
error: context "cilium-207733" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-207733" does not exist

>>> k8s: api server logs:
error: context "cilium-207733" does not exist

>>> host: /etc/cni:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: ip a s:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: ip r s:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: iptables-save:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: iptables table nat:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-207733

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-207733

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-207733" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-207733" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-207733

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-207733

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-207733" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-207733" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-207733" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-207733" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-207733" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: kubelet daemon config:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> k8s: kubelet logs:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-207733

>>> host: docker daemon status:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: docker daemon config:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: docker system info:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: cri-docker daemon status:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: cri-docker daemon config:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: cri-dockerd version:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: containerd daemon status:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: containerd daemon config:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: containerd config dump:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: crio daemon status:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: crio daemon config:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: /etc/crio:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

>>> host: crio config:
* Profile "cilium-207733" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207733"

----------------------- debugLogs end: cilium-207733 [took: 3.616949635s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-207733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-207733
--- SKIP: TestNetworkPlugins/group/cilium (3.77s)

x
+
TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-741890" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-741890
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
